Archives For market definition

Last week the editorial board of the Washington Post penned an excellent editorial responding to the European Commission’s announcement of its decision in its Google Shopping investigation. Here’s the key language from the editorial:

Whether the demise of any of [the complaining comparison shopping sites] is specifically traceable to Google, however, is not so clear. Also unclear is the aggregate harm from Google’s practices to consumers, as opposed to the unlucky companies. Birkenstock-seekers may well prefer to see a Google-generated list of vendors first, instead of clicking around to other sites…. Those who aren’t happy anyway have other options. Indeed, the rise of comparison shopping on giants such as Amazon and eBay makes concerns that Google might exercise untrammeled power over e-commerce seem, well, a bit dated…. Who knows? In a few years we might be talking about how Facebook leveraged its 2 billion users to disrupt the whole space.

That’s actually a pretty thorough, if succinct, summary of the basic problems with the Commission’s case (based on its PR and Factsheet, at least; it hasn’t released the full decision yet).

I’ll have more to say on the decision in due course, but for now I want to elaborate on two of the points raised by the WaPo editorial board, both in service of its crucial rejoinder to the Commission that “Also unclear is the aggregate harm from Google’s practices to consumers, as opposed to the unlucky companies.”

First, the WaPo editorial board points out that:

Birkenstock-seekers may well prefer to see a Google-generated list of vendors first, instead of clicking around to other sites.

It is undoubtedly true that users “may well prefer to see a Google-generated list of vendors first.” That preference is also crucial to understanding the changes in Google’s search results page that have given rise to the current raft of complaints.

As I noted in a Wall Street Journal op-ed two years ago:

It’s a mistake to consider “general search” and “comparison shopping” or “product search” to be distinct markets.

From the moment it was technologically feasible to do so, Google has been adapting its traditional search results—that familiar but long since vanished page of 10 blue links—to offer more specialized answers to users’ queries. Product search, which is what is at issue in the EU complaint, is the next iteration in this trend.

Internet users today seek information from myriad sources: Informational sites (Wikipedia and the Internet Movie Database); review sites (Yelp and TripAdvisor); retail sites (Amazon and eBay); and social-media sites (Facebook and Twitter). What do these sites have in common? They prioritize certain types of data over others to improve the relevance of the information they provide.

“Prioritization” of Google’s own shopping results, however, is the core problem for the Commission:

Google has systematically given prominent placement to its own comparison shopping service: when a consumer enters a query into the Google search engine in relation to which Google’s comparison shopping service wants to show results, these are displayed at or near the top of the search results. (Emphasis in original).

But this sort of prioritization is the norm for all search, social media, e-commerce and similar platforms. And this shouldn’t be a surprise: The value of these platforms to the user is dependent upon their ability to sort the wheat from the chaff of the now immense amount of information coursing about the Web.

As my colleagues and I noted in a paper responding to a methodologically questionable report by Tim Wu and Yelp leveling analogous “search bias” charges in the context of local search results:

Google is a vertically integrated company that offers general search, but also a host of other products…. With its well-developed algorithm and wide range of products, it is hardly surprising that Google can provide not only direct answers to factual questions, but also a wide range of its own products and services that meet users’ needs. If consumers choose Google not randomly, but precisely because they seek to take advantage of the direct answers and other options that Google can provide, then removing the sort of “bias” alleged by [complainants] would affirmatively hurt, not help, these users. (Emphasis added).

And as Josh Wright noted in an earlier paper responding to yet another set of such “search bias” charges (in that case leveled in a similarly methodologically questionable report by Benjamin Edelman and Benjamin Lockwood):

[I]t is critical to recognize that bias alone is not evidence of competitive harm and it must be evaluated in the appropriate antitrust economic context of competition and consumers, rather than individual competitors and websites. Edelman & Lockwood’s analysis provides a useful starting point for describing how search engines differ in their referrals to their own content. However, it is not useful from an antitrust policy perspective because it erroneously—and contrary to economic theory and evidence—presumes natural and procompetitive product differentiation in search rankings to be inherently harmful. (Emphasis added).

We’ll have to see what kind of analysis the Commission relies upon in its decision to reach its conclusion that prioritization is an antitrust problem, but there is reason to be skeptical that it will turn out to be compelling. The Commission states in its PR that:

The evidence shows that consumers click far more often on results that are more visible, i.e. the results appearing higher up in Google’s search results. Even on a desktop, the ten highest-ranking generic search results on page 1 together generally receive approximately 95% of all clicks on generic search results (with the top result receiving about 35% of all the clicks). The first result on page 2 of Google’s generic search results receives only about 1% of all clicks. This cannot just be explained by the fact that the first result is more relevant, because evidence also shows that moving the first result to the third rank leads to a reduction in the number of clicks by about 50%. The effects on mobile devices are even more pronounced given the much smaller screen size.

This means that by giving prominent placement only to its own comparison shopping service and by demoting competitors, Google has given its own comparison shopping service a significant advantage compared to rivals. (Emphasis added).

Whatever truth there is in the characterization that placement matters more than relevance in influencing user behavior, the evidence the Commission cites to demonstrate it doesn’t seem applicable to what’s happening on Google’s search results page now.

Most crucially, the evidence offered by the Commission refers only to how placement affects clicks on “generic search results,” and glosses over the fact that the “prominent placement” of Google’s “results” is a difference not only in position but also in the type of result offered.

Google Shopping results (like many of its other “vertical results” and direct answers) are very different from the 10 blue links of old. These “universal search” results are, for one thing, actual answers rather than mere links to other sites. They are also more visually rich, and more attractively and clearly displayed.

Ironically, Tim Wu and Yelp use the claim that users click less often on Google’s universal search results to support their contention that increased relevance doesn’t explain Google’s prioritization of its own content. Yet, as we note in our response to their study:

[I]f a consumer is using a search engine in order to find a direct answer to a query rather than a link to another site to answer it, click-through would actually represent a decrease in consumer welfare, not an increase.

In fact, the study fails to incorporate this dynamic even though it is precisely what the authors claim the study is measuring.

Further, as the WaPo editorial intimates, these universal search results (including Google Shopping results) are quite plausibly more valuable to users. As even Tim Wu and Yelp note:

No one truly disagrees that universal search, in concept, can be an important innovation that can serve consumers.

Google sees it exactly this way, of course. Here’s Tim Wu and Yelp again:

According to Google, a principal difference between the earlier cases and its current conduct is that universal search represents a pro-competitive, user-serving innovation. By deploying universal search, Google argues, it has made search better. As Eric Schmidt argues, “if we know the answer it is better for us to answer that question so [the user] doesn’t have to click anywhere, and in that sense we… use data sources that are our own because we can’t engineer it any other way.”

Of course, in this case, one would expect fewer clicks to correlate with higher value to users — precisely the opposite of the claim made by Tim Wu and Yelp, which is the surest sign that their study is faulty.

But the Commission, at least according to the evidence cited in its PR, doesn’t even seem to measure the relative value of the very different presentations of information at all, instead resting on assertions rooted in the irrelevant difference in user propensity to click on generic (10 blue links) search results depending on placement.

Add to this Pinar Akman’s important point that Google Shopping “results” aren’t necessarily search results at all, but paid advertising:

[O]nce one appreciates the fact that Google’s shopping results are simply ads for products and Google treats all ads with the same ad-relevant algorithm and all organic results with the same organic-relevant algorithm, the Commission’s order becomes impossible to comprehend. Is the Commission imposing on Google a duty to treat non-sponsored results in the same way that it treats sponsored results? If so, does this not provide an unfair advantage to comparison shopping sites over, for example, Google’s advertising partners as well as over Amazon, eBay, various retailers, etc…?

Randy Picker also picks up on this point:

But those Google shopping boxes are ads, Picker told me. “I can’t imagine what they’re thinking,” he said. “Google is in the advertising business. That’s how it makes its money. It has no obligation to put other people’s ads on its website.”

The bottom line here is that the WaPo editorial board does a better job characterizing the actual, relevant market dynamics in a single sentence than the Commission seems to have done in its lengthy releases summarizing its decision following seven full years of investigation.

The second point made by the WaPo editorial board to which I want to draw attention is equally important:

Those who aren’t happy anyway have other options. Indeed, the rise of comparison shopping on giants such as Amazon and eBay makes concerns that Google might exercise untrammeled power over e-commerce seem, well, a bit dated…. Who knows? In a few years we might be talking about how Facebook leveraged its 2 billion users to disrupt the whole space.

The Commission dismisses this argument in its Factsheet:

The Commission Decision concerns the effect of Google’s practices on comparison shopping markets. These offer a different service to merchant platforms, such as Amazon and eBay. Comparison shopping services offer a tool for consumers to compare products and prices online and find deals from online retailers of all types. By contrast, they do not offer the possibility for products to be bought on their site, which is precisely the aim of merchant platforms. Google’s own commercial behaviour reflects these differences – merchant platforms are eligible to appear in Google Shopping whereas rival comparison shopping services are not.

But the reality is that “comparison shopping,” just like “general search,” is just one technology among many for serving information and ads to consumers online. Defining the relevant market or limiting the definition of competition in terms of the particular mechanism that Google (or Foundem, or Amazon, or Facebook…) happens to use doesn’t reflect the extent of substitutability between these different mechanisms.

Properly defined, the market in which Google competes online is not search, but something more like online “matchmaking” between advertisers, retailers and consumers. And this market is enormously competitive. The same goes for comparison shopping.

And the fact that Amazon and eBay “offer the possibility for products to be bought on their site” doesn’t take away from the fact that they also “offer a tool for consumers to compare products and prices online and find deals from online retailers of all types.” Not only do these sites contain enormous amounts of valuable (and well-presented) information about products, including product comparisons and consumer reviews, but they also actually offer comparisons among retailers. Indeed, fifty percent of the items sold through Amazon’s platform, for example, are sold by third-party retailers — the same sort of retailers that might also show up on a comparison shopping site.

More importantly, though, as the WaPo editorial rightly notes, “[t]hose who aren’t happy anyway have other options.” Google just isn’t the indispensable gateway to the Internet (and certainly not to shopping on the Internet) that the Commission seems to think it is.

Today over half of product searches in the US start on Amazon. The majority of web page referrals come from Facebook. Yelp’s most engaged users now access it via its app (which has seen more than 3x growth in the past five years). And a staggering 40 percent of mobile browsing on both Android and iOS now takes place inside the Facebook app.

Then there are “closed” platforms like the iTunes store and innumerable other apps that handle copious search traffic (including shopping-related traffic) yet apparently don’t figure in the Commission’s analysis at all.

In fact, billions of users reach millions of companies every day through direct browser navigation, social media, apps, email links, review sites, blogs, and countless other means — all without once touching Google.com. So-called “dark social” interactions (email, text messages, and IMs) drive huge amounts of some of the most valuable traffic on the Internet, in fact.

All of this, in turn, has led to a competitive scramble to roll out completely new technologies to meet consumers’ informational (and merchants’ advertising) needs. The already-arriving swarm of VR, chatbots, digital assistants, smart-home devices, and more will offer even more interfaces besides Google through which consumers can reach their favorite online destinations.

The point is this: Google’s competitors, even as they complain that the world is evolving around them, don’t need to rely on Google. That they may choose to do so does not saddle Google with an obligation to ensure that they can always do so.

Antitrust laws — in Europe, no less than in the US — don’t require Google or any other firm to make life easier for competitors. That’s especially true when doing so would come at the cost of consumer-welfare-enhancing innovations. The Commission doesn’t seem to have grasped this fundamental point, however.

The WaPo editorial board gets it, though:

The immense size and power of all Internet giants are a legitimate focus for the antitrust authorities on both sides of the Atlantic. Brussels vs. Google, however, seems to be a case of punishment without crime.

On Thursday, March 30, Friday, March 31, and Monday, April 3, Truth on the Market and the International Center for Law and Economics presented a blog symposium — Agricultural and Biotech Mergers: Implications for Antitrust Law and Economics in Innovative Industries — discussing three proposed agricultural/biotech industry mergers awaiting judgment by antitrust authorities around the globe. These proposed mergers — Bayer/Monsanto, Dow/DuPont and ChemChina/Syngenta — present a host of fascinating issues, many of which go to the core of merger enforcement in innovative industries — and antitrust law and economics more broadly.

The big issue for the symposium participants was innovation (as it was for the European Commission, which cleared the Dow/DuPont merger last week, subject to conditions, one of which related to the firms’ R&D activities).

Critics of the mergers, as currently proposed, asserted that the increased concentration arising from the “Big 6” Ag-biotech firms consolidating into the Big 4 could reduce innovation competition by (1) eliminating parallel paths of research and development (Moss); (2) creating highly integrated technology/traits/seeds/chemicals platforms that erect barriers to new entry platforms (Moss); (3) exploiting eventual network effects that may result from the shift towards data-driven agriculture to block new entry in input markets (Lianos); or (4) increasing incentives to refuse to license, impose discriminatory restrictions in technology licensing agreements, or tacitly “agree” not to compete (Moss).

Rather than fixating on horizontal market share, proponents of the mergers argued that innovative industries are often marked by disruptions and that investment in innovation is an important signal of competition (Manne). An evaluation of the overall level of innovation should include not only the additional economies of scale and scope of the merged firms, but also advancements made by more nimble, less risk-averse biotech companies and smaller firms, whose innovations the larger firms can incentivize through licensing or M&A (Shepherd). In fact, increased efficiency created by economies of scale and scope can make funds available to source innovation outside of the large firms (Shepherd).

In addition, innovation analysis must also account for the intricately interwoven nature of agricultural technology across seeds and traits, crop protection, and, now, digital farming (Sykuta). Combined product portfolios generate more data to analyze, resulting in increased data-driven value for farmers and more efficiently targeted R&D resources (Sykuta).

While critics voiced concerns over such platforms erecting barriers to entry, markets are contestable to the extent that incumbents are incentivized to compete (Russell). It is worth noting that certain industries with high barriers to entry or exit, significant sunk costs, and significant cost disadvantages for new entrants (including automobiles, wireless service, and cable networks) have seen their prices decrease substantially relative to inflation over the last 20 years — even as concentration has increased (Russell). Not coincidentally, product innovation in these industries, as in ag-biotech, has been high.

Ultimately, assessing the likely effects of each merger using static measures of market structure is arguably unreliable or irrelevant in dynamic markets with high levels of innovation (Manne).

Regarding patents, critics were skeptical that combining the patent portfolios of the merging companies would offer benefits beyond those arising from cross-licensing, and worried that the combined portfolios would serve to raise rivals’ costs (Ghosh). While this may be true in some cases, IP rights are probabilistic, especially in dynamic markets, as Nicolas Petit noted:

There is (i) no certainty that R&D investments will lead to commercially successful applications; (ii) no guarantee that IP rights will resist to invalidity proceedings in court; (iii) little safety to competition by other product applications which do not practice the IP but provide substitute functionality; and (iv) no inevitability that the environmental, toxicological and regulatory authorization rights that (often) accompany IP rights will not be cancelled when legal requirements change.

In spite of these uncertainties, deals such as the pending ag-biotech mergers provide managers the opportunity to evaluate and reorganize assets to maximize innovation and return on investment in such a way that would not be possible absent a merger (Sykuta). Neither party would fully place its IP and innovation pipeline on the table otherwise.

For a complete rundown of the arguments both for and against, the full archive of symposium posts from our outstanding and diverse group of scholars, practitioners and other experts is available at this link, and individual posts can be easily accessed by clicking on the authors’ names below.

We’d like to thank all of the participants for their excellent contributions!

John E. Lopatka is A. Robert Noll Distinguished Professor of Law at Penn State Law School

People need to eat. All else equal, the more food that can be produced from an acre of land, the better off they’ll be. Of course, people want to pay as little as possible for their food to boot. At heart, the antitrust analysis of the pending agribusiness mergers requires a simple assessment of their effects on food production and price. But making that assessment raises difficult questions about institutional competence.

Each of the three mergers – Dow/DuPont, ChemChina/Syngenta, and Bayer/Monsanto – involves agricultural products, such as different kinds of seeds, pesticides, and fertilizers. All of these products are inputs in the production of food – the better and cheaper are these products, the more food is produced. The array of products these firms produce invites potentially controversial market definition determinations, but these determinations are standard fare in antitrust law and economics, and conventional analysis handles them tolerably well. Each merger appears to pose overlaps in some product markets, though they seem to be relatively small parts of the firms’ businesses. Traditional merger analysis would examine these markets in properly defined geographic markets, some of which are likely international. The concern in these markets seems to be coordinated interaction, and the analysis of potential anticompetitive coordination would thus focus on concentration and entry barriers. Much could be said about the assumption that product markets perform less competitively as concentration increases, but that is an issue for others or at least another day.

More importantly for my purposes here, to the extent that any of these mergers creates concentration in a market that is competitively problematic and not likely to be cured by new entry, a fix is fairly easy. These are mergers in which asset divestiture is feasible, in which the parties seem willing to divest assets, and in which interested and qualified asset buyers are emerging. To be sure, firms may be willing to divest assets at substantial cost to appease regulators even when competitive problems are illusory, and the cost of a cure in search of an illness is a real social cost. But my concern lies elsewhere.

The parties in each of these mergers have touted innovation as a beneficial byproduct of the deal if not its raison d’être. Innovation effects have made their way into merger analysis, but not smoothly. Innovation can be a kind of efficiency, distinguished from most other efficiencies by its dynamic nature. The benefits of using a plant to its capacity are immediate: costs and prices decrease now. Any benefits of innovation will necessarily be experienced in the future, and the passage of time makes benefits both less certain and less valuable, as people prefer consumption now rather than later. The parties to these mergers in their public statements, to the extent they intend to address antitrust concerns, are implicitly asserting innovation as a defense, a kind of efficiency defense. They do not concede, of course, that their deals will be anticompetitive in any product market. But for antitrust purposes, an accelerated pace of innovation is irrelevant unless the merger appears to threaten competition.

Recognizing increased innovation as a merger defense raises all of the issues that any efficiencies defense raises, and then some.

First, can efficiencies be identified? For instance, patent portfolios can be combined, and the integration of patent rights can lower transaction costs relative to a contractual allocation of rights just as any integration can. In theory, avenues of productive research may not even be recognized until the firms’ intellectual property is combined. A merger may eliminate redundant research efforts, but identifying that which is truly duplicative is often not easy. In all, identifying efficiencies related to research and development is likely to be more difficult than identifying many other kinds of efficiencies.

Second, are the efficiencies merger-specific? The less clearly research and development efficiencies can be identified, the weaker is the claim that they cannot be achieved absent the merger. But in this respect, innovation efficiencies can be more important than most other kinds of efficiencies, because intellectual property sometimes cannot be duplicated as easily as physical property can.

Third, can innovation efficiencies be quantified? If innovation is expected to take the form of an entirely new product, such as a new pesticide, estimating its value is inherently speculative.

Fourth, when will efficiencies save a merger that would otherwise be condemned? An efficiencies defense implies a comparison between the expected harm a merger will cause and the expected benefits it will produce. Arguably those benefits have to be realized by consumers to count at all, but, in any event, a comparison between expected immediate losses of customers in an input market and expected future gains from innovation may be nearly impossible to make.

The Merger Guidelines acknowledge that innovation efficiencies can be considered and note many of the concerns just listed. The takeaway is a healthy skepticism of an innovation defense. The defense should generally fail unless the model of anticompetitive harm in product (or service) markets is dubious or the efficiency claim is unusually specific and the likely benefits substantial.

Innovation can enter merger analysis in an even more troublesome way, however: as a club rather than a shield. The Merger Guidelines contemplate that a merger may have unilateral anticompetitive effects if it results in a “reduced incentive to continue with an existing product-development effort or reduced incentive to initiate development of new products.”  The stark case is one in which a merger poses no competitive problem in a product market but would allegedly reduce innovation competition. The best evidence that the elimination of innovation competition might be a reason to oppose one or more of the agribusiness mergers is the recent decision of the European Commission approving the Dow/DuPont merger, subject to various asset divestitures. The Commission, echoing the Guidelines, concluded that the merger would significantly reduce “innovation competition for pesticides” by “[r]emoving the parties’ incentives to continue to pursue ongoing parallel innovation efforts” and by “[r]emoving the parties’ incentives to develop and bring to market new pesticides.”  The agreed upon fix requires DuPont to divest most of its research and development organization.

Enforcement claims that a merger will restrict innovation competition should be met with every bit the skepticism due defense claims that innovation efficiencies save a merger. There is nothing inconsistent in this symmetry. The benefits of innovation, though potentially immense – large enough to dwarf the immediate allocative harm from a lessening of competition in product markets – are speculative. In discounted utility terms, the expected harm will usually exceed the expected benefits, given our limited ability to predict the future. But the potential gains from innovation are immense, and unless we are confident that a merger will reduce innovation, antitrust law should not intervene. We rarely are; at least, we rarely should be.

As Geoffrey Manne points out, we still do not know a great deal about the optimal market structure for innovation. Evidence suggests that moderate concentration is most conducive to innovation, but that evidence is not overwhelming, and, more importantly, no one is suggesting a merger policy that single-mindedly pursues a particular market structure. An examination of incentives to continue existing product development projects or to initiate projects to develop new products is superficially appealing, but its practical utility is elusive. Any firm has an incentive to develop products that increase demand. The Merger Guidelines suggest that a merger will reduce incentives to innovate if the introduction of a new product by one merging firm will capture substantial revenues from the other. The E.C. likely had this effect in mind in concluding that the merged entity would have “lower incentives . . . to innovate than Dow and DuPont separately.” The Commission also observed that the merged firm would have “a lower ability to innovate” than the two firms separately, but just how a combination of research assets could reduce capability is utterly obscure.

In any event, whether a merger reduces incentives depends not only on the welfare of the merging parties but also on the development activities of actual and would-be competitors. A merged firm cannot afford to have its revenue captured by a new product introduced by a competitor. Of course, innovation by competitors will not spur a firm to develop new products if those competitors do not have the resources needed to innovate. One can imagine circumstances in which resources necessary to innovate in a product market are highly specialized; more realistically, the lack of specialized resources will decrease the pace of innovation. But the concept of specialized resources cannot mean resources a firm has developed that are conducive to innovation and that could be, but have not yet been, developed by other firms. It cannot simply mean a head start, unless it is very long indeed. If the first two firms in an industry build a plant, the fact that a new entrant would have to build a plant is not a sufficient reason to prevent the first two from merging. In any event, what resources are essential to innovation in an area can be difficult to determine.

Assuming essential resources can be identified, how many firms need to have them to create a competitive environment? The Guidelines place the number at “very small” plus one. Elsewhere, the federal antitrust agencies suggest that four firms other than the merged firm are sufficient to maintain innovation competition. We have models, whatever their limitations, that predict price effects in oligopolies. The Guidelines are based on them. But determining the number of firms necessary for competitive innovation is another matter. Maybe two is enough. We know for sure that innovation competition is non-existent if only one firm has the capacity to innovate, but not much else. We know that duplicative research efforts can be wasteful. If two firms would each spend $1 million to arrive at the same place, a merged firm might be able to invest $2 million and go twice as far, or reach the same place at half the total cost. This is only to say that a merger can increase innovation efficiency, a possibility that is not likely to justify an otherwise anticompetitive merger but should usually protect from condemnation a merger that is not otherwise anticompetitive.

In the Dow/DuPont merger, the Commission found “specific evidence that the merged entity would have cut back on the amount they spent on developing innovative products.”  Executives of the two firms stated that they expected to reduce research and development spending by around $300 million. But a reduction in spending does not tell us whether innovation will suffer. The issue is innovation efficiency. If the two firms spent, say, $1 billion each on research, $300 million of which was duplicative of the other firm’s research, the merged firm could invest $1.7 billion without reducing productive effort. The Commission complained that the merger would reduce from five to four the number of firms that are “globally active throughout the entire R&D process.”  As noted above, maybe four firms competing are enough. We don’t know. But the Commission also discounts firms with “more limited R&D capabilities,” and the importance to successful innovation of multi-level integration in this industry is not clear.
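To make the arithmetic of that hypothetical explicit (the $1 billion per-firm figure is the illustrative “say” above; the $300 million figure is the executives’ stated estimate):

$$\underbrace{\$1.0\text{B} + \$1.0\text{B}}_{\text{combined pre-merger R\&D}} - \underbrace{\$0.3\text{B}}_{\text{duplicative research}} = \$1.7\text{B}$$

On those numbers, the merged firm could spend $1.7 billion and still fund every non-duplicative research project the two firms pursued separately; a fall in measured spending, by itself, tells us nothing about a fall in innovation.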

When a merger is challenged because of an adverse effect on innovation competition, a fix can be difficult. Forced licensing might work, but that assumes that the relevant resource necessary to carry on research and development is intellectual property. More may be required. If tangible assets related to research and development are required, a divestiture might cripple the merged firm. The Commission’s remedy was to require the merged firm to divest “DuPont’s global R&D organization” that is related to the product operations that must be divested. The firm is permitted to retain “a few limited [R&D] assets that support the part of DuPont’s pesticide business” that is not being divested. In this case, such a divestiture may or may not hobble the merged firm, depending on whether the divested assets would have contributed to the research and development efforts that it will continue to pursue. That the merged firm was willing to accept the research and development divestiture to secure Commission approval does not mean that the divestiture will do no harm to the firm’s continuing research and development activities. Moreover, some product markets at issue in this merger are geographically limited, whereas the likely benefits of innovation are largely international. The implication is that increased concentration in product markets can be avoided by divesting assets to other large agribusinesses that do not operate in the relevant geographic market. But if the Commission insists on preserving five integrated firms active in global research and development activities, DuPont’s research and development activities cannot be divested to one of the other major players, which the Commission identifies as BASF, Bayer, and Syngenta, or to firms with which any of them are attempting to merge, namely Monsanto and ChemChina. These are the five firms, of course, that are particularly likely to be interested buyers.

Innovation is important. No one disagrees. But the role of competition in stimulating innovation is not well understood. Except in unusual cases, antitrust institutions are ill-equipped either to recognize innovation efficiencies that save a merger threatening competition in product markets or to condemn mergers that threaten only innovation competition. Indeed, despite maintaining their prerogative to challenge mergers solely on the ground of a reduction in innovation competition, the federal agencies have in fact complained about an adverse effect on innovation in cases that also raise competitive issues in product markets. Innovation is at the heart of the pending agribusiness mergers. How regulators and courts analyze innovation in these cases will say something about whether they perceive their limitations.

Truth on the Market is pleased to announce its next blog symposium:

Agricultural and Biotech Mergers: Implications for Antitrust Law and Economics in Innovative Industries

March 30 & 31, 2017

Earlier this week the European Commission cleared the merger of Dow and DuPont, subject to conditions including divestiture of DuPont’s “global R&D organisation.” As the Commission noted:

The Commission had concerns that the merger as notified would have reduced competition on price and choice in a number of markets for existing pesticides. Furthermore, the merger would have reduced innovation. Innovation, both to improve existing products and to develop new active ingredients, is a key element of competition between companies in the pest control industry, where only five players are globally active throughout the entire research & development (R&D) process.

In addition to the traditional focus on price effects, the merger’s presumed effect on innovation loomed large in the EC’s consideration of the Dow/DuPont merger — as it is sure to in its consideration of the other two pending mergers in the agricultural biotech and chemicals industries, between Bayer and Monsanto and between ChemChina and Syngenta. Innovation effects are sure to take center stage in the US reviews of the mergers, as well.

What is less clear is exactly how antitrust agencies evaluate — and how they should evaluate — mergers like these in rapidly evolving, high-tech industries.

These proposed mergers present a host of fascinating and important issues, many of which go to the core of modern merger enforcement — and antitrust law and economics more generally. Among other things, they raise issues of:

  • The incorporation of innovation effects in antitrust analysis;
  • The relationship between technological and organizational change;
  • The role of non-economic considerations in merger review;
  • The continued relevance (or irrelevance) of the Structure-Conduct-Performance paradigm;
  • Market definition in high-tech markets; and
  • The patent-antitrust interface.

Beginning on March 30, Truth on the Market and the International Center for Law & Economics will host a blog symposium discussing how some of these issues apply to these mergers specifically, as well as the state of antitrust law and economics in innovative-industry mergers more broadly.

As in the past (see examples of previous TOTM blog symposia here), we’ve lined up an outstanding and diverse group of scholars to discuss these issues:

  • Allen Gibby, Senior Fellow for Law & Economics, International Center for Law & Economics
  • Shubha Ghosh, Crandall Melvin Professor of Law and Director of the Technology Commercialization Law Program, Syracuse University College of Law
  • Ioannis Lianos, Chair of Global Competition Law and Public Policy, Faculty of Laws, University College London
  • John E. Lopatka (tent.), A. Robert Noll Distinguished Professor of Law, Penn State Law
  • Geoffrey A. Manne, Executive Director, International Center for Law & Economics
  • Diana L. Moss, President, American Antitrust Institute
  • Nicolas Petit, Professor of Law, Faculty of Law, and Co-director, Liege Competition and Innovation Institute, University of Liege
  • Levi A. Russell, Assistant Professor, Agricultural & Applied Economics, University of Georgia
  • Joanna M. Shepherd, Professor of Law, Emory University School of Law
  • Michael Sykuta, Associate Professor, Agricultural and Applied Economics, and Director, Contracting Organizations Research Institute, University of Missouri

Initial contributions to the symposium will appear periodically on the 30th and 31st, and the discussion will continue with responsive posts (if any) next week. We hope to generate a lively discussion, and readers are invited to contribute their own thoughts in comments to the participants’ posts.

The symposium posts will be collected here.

We hope you’ll join us!

Thanks to Truth on the Market for the opportunity to guest blog, and to ICLE for inviting me to join as a Senior Scholar! I’m honoured to be involved with both of these august organizations.

In Brussels, the talk of the town is that the European Commission (“Commission”) is casting a new eye on the old antitrust conjecture that prophesies a negative relationship between industry concentration and innovation. This issue arises in the context of the review of several mega-mergers in the pharmaceutical and AgTech (i.e., seed genomics, biochemicals, “precision farming,” etc.) industries.

The antitrust press reports that the Commission has shown signs of interest in introducing a new theory of harm: the Significant Impediment to Industry Innovation (“SIII”) theory, which would permit the remediation of mergers on the sole ground that a transaction significantly impedes innovation incentives at the industry level. In a recent ICLE White Paper, I discuss the desirability and feasibility of introducing this doctrine for the assessment of mergers in R&D-driven industries.

The introduction of SIII analysis in EU merger policy would no doubt be a sea change compared to past decisional practice. In previous cases, the Commission has paid heed to the effects of a merger on incentives to innovate, but the assessment has been limited to the effect on the innovation incentives of the merging parties in relation to specific current or future products. The application of the SIII theory, by contrast, would entail an assessment of a possible reduction of innovation (i) in a given industry as a whole, and (ii) without reference to specific product applications.

The SIII theory would also be distinct from the “innovation markets” framework occasionally applied in past US merger policy and now marginalized. This framework considers the effect of a merger on separate upstream “innovation markets,” i.e., on the R&D process itself, not directly linked to a downstream current or future product market. Like SIII, innovation markets analysis is interesting in that the identification of separate upstream innovation markets implicitly recognises that the players active in those markets are not necessarily the same as those that compete with the merging parties in downstream product markets.

SIII is far more intrusive, however, because R&D incentives are considered in the abstract, without any further obligation on the agency to identify structured R&D channels, pipeline products, or research trajectories.

Given this, any case for an expansion of the Commission’s power to intervene against mergers in certain R&D-driven industries should rest on sound theoretical and empirical infrastructure. Yet, despite efforts by the most celebrated Nobel Prize-winning economists of the past decades, the economics underpinning the relation between industry concentration and innovation incentives remains an unfathomable mystery. As Geoffrey Manne and Joshua Wright have summarized in detail, the existing literature is indeterminate, at best. As they note, quoting Rich Gilbert,

[a] careful examination of the empirical record concludes that the existing body of theoretical and empirical literature on the relationship between competition and innovation “fails to provide general support for the Schumpeterian hypothesis that monopoly promotes either investment in research and development or the output of innovation” and that “the theoretical and empirical evidence also does not support a strong conclusion that competition is uniformly a stimulus to innovation.”

Available theoretical research also fails to establish a directional relationship between mergers and innovation incentives. True, soundbites from antitrust conferences suggest that the Commission’s Chief Economist Team has developed a deterministic model that could be brought to bear on novel merger policy initiatives. Yet, given the height of the intellectual Everest under discussion, we remain dubious (though curious).

And, as noted, the available empirical data appear inconclusive. Consider a relatively concentrated industry like the seed and agrochemical sector. Between 2009 and 2016, all of the big six agrochemical firms increased their total R&D expenditure, and their R&D intensity either increased or remained stable. Note that this has taken place in spite of (i) a significant increase in concentration among the largest firms in the industry; (ii) a dramatic drop in global agricultural commodity prices (which has adversely affected several agrochemical businesses); and (iii) the presence of strong appropriability devices, namely patent rights.
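As a point of reference, R&D intensity is conventionally defined as research spending normalized by firm size:

$$\text{R\&D intensity} = \frac{\text{R\&D expenditure}}{\text{sales revenue}}$$

so stable or rising intensity over this period means that the firms’ research budgets at least kept pace with their revenues.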

This brief industry example (that I discuss more thoroughly in the paper) calls our attention to a more general policy point: prior to poking and prodding with novel theories of harm, one would expect an impartial antitrust examiner to undertake empirical groundwork, and to screen initial intuitions of adverse effects of mergers on innovation through the lens of observable industry characteristics.

At a more operational level, SIII also illustrates the difficulties of using indirect proxies of innovation incentives, such as R&D figures and patent statistics, as a preliminary screening tool for assessing the effects of a merger. In my paper, I show how R&D intensity can increase or decrease for a variety of reasons that do not necessarily correlate with an increase or decrease in the intensity of innovation. Similarly, I discuss why patent counts and patent citations are very crude indicators of innovation incentives. Over-reliance on patent counts and citations can paint a misleading picture of the parties’ strength as innovators in terms of market impact: not all patents are translated into products that are commercialised, and not all are equal in terms of commercial value.

As a result (and unlike the SIII or innovation markets approaches), the use of these proxies as a measure of innovative strength should be limited to instances where the patent clearly has an actual or potential commercial application in the markets being assessed. Such an approach would ensure that patents with little or no impact on innovation competition in a market are excluded from consideration. Moreover, at the risk of stating the obvious, patents are temporal rights. Incentives to innovate may be stronger as a protected technological application approaches patent expiry. Patent counts and citations, however, do not account for the maturity of patents and, in particular, do not say much about whether a patent is far from or close to its expiry date.

In order to overcome the limitations of crude quantitative proxies, it is in my view imperative to complement an empirical analysis with industry-specific qualitative research. Central to the assessment of the qualitative dimension of innovation competition is an understanding of the key drivers of innovation in the investigated industry. In the agrochemical industry, industry structure and market competition may only be one amongst many other factors that promote innovation. Economic models built upon Arrow’s replacement effect theory – namely that a pre-invention monopoly acts as a strong disincentive to further innovation – fail to capture that successful agrochemical products create new technology frontiers.

Thus, for example, progress in crop protection products – and, in particular, in pest- and insect-resistant crops – has fuelled research investments in pollinator protection technology. Moreover, the impact of wider industry and regulatory developments on incentives to innovate and on market structure should not be ignored (for example, falling crop commodity prices or regulatory restrictions on the use of certain products). Last, antitrust agencies are well placed to understand that, beyond R&D and patent statistics, there is also a degree of qualitative competition in the innovation strategies pursued by agrochemical players.

My paper closes with a word of caution. No compelling case has been advanced to support a departure from established merger control practice with the introduction of SIII in pharmaceutical and agrochemical mergers. The current EU merger control framework, which enables the Commission to conduct a prospective analysis of the parties’ R&D incentives in current or future product markets, seems to provide an appropriate safeguard against anticompetitive transactions.

In his 1974 Nobel Prize Lecture, Hayek criticized the “scientific error” of much economic research, which assumes that intangible, correlational laws govern observable and measurable phenomena. Hayek warned that economics is like biology: both fields focus on “structures of essential complexity” which are recalcitrant to stylized modeling. Interestingly, competition was one of the examples expressly mentioned by Hayek in his lecture:

[T]he social sciences, like much of biology but unlike most fields of the physical sciences, have to deal with structures of essential complexity, i.e. with structures whose characteristic properties can be exhibited only by models made up of relatively large numbers of variables. Competition, for instance, is a process which will produce certain results only if it proceeds among a fairly large number of acting persons.

What remains from this lecture is a vibrant call for humility in policy making, at a time when some constituencies within antitrust agencies show signs of interest in revisiting the relationship between concentration and innovation. And if Hayek’s convoluted writing style is not the most accessible, the title captures it all: “The Pretense of Knowledge.”

I just posted a new ICLE white paper, co-authored with former ICLE Associate Director, Ben Sperry:

When Past Is Not Prologue: The Weakness of the Economic Evidence Against Health Insurance Mergers.

Yesterday the hearing in the DOJ’s challenge to stop the Aetna-Humana merger got underway, and last week phase 1 of the Cigna-Anthem merger trial came to a close.

The DOJ’s challenge in both cases is fundamentally rooted in a timeworn structural analysis: More consolidation in the market (where “the market” is a hotly-contested issue, of course) means less competition and higher premiums for consumers.

Following the traditional structural playbook, the DOJ argues that the Aetna-Humana merger (to pick one) would result in presumptively anticompetitive levels of concentration, and that neither new entry nor divestiture would suffice to restore competition. It does not (in its pretrial brief, at least) consider other market dynamics (including especially the complex and evolving regulatory environment) that would constrain the firm’s ability to charge supracompetitive prices.

Aetna & Humana, for their part, contend that things are a bit more complicated than the government suggests, that the government defines the relevant market incorrectly, and that

the evidence will show that there is no correlation between the number of [Medicare Advantage organizations] in a county (or their shares) and Medicare Advantage pricing—a fundamental fact that the Government’s theories of harm cannot overcome.

The trial will, of course, feature expert economic evidence from both sides. But until we see that evidence, or read the inevitable papers derived from it, we are stuck evaluating the basic outlines of the economic arguments based on the existing literature.

A host of antitrust commentators, politicians, and other interested parties have determined that the literature condemns the mergers, based largely on a small set of papers purporting to demonstrate that an increase of premiums, without corresponding benefit, inexorably follows health insurance “consolidation.” In fact, virtually all of these critics base their claims on a 2012 case study of a 1999 merger (between Aetna and Prudential) by economists Leemore Dafny, Mark Duggan, and Subramaniam Ramanarayanan, Paying a Premium on Your Premium? Consolidation in the U.S. Health Insurance Industry, as well as associated testimony by Prof. Dafny, along with a small number of other papers by her (and a couple others).

Our paper challenges these claims. As we summarize:

This white paper counsels extreme caution in the use of past statistical studies of the purported effects of health insurance company mergers to infer that today’s proposed mergers—between Aetna/Humana and Anthem/Cigna—will likely have similar effects. Focusing on one influential study—Paying a Premium on Your Premium…—as a jumping off point, we highlight some of the many reasons that past is not prologue.

In short: extrapolated, long-term, cumulative, average effects drawn from 17-year-old data may grab headlines, but they really don’t tell us much of anything about the likely effects of a particular merger today, or about the effects of increased concentration in any particular product or geographic market.

While our analysis doesn’t necessarily undermine the paper’s limited, historical conclusions, it does counsel extreme caution in inferring the study’s applicability to today’s proposed mergers.

By way of reference, Dafny et al. found average premium price increases from the 1999 Aetna/Prudential merger of only 0.25 percent per year for two years following the merger in the geographic markets they studied. “Health Insurance Mergers May Lead to 0.25 Percent Price Increases!” isn’t quite as compelling a claim as what critics have been saying, but it’s arguably more accurate (and more relevant) than the 7 percent price increase purportedly based on the paper that merger critics like to throw around.
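To put those figures side by side (using only the numbers just quoted), a 0.25 percent annual increase sustained over the two-year window studied compounds to roughly half a percent:

$$(1 + 0.0025)^2 - 1 \approx 0.5\%$$

which is more than an order of magnitude below the 7 percent figure that circulates in merger critics’ claims.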

Moreover, different markets and a changed regulatory environment aren’t the only reasons to think that past is not prologue. When we delve into the paper more closely, we find even more significant limitations on the paper’s support for the claims made in its name, and on its relevance to the current proposed mergers.

The full paper is available here.

Since the European Commission (EC) announced its first inquiry into Google’s business practices in 2010, the company has been the subject of lengthy investigations by courts and competition agencies around the globe. Regulatory authorities in the United States, France, the United Kingdom, Canada, Brazil, and South Korea have all opened investigations into similar antitrust claims and rejected them.

And yet the EC marches on, bolstered by Google’s myriad competitors, who continue to agitate for further investigations and enforcement actions, even as we — companies and consumers alike — enjoy the benefits of an increasingly dynamic online marketplace.

Indeed, while the EC has spent more than half a decade casting about for some plausible antitrust claim, the online economy has thundered ahead. Since 2010, Facebook has tripled its active users and multiplied its revenue ninefold; the number of apps available in the Amazon app store has grown from fewer than 4,000 to over 400,000 today; and there are almost 1.5 billion more Internet users globally than there were in 2010. And consumers are increasingly using new and different ways to search for information: Amazon’s Alexa, Apple’s Siri, Microsoft’s Cortana, and Facebook’s Messenger are just a few of the many new innovations challenging traditional search engines.

Advertisers have adapted to this evolution, moving increasingly online, and from search to display ads as mobile adoption has skyrocketed. Social networks like Twitter and Snapchat have come into their own, competing for the same (and ever-increasing) advertising dollars. For marketers, advertising on social networks is now just as important as advertising in search. No wonder e-commerce sales have more than doubled, to almost $2 trillion worldwide; for the first time, consumers purchased more online than in stores this past year.

To paraphrase Louis C.K.: Everything is amazing — and no one at the European Commission is happy.

The EC’s market definition is fatally flawed

Like its previous claims, the Commission’s most recent charges are rooted in the assertion that Google abuses its alleged dominance in “general search” advertising to unfairly benefit itself and to monopolize other markets. But European regulators continue to miss the critical paradigm shift among online advertisers and consumers that has upended this stale view of competition on the Internet. The reality is that Google’s competition may not, and need not, look exactly like Google itself, but it is competition nonetheless. And it’s happening in spades.

The key to understanding why the European Commission’s case is fundamentally flawed lies in an examination of how it defines the relevant market. Through a series of economically and factually unjustified assumptions, the Commission defines search as a distinct market in which Google faces limited competition and enjoys an 80% market share. In other words, for the EC, “general search” apparently means only nominal search providers like Google and Bing; it doesn’t mean companies like Amazon, Facebook and Twitter — Google’s biggest competitors.  

But the reality is that “general search” is just one technology among many for serving information and ads to consumers online. Defining the relevant market or limiting the definition of competition in terms of the particular mechanism that Google happens to use to match consumers and advertisers doesn’t reflect the substitutability of other mechanisms that do the same thing — merely because these mechanisms aren’t called “search.”

Properly defined, the market in which Google competes online is not search, but something more like online “matchmaking” between advertisers, retailers and consumers. And this market is enormously competitive.

Consumers today are increasingly using platforms like Amazon and Facebook as substitutes for the searches they might have run on Google or Bing. “Closed” platforms like the iTunes store and innumerable apps handle copious search traffic yet don’t figure in the EC’s market calculations. And so-called “dark social” interactions like email, text messages, and IMs drive huge amounts of some of the most valuable traffic on the Internet. This, in turn, has led to a competitive scramble to roll out completely new technologies like chatbots to meet consumers’ informational (and merchants’ advertising) needs.

Properly construed, Google’s market position is precarious

Like Facebook and Twitter (and practically every other Internet platform), Google derives its primary revenue from advertising. Instead of charging for fancy hardware or offering services to users for a fee, Google offers search, the Android operating system, and a near-endless array of other valuable services to users for free. The company’s very existence relies on attracting Internet users and consumers to its properties in order to effectively connect them with advertisers.

But being an online matchmaker is a difficult and competitive enterprise. Among other things, the ability to generate revenue turns crucially on the quality of the match: All else equal, an advertiser interested in selling widgets will pay more for an ad viewed by a user who can be reliably identified as being interested in buying widgets.

Google’s primary mechanism for attracting users to match with advertisers — general search — is substantially about information, not commerce, and the distinction between product and informational searches is crucially important to understanding Google’s market and the surprisingly limited and tenuous market power it possesses.

General informational queries aren’t nearly as valuable to advertisers: Significantly, only about 30 percent of Google’s searches even trigger any advertising at all. Meanwhile, as of 2012, one-third of product searches started on Amazon while only 13% started on a general search engine.

As economist Hal Singer aptly noted in 2012,

[the data] suggest that Google lacks market power in a critical segment of search — namely, product searches. Even though searches for items such as power tools or designer jeans account for only 10 to 20 percent of all searches, they are clearly some of the most important queries for search engines from a business perspective, as they are far easier to monetize than informational queries like “Kate Middleton.”

While Google Search clearly offers substantial value to advertisers, its ability to continue to do so is precarious when confronted with the diverse array of competitors that, like Facebook, offer a level of granularity in audience targeting that general search can’t match, or that, like Amazon, systematically offer up the most valuable searchers.

In order to compete in this market — one properly defined to include actual competitors — Google has had to constantly innovate to maintain its position. Unlike a complacent monopolist, it has evolved to meet changing consumer demand, shifting technology and inventive competitors. Thus, Google’s search algorithm has changed substantially over the years to make more effective use of the information available to ensure relevance; search results have evolved to give consumers answers to queries rather than just links, and to provide more-direct access to products and services; and, as users have shifted more and more of their time and attention to mobile devices, search has incorporated more-localized results.

Competitors want a free lunch

Critics complain, nevertheless, that these developments have made it harder, in one way or another, for rivals to compete. And the EC has provided a willing ear. According to Commissioner Vestager last week:

Google has come up with many innovative products that have made a difference to our lives. But that doesn’t give Google the right to deny other companies the chance to compete and innovate. Today, we have further strengthened our case that Google has unduly favoured its own comparison shopping service in its general search result pages…. (Emphasis added).

Implicit in this statement is the remarkable assertion that by favoring its own comparison shopping services, Google “den[ies] other companies the chance to compete and innovate.” Even assuming Google does “favor” its own results, this is an astounding claim.

First, it is not a violation of competition law simply to treat competitors’ offerings differently than one’s own, even for a dominant firm. Instead, conduct must actually exclude competitors from the market, without offering countervailing advantages to consumers. But Google’s conduct is not exclusionary, and there are many benefits to consumers.

As it has from the start of its investigations of Google, the EC begins with a flawed assumption: that Google’s competitors both require, and may be entitled to, unfettered access to Google’s property in order to compete. But this is patently absurd. Google is not an essential facility: Billions of users reach millions of companies every day through direct browser navigation, apps, email links, review sites and blogs, and countless other means — all without once touching Google.com.

Google Search results do not exclude competitors, whether comparison shopping sites or others. For example, 72% of TripAdvisor’s U.S. traffic comes from search, and almost all of that from organic results; other specialized search sites see similar traffic volumes.

More important, however, in addition to continuing to reach rival sites through Google Search, billions of consumers access rival services directly through their mobile apps. In fact, for Yelp,

Approximately 21 million unique devices accessed Yelp via the mobile app on a monthly average basis in the first quarter of 2016, an increase of 32% compared to the same period in 2015. App users viewed approximately 70% of page views in the first quarter and were more than 10 times as engaged as website users, as measured by number of pages viewed. (Emphasis added).

And a staggering 40 percent of mobile browsing is now happening inside the Facebook app, competing with the browsers and search engines pre-loaded on smartphones.

Millions of consumers also directly navigate to Google’s rivals via their browser by simply typing, for example, “Yelp.com” in their address bar. And as noted above, consumers are increasingly using Google rivals’ new disruptive information engines like Alexa and Siri for their search needs. Even the traditional search engine space is competitive — in fact, according to Wired, as of July 2016:

Microsoft has now captured more than one-third of Internet searches. Microsoft’s transformation from a company that sells boxed software to one that sells services in the cloud is well underway. (Emphasis added).

With such numbers, it’s difficult to see how rivals are being foreclosed from reaching consumers in any meaningful way.

Meanwhile, the benefits to consumers are obvious: Google is directly answering questions for consumers rather than giving them a set of possible links to click through and further search. In some cases its results present entirely new and valuable forms of information (e.g., search trends and structured data); in others they serve to hone searches by suggesting further queries, or to help users determine which organic results (including those of its competitors) may be most useful. And, of course, consumers aren’t forced to endure these innovations if they don’t find them useful, as they can quickly switch to other providers.  

Nostalgia makes for bad regulatory policy

Google is not the unstoppable monopolist of the EU competition regulators’ imagining. Rather, it is a continual innovator, forced to adapt to shifting consumer demand, changing technology, and competitive industry dynamics. And, instead of trying to hamstring Google, if they are to survive, Google’s competitors (and complainants) must innovate as well.

Dominance in technology markets — especially online — has always been ephemeral. Once upon a time, MySpace, AOL, and Yahoo were the dominant Internet platforms. Kodak, once practically synonymous with photography itself, let the digital revolution pass it by. The invincible Sony Walkman was upended by mp3s and the iPod. Staid, keyboard-operated BlackBerrys and Nokias simply couldn’t compete with app-driven, graphical platforms from Apple and Samsung. Even today, startups like Snapchat, Slack, and Spotify gain massive scale and upend entire industries with innovative new technology that can leave less-nimble incumbents in the dustbin of tech history.

Put differently, companies that innovate are able to thrive, while those that remain dependent on yesterday’s technology and outdated business models usually fail — and deservedly so. It should never be up to regulators to pick winners and losers in a highly dynamic and competitive market, particularly if doing so constrains the market’s very dynamism. As Alfonso Lamadrid has pointed out:

It is companies and not competition enforcers which will strive or fail in the adoption of their business models, and it is therefore companies and not competition enforcers who are to decide on what business models to use. Some will prove successful and others will not; some companies will thrive and some will disappear, but with experimentation with business models, success and failure are and have always been part of the game.

In other words, we should not forget that competition law is, or should be, business-model agnostic, and that regulators are – like anyone else – far from omniscient.

Like every other technology company before them, Google and its competitors must be willing and able to adapt in order to keep up with evolving markets — just as for Lewis Carroll’s Red Queen, “it takes all the running you can do, to keep in the same place.” Google confronts a near-constantly evolving marketplace and fierce competition from unanticipated quarters; companies that build their businesses around Google face a near-constantly evolving Google. In the face of such relentless market dynamism, neither consumers nor firms are well served by regulatory policy rooted in nostalgia.  

As regulatory review of the merger between Aetna and Humana hits the homestretch, merger critics have become increasingly vocal in their opposition to the deal. This is particularly true of a subset of healthcare providers concerned about losing bargaining power over insurers.

Fortunately for consumers, the merger appears to be well on its way to approval. Of the 20 state insurance commissions that will eventually review the merger, California recently became the 16th to approve it. The U.S. Department of Justice is currently reviewing the merger and may issue its determination as early as July.

Only Missouri has issued a preliminary opinion that the merger might lead to competitive harm. But Missouri is almost certain to remain an outlier, and its analysis simply doesn’t hold up to scrutiny.

The Missouri opinion echoed the Missouri Hospital Association’s (MHA) concerns about the effect of the merger on Medicare Advantage (MA) plans. It’s important to remember, however, that hospital associations like the MHA are not consumer advocacy groups. They are trade organizations whose primary function is to protect the interests of their member hospitals.

In fact, the American Hospital Association (AHA) has mounted continuous opposition to the deal. This is itself a good indication that the merger will benefit consumers, in part by reducing hospital reimbursement costs under MA plans.

More generally, critics have argued that history proves that health insurance mergers lead to higher premiums, without any countervailing benefits. Merger opponents place great stock in a study by economist Leemore Dafny and co-authors that purports to show that insurance mergers have historically led to seven percent higher premiums.

But that study, which looked at a pre-Affordable Care Act (ACA) deal and assessed its effects only on premiums for traditional employer-provided plans, has little relevance today.

The Dafny study first performed a straightforward statistical analysis of overall changes in concentration (that is, the number of insurers in a given market) and price, and concluded that “there is no significant association between concentration levels and premium growth.” Critics never mention this finding.

The study’s secondary, more speculative, analysis took the observed effects of a single merger — the 1999 merger between Prudential and Aetna — and extrapolated them to all changes in concentration and price over an eight-year period. It concluded that, on average, seven percent of the cumulative increase in premium prices between 1998 and 2006 was the result of a reduction in the number of insurers.

But what critics fail to mention is that when the authors looked at the actual consequences of the 1999 Prudential/Aetna merger, they found effects lasting only two years — and an average price increase of only one half of one percent. And these negligible effects were restricted to premiums paid under plans purchased by large employers, a critical limitation of the study’s relevance to today’s proposed mergers.

Moreover, as the study notes in passing, over the same eight-year period, average premium prices increased in total by 54 percent. Yet the study offers no insights into what was driving the vast bulk of premium price increases — or whether those factors are still present today.  
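A quick back-of-the-envelope calculation, using only the figures quoted above, shows just how small the concentration effect is next to the overall trend. This is a rough sketch: the 54%, 7%, and 0.5% figures are as reported, and the decomposition is purely illustrative.

    # Back-of-the-envelope decomposition using the figures quoted above.
    # All inputs are as reported by the study; the arithmetic is illustrative.
    total_increase = 0.54            # cumulative premium growth, 1998-2006
    concentration_share = 0.07       # share attributed to fewer insurers

    # The study's extrapolated effect: 7% of the 54-point cumulative rise
    extrapolated = concentration_share * total_increase
    print(f"Extrapolated concentration effect: {extrapolated:.1%}")   # ~3.8%

    # The observed effect of the one merger actually studied
    observed = 0.005                 # one half of one percent, ~2 years
    print(f"Observed Prudential/Aetna effect:  {observed:.1%}")       # 0.5%

    # The residual the study leaves unexplained
    print(f"Unexplained premium growth:        {total_increase - extrapolated:.1%}")  # ~50.2%

Even taking the study at face value, then, the overwhelming majority of premium growth over the period had nothing to do with concentration.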

Few sectors of the economy have changed more radically in the past few decades than healthcare has. While extrapolated effects drawn from 17-year-old data may grab headlines, they really don’t tell us much of anything about the likely effects of a particular merger today.

Indeed, the ACA and current trends in healthcare policy have dramatically altered the way health insurance markets work. Among other things, the advent of new technologies and the move to “value-based” care are redefining the relationship between insurers and healthcare providers. Nowhere is this more evident than in the Medicare and Medicare Advantage market at the heart of the Aetna/Humana merger.

In an effort to stop the merger on antitrust grounds, critics claim that Medicare and MA are distinct products, in distinct markets. But it is simply incorrect to claim that Medicare Advantage and traditional Medicare aren’t “genuine alternatives.”

In fact, as the Office of Insurance Regulation in Florida — a bellwether state for healthcare policy — concluded in approving the merger: “Medicare Advantage, the private market product, competes directly with Traditional Medicare.”

Consumers who search for plans at Medicare.gov are presented with a direct comparison between traditional Medicare and available MA plans. And the evidence suggests that they regularly switch between the two. Today, almost a third of eligible Medicare recipients choose MA plans, and the majority of current MA enrollees switched to MA from traditional Medicare.

True, Medicare and MA plans are not identical. But for antitrust purposes, substitutes need not be perfect to exert pricing discipline on each other. Take HMOs and PPOs, for example. No one disputes that they are substitutes, and that prices for one constrain prices for the other. But as anyone who has considered switching between an HMO and a PPO knows, price is not the only variable that influences consumers’ decisions.

The same is true for MA and traditional Medicare. For many consumers, traditional Medicare’s standard benefits and more-expensive supplemental benefits, plus its wider range of provider options, present a viable alternative to MA’s lower-cost expanded benefits and narrower, managed provider network.

The move away from a traditional fee-for-service model changes how insurers do business. It requires larger investments in technology, better tracking of preventive care and health outcomes, and more-holistic supervision of patient care by insurers. Arguably, all of this may be accomplished most efficiently by larger insurers with more resources and a greater ability to work with larger, more integrated providers.

This is exactly why many hospitals, which continue to profit from traditional, fee-for-service systems, are opposed to a merger that promises to expand these value-based plans. Significantly, healthcare providers like Encompass Medical Group, which have done the most to transition their services to the value-based care model, have offered letters of support for the merger.

Regardless of their rhetoric — whether about market definition or historic precedent — the most vocal merger critics are opposed to the deal for a very simple reason: They stand to lose money if the merger is approved. That may be a good reason for some hospitals to wish the merger would go away, but it is a terrible reason to actually stop it.

[This post was first published on June 27, 2016 in The Hill as “Don’t believe the critics, Aetna-Humana merger a good deal for consumers”]

A number of blockbuster mergers have received (often negative) attention from media and competition authorities in recent months. From the recently challenged Staples-Office Depot merger to the abandoned Comcast-Time Warner merger to the heavily scrutinized Aetna-Humana merger (among many others), there has been a wave of potential mega-mergers throughout the economy—many of them met with regulatory resistance. We’ve discussed several of these mergers at TOTM (see, e.g., here, here, here and here).

Many reporters, analysts, and even competition authorities have adopted various degrees of the usual stance that big is bad, and bigger is even badder. But worse yet, once this presumption applies, agencies have been skeptical of claimed efficiencies, placing a heightened burden on the merging parties to prove them and often ignoring them altogether. And, of course (and perhaps even worse still), there is the perennial problem of (often questionable) market definition — which tanked the Sysco/US Foods merger and which undergirds the FTC’s challenge of the Staples/Office Depot merger.

All of these issues are at play in the proposed acquisition of British aluminum can manufacturer Rexam PLC by American can manufacturer Ball Corp., which has likewise drawn the attention of competition authorities around the world — including those in Brazil, the European Union, and the United States.

But the Ball/Rexam merger has met with some important regulatory successes. Just recently the members of CADE, Brazil’s competition authority, unanimously approved the merger with limited divestitures. The most recent reports also indicate that the EU will likely approve it, as well. It’s now largely down to the FTC, which should approve the merger and not kill it or over-burden it with required divestitures on the basis of questionable antitrust economics.

The proposed merger raises a number of interesting issues in the surprisingly complex beverage container market. But this merger merits regulatory approval.

The International Center for Law & Economics recently released a research paper entitled The Ball-Rexam Merger: The Case for a Competitive Can Market. The white paper offers an in-depth assessment of the economics of the beverage packaging industry; the place of the Ball-Rexam merger within this remarkably complex, global market; and the likely competitive effects of the deal.

The upshot is that the proposed merger is unlikely to have anticompetitive effects, and any competitive concerns that do arise can be readily addressed by a few targeted divestitures.

The bottom line

The production and distribution of aluminum cans is a surprisingly dynamic industry, characterized by evolving technology, shifting demand, complex bargaining dynamics, and significant changes in the costs of production and distribution. Despite the superficial appearance that the proposed merger will increase concentration in aluminum can manufacturing, we conclude that a proper understanding of the marketplace dynamics suggests that the merger is unlikely to have actual anticompetitive effects.

All told, and as we summarize in our Executive Summary, we found at least seven specific reasons for this conclusion:

  1. Because the appropriately defined product market includes not only stand-alone can manufacturers, but also vertically integrated beverage companies, as well as plastic and glass packaging manufacturers, the actual increase in concentration from the merger will be substantially less than suggested by the change in the number of nationwide aluminum can manufacturers (see the illustrative sketch following this list).
  2. Moreover, in nearly all of the relevant geographic markets (which are much smaller than the typically nationwide markets from which concentration numbers are derived), the merger will not affect market concentration at all.
  3. While beverage packaging isn’t a typical, rapidly evolving, high-technology market, technological change is occurring. Coupled with shifting consumer demand (often driven by powerful beverage company marketing efforts), and considerable (and increasing) buyer power, historical beverage packaging market shares may have little predictive value going forward.
  4. The key importance of transportation costs and the effects of current input prices suggest that expanding demand can be effectively met only by expanding the geographic scope of production and by economizing on aluminum supply costs. These, in turn, suggest that increasing overall market concentration is consistent with increased, rather than decreased, competitiveness.
  5. The markets in which Ball and Rexam operate are dominated by a few large customers, who are themselves direct competitors in the upstream marketplace. These companies have shown a remarkable willingness and ability to invest in competing packaging supply capacity and to exert their substantial buyer power to discipline prices.
  6. For this same reason, complaints leveled against the proposed merger by these beverage giants — which are as much competitors as they are customers of the merging companies — should be viewed with skepticism.
  7. Finally, the merger should generate significant managerial and overhead efficiencies, and the merged firm’s expanded geographic footprint should allow it to service larger geographic areas for its multinational customers, thus lowering transaction costs and increasing its value to these customers.
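To make the concentration point in reasons 1 and 2 concrete, consider the standard Herfindahl-Hirschman Index (HHI) screen used in merger review. The sketch below uses entirely hypothetical shares, not figures from the market or from our paper; it shows only the mechanical point that widening the product market to include integrated beverage companies and substitute packaging shrinks both the measured concentration level and the merger’s increment.

    # Illustrative HHI screen (all shares hypothetical, in percentage points).
    # HHI = sum of squared shares; the merger's increment ("delta") for two
    # merging firms is 2 * share_A * share_B.
    def hhi(shares):
        return sum(s ** 2 for s in shares)

    # Narrow market: stand-alone national aluminum can makers only
    narrow = {"Ball": 40, "Rexam": 25, "Crown": 25, "Others": 10}

    # Broader market: add vertically integrated beverage companies and
    # glass/plastic packaging producers (again, hypothetical shares)
    broad = {"Ball": 15, "Rexam": 10, "Crown": 10, "Integrated A": 20,
             "Integrated B": 15, "Glass/plastic": 20, "Others": 10}

    for label, market in (("Narrow", narrow), ("Broad", broad)):
        delta = 2 * market["Ball"] * market["Rexam"]
        print(f"{label}: pre-merger HHI = {hhi(market.values())}, delta = {delta}")

    # Narrow: pre-merger HHI = 2950, delta = 2000 (presumptively problematic)
    # Broad:  pre-merger HHI = 1550, delta =  300 (much weaker presumption)

The same logic applies with even more force in the localized geographic markets described in reason 2, where the parties’ shares often don’t overlap at all.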

Distinguishing Ardagh: The interchangeability of aluminum and glass

An important potential sticking point for the FTC’s review of the merger is its recent decision to challenge the Ardagh-Saint Gobain merger. The cases are superficially similar, in that they both involve beverage packaging. But Ardagh should not stand as a model for the Commission’s treatment of Ball/Rexam. The FTC made a number of mistakes in Ardagh (including market definition and the treatment of efficiencies — the latter of which brought out a strenuous dissent from Commissioner Wright). But even on its own (questionable) terms, Ardagh shouldn’t mean trouble for Ball/Rexam.

As we noted in our December 1st letter to the FTC on the Ball/Rexam merger, and as we discuss in detail in the paper, the situation in the aluminum can market is quite different than the (alleged) market for “(1) the manufacture and sale of glass containers to Brewers; and (2) the manufacture and sale of glass containers to Distillers” at issue in Ardagh.

Importantly, the FTC found (almost certainly incorrectly, at least for the brewers) that other container types (e.g., plastic bottles and aluminum cans) were not part of the relevant product market in Ardagh. But in the markets in which aluminum cans are a primary form of packaging (most notably, soda and beer), our research indicates that glass, plastic, and aluminum are most definitely substitutes.

The Big Four beverage companies (Coca-Cola, PepsiCo, Anheuser-Busch InBev, and MillerCoors), which collectively make up 80% of the U.S. market for Ball and Rexam, are all vertically integrated to some degree, and provide much of their own supply of containers (a situation significantly different than the distillers in Ardagh). These companies exert powerful price discipline on the aluminum packaging market by, among other things, increasing (or threatening to increase) their own container manufacturing capacity, sponsoring new entry, and shifting production (and, via marketing, consumer demand) to competing packaging types.

For soda, Ardagh is obviously inapposite, as soda packaging wasn’t at issue there. But the FTC’s conclusion in Ardagh that aluminum cans (which in fact make up 56% of the beer packaging market) don’t compete with glass bottles for beer packaging is also suspect.

For aluminum can manufacturers Ball and Rexam, aluminum can’t be excluded from the market (obviously), and much of the beer in the U.S. that is packaged in aluminum is quite clearly also packaged in glass. The FTC claimed in Ardagh that glass and aluminum are consumed in distinct situations, so they don’t exert price pressure on each other. But that ignores the considerable ability of beer manufacturers to influence consumption choices, as well as the reality that consumer preferences for each type of container (whether driven by beer company marketing efforts or not) are merging, with cost considerations dominating other factors.

In fact, consumers consume beer in both packaging types largely interchangeably (with a few limited exceptions — e.g., poolside drinking demands aluminum or plastic), and beer manufacturers readily switch between the two types of packaging as the relative production costs shift.

Craft brewers, to take one important example, are rapidly switching to aluminum from glass, despite a supposed stigma surrounding canned beers. Some craft brewers (particularly the larger ones) package at least some of their beers in both types of containers, while for many others it’s one or the other. Yet there’s no indication that craft beer consumption has fallen off because consumers won’t drink beer from cans in some situations — and obviously the prospect of this outcome hasn’t stopped craft brewers from abandoning bottles entirely in favor of more economical cans, nor has it induced them, as a general rule, to offer both types of packaging.

A very short time ago it might have seemed that aluminum wasn’t in the same market as glass for craft beer packaging. But, as recent trends have borne out, that differentiation wasn’t primarily a function of consumer preference (either at the brewer or end-consumer level). Rather, it was a function of bottling/canning costs (until recently the machinery required for canning was prohibitively expensive), materials costs (at various times glass has been cheaper than aluminum, depending on volume), and transportation costs (which cut against glass, and which vary over time, shifting the relative attractiveness of each material). To be sure, consumer preference isn’t irrelevant, but the ease with which brewers have shifted consumer preferences suggests that it isn’t a strong constraint.

Transportation costs are key

Transportation costs, in fact, are a key part of the story — and of the conclusion that the Ball/Rexam merger is unlikely to have anticompetitive effects. First of all, transporting empty cans (or bottles, for that matter) is tremendously inefficient — which means that the relevant geographic markets for assessing the competitive effects of the Ball/Rexam merger are essentially the largely non-overlapping, 200-mile circles around the companies’ manufacturing facilities. Because there are very few markets in which the two companies both have plants, the merger doesn’t change the extent of competition in the vast majority of relevant geographic markets.

But transportation costs are also relevant to the interchangeability of packaging materials. Glass is more expensive to transport than aluminum, and this is true not just for empty bottles, but for full ones, of course. So, among other things, by switching to cans (even if it entails up-front cost), smaller breweries can expand their geographic reach, potentially expanding sales enough to more than cover switching costs. The merger would further lower the costs of cans (and thus of geographic expansion) by enabling beverage companies to transact with a single company across a wider geographic range.

The reality is that the most important factor in packaging choice is cost, and that the packaging alternatives are functionally interchangeable. As a result, and given that the direct consumers of beverage packaging are beverage companies rather than end-consumers, relatively small cost changes readily spur changes in packaging choices. While there are some switching costs that might impede these shifts, they are readily overcome. For large beverage companies that already use multiple types and sizes of packaging for the same product, the costs are trivial: They already have packaging designs, marketing materials, distribution facilities and the like in place. For smaller companies, a shift can be more difficult, but innovations in labeling, mobile canning/bottling facilities, outsourced distribution and the like significantly reduce these costs.  

“There’s a great future in plastics”

All of this is even more true for plastic — even in the beer market. In fact, in 2010, 10% of the beer consumed in Europe was sold in plastic bottles, as was 15% of all beer consumed in South Korea. We weren’t able to find reliable numbers for the U.S., but particularly for cheaper beers, U.S. brewers are increasingly moving to plastic. And plastic bottles are the norm at stadiums and arenas. Whatever the exact numbers, clearly plastic holds a small fraction of the beer container market compared to glass and aluminum. But that number is just as clearly growing, and as cost considerations impel them (and technology enables them), giant, powerful brewers like AB InBev and MillerCoors are certainly willing and able to push consumers toward plastic.

Meanwhile, soda companies like Coca-Cola and Pepsi have successfully moved their markets so that today a majority of packaged soda is sold in plastic containers. There’s no evidence that this shift came about as a result of end-consumer demand, nor that the shift to plastic was delayed by a lack of demand elasticity; rather, it was primarily a function of these companies’ ability to realize bigger profits on sales in plastic containers (not least because they own their own plastic packaging production facilities).

And while it’s not at issue in Ball/Rexam because spirits are rarely sold in aluminum packaging, the FTC’s conclusion in Ardagh that

[n]on-glass packaging materials, such as plastic containers, are not in this relevant product market because not enough spirits customers would switch to non-glass packaging materials to make a SSNIP in glass containers to spirits customers unprofitable for a hypothetical monopolist

is highly suspect — which suggests the Commission may have gotten it wrong in other ways, too. For example, as one report notes:

But the most noteworthy inroads against glass have been made in distilled liquor. In terms of total units, plastic containers, almost all of them polyethylene terephthalate (PET), have surpassed glass and now hold a 56% share, which is projected to rise to 69% by 2017.

True, most of this must be tiny-volume airplane bottles, but by no means all of it is, and it’s clear that the cost advantages of plastic are driving a shift in distilled liquor packaging, as well. Some high-end brands are even moving to plastic. Whatever resistance may have existed in the past because of glass’s “image” (and this is true for beer, too) is breaking down: Don’t forget that even high-quality wines are now often sold with screw-tops or even in boxes — something that was once thought impossible.
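For readers unfamiliar with the jargon in the FTC’s conclusion quoted above: the SSNIP (“small but significant non-transitory increase in price”) test asks whether a hypothetical monopolist over the candidate market could profitably impose, say, a 5% price increase. The standard way to operationalize it is critical loss analysis, sketched below. The 5% increase and the 30% margin are illustrative assumptions, not figures from the Ardagh record.

    # Critical loss analysis for the hypothetical monopolist (SSNIP) test.
    # critical_loss = X / (X + M): the fraction of unit sales a monopolist
    # can afford to lose before a price increase of X becomes unprofitable,
    # where M is the pre-increase contribution margin.
    def critical_loss(ssnip, margin):
        return ssnip / (ssnip + margin)

    ssnip = 0.05    # 5% hypothetical price increase (standard benchmark)
    margin = 0.30   # assumed 30% margin on glass containers (illustrative)

    print(f"Critical loss: {critical_loss(ssnip, margin):.1%}")   # ~14.3%

On these assumptions, if more than roughly 14% of glass-container volume would shift to plastic or aluminum in response to a 5% price increase, glass alone is too narrow a market; the rapid, cost-driven packaging shifts described above suggest actual substitution could well clear that bar.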

The overall point is that the beverage packaging market faced by can makers like Ball and Rexam is remarkably complex, and, crucially, the presence of powerful, vertically integrated customers means that past or current demand by end-users is a poor indicator of what the market will look like in the future as input costs and other considerations faced by these companies shift. Right now, for example, over 50% of the world’s soda is packaged in plastic bottles, and this share is set to increase: The global plastic packaging market (not limited to just beverages) is expected to grow at a CAGR of 5.2% between 2014 and 2020, while aluminum packaging is expected to grow at just 2.9%.

A note on efficiencies

As noted above, the proposed Ball/Rexam merger also holds out the promise of substantial efficiencies (estimated at $300 million by the merging parties, due mainly to decreased transportation costs). There is a risk, however, that the FTC may effectively disregard those efficiencies, as it did in Ardagh (and in St. Luke’s before it), by saddling them with a higher burden of proof than it requires of its own prima facie claims. If the goal of antitrust law is to promote consumer welfare, competition authorities can’t ignore efficiencies in merger analysis.

In his Ardagh dissent, Commissioner Wright noted that:

Even when the same burden of proof is applied to anticompetitive effects and efficiencies, of course, reasonable minds can and often do differ when identifying and quantifying cognizable efficiencies as appears to have occurred in this case. My own analysis of cognizable efficiencies in this matter indicates they are significant. In my view, a critical issue highlighted by this case is whether, when, and to what extent the Commission will credit efficiencies generally, as well as whether the burden faced by the parties in establishing that proffered efficiencies are cognizable under the Merger Guidelines is higher than the burden of proof facing the agencies in establishing anticompetitive effects. After reviewing the record evidence on both anticompetitive effects and efficiencies in this case, my own view is that it would be impossible to come to the conclusions about each set forth in the Complaint and by the Commission — and particularly the conclusion that cognizable efficiencies are nearly zero — without applying asymmetric burdens.

The Commission shouldn’t make the same mistake here. In fact, here, where can manufacturers are squeezed between powerful companies both upstream (e.g., Alcoa) and downstream (e.g., AB InBev), and where transportation costs limit the opportunities for expanding the customer base of any particular plant, the ability to capitalize on economies of scale and geographic scope is essential to independent manufacturers’ abilities to efficiently meet rising demand.

Read our complete assessment of the merger’s effects here.

Last week, FCC General Counsel Jonathan Sallet pulled back the curtain on the FCC staff’s analysis behind its decision to block Comcast’s acquisition of Time Warner Cable. As the FCC staff sets out on its reported Rainbow Tour to reassure regulated companies that it’s not “hostile to the industries it regulates,” Sallet’s remarks suggest it will have an uphill climb. Unfortunately, the staff’s analysis appears to have been unduly speculative, disconnected from critical market realities, and decidedly biased — not characteristics in a regulator that tend to offer much reassurance.

Merger analysis is inherently speculative, but, as courts have repeatedly had occasion to find, the FCC has a penchant for stretching speculation beyond the breaking point, adopting theories of harm that are vaguely possible, even if unlikely and inconsistent with past practice, and poorly supported by empirical evidence. The FCC’s approach here seems to fit this description.

The FCC’s fundamental theory of anticompetitive harm

To begin with, as he must, Sallet acknowledged that there was no direct competitive overlap in the areas served by Comcast and Time Warner Cable, and no consumer would have seen the number of providers available to her changed by the deal.

But the FCC staff viewed this critical fact as “not outcome determinative.” Instead, Sallet explained that the staff’s opposition was based primarily on a concern that the deal might enable Comcast to harm “nascent” OVD competitors in order to protect its video (MVPD) business:

Simply put, the core concern came down to whether the merged firm would have an increased incentive and ability to safeguard its integrated Pay TV business model and video revenues by limiting the ability of OVDs to compete effectively, especially through the use of new business models.

The justification for the concern boiled down to an assumption that the addition of TWC’s subscriber base would be sufficient to render an otherwise too-costly anticompetitive campaign against OVDs worthwhile:

Without the merger, a company taking action against OVDs for the benefit of the Pay TV system as a whole would incur costs but gain additional sales — or protect existing sales — only within its footprint. But the combined entity, having a larger footprint, would internalize more of the external “benefits” provided to other industry members.

The FCC theorized that, by acquiring a larger footprint, Comcast would gain enough bargaining power and leverage, as well as the means to profit from an exclusionary strategy, leading it to employ a range of harmful tactics — such as impairing the quality/speed of OVD streams, imposing data caps, limiting OVD access to TV-connected devices, imposing higher interconnection fees, and saddling OVDs with higher programming costs. It’s difficult to see how such conduct would be permitted under the FCC’s Open Internet Order/Title II regime, but, nevertheless, the staff apparently believed that Comcast would possess a powerful “toolkit” with which to harm OVDs post-transaction.

Comcast’s share of the MVPD market wouldn’t have changed enough to justify the FCC’s purported fears

First, the analysis turned on what Comcast could and would do if it were larger. But Comcast was already the largest ISP and MVPD (now second largest MVPD, post AT&T/DIRECTV) in the nation, and presumably it has approximately the same incentives and ability to disadvantage OVDs today.

In fact, there’s no reason to believe that the growth of Comcast’s MVPD business would cause any material change in its incentives with respect to OVDs. Whatever nefarious incentives the merger allegedly would have created by increasing Comcast’s share of the MVPD market (which is where the purported benefits in the FCC staff’s anticompetitive story would be realized), those incentives would be proportional to the size of increase in Comcast’s national MVPD market share — which, here, would be about eight percentage points: from 22% to under 30% of the national market.

It’s difficult to believe that Comcast would gain the wherewithal to engage in this costly strategy by adding such a relatively small fraction of the MVPD market (which would still leave other MVPDs serving fully 70% of the market to reap the purported benefits instead of Comcast), but wouldn’t have it at its current size – and there’s no evidence that it has ever employed such strategies with its current market share.

It bears highlighting that the D.C. Circuit has already twice rejected FCC efforts to impose a 30% market cap on MVPDs, based on the Commission’s inability to demonstrate that a greater-than-30% share would create competitive problems, especially given the highly dynamic nature of the MVPD market. In vacating the FCC’s most recent effort to do so in 2009, the D.C. Circuit was resolute in its condemnation of the agency, noting:

In sum, the Commission has failed to demonstrate that allowing a cable operator to serve more than 30% of all [MVPD] subscribers would threaten to reduce either competition or diversity in programming.

The extent of competition and the amount of available programming (including original programming distributed by OVDs themselves) has increased substantially since 2009; this makes the FCC’s competitive claims even less sustainable today.

It’s damning enough to the FCC’s case that there is no marketplace evidence of such conduct or its anticompetitive effects in today’s market. But it’s truly impossible to square the FCC’s assertions about Comcast’s anticompetitive incentives with the fact that, over the past decade, Comcast has made massive investments in broadband, steadily increased broadband speeds, and freely licensed its programming, among other things that have served to enhance OVDs’ long-term viability and growth. Chalk it up to the threat of regulatory intervention or corporate incompetence if you can’t believe that competition alone could be responsible for this largesse, but, whatever the reason, the FCC staff’s fears appear completely unfounded in a marketplace not significantly different than the landscape that would have existed post-merger.

OVDs aren’t vulnerable, and don’t need the FCC’s “help”

After describing the “new entrants” in the market — such unfamiliar and powerless players as Dish, Sony, HBO, and CBS — Sallet claimed that the staff was principally animated by the understanding that

Entrants are particularly vulnerable when competition is nascent. Thus, staff was particularly concerned that this transaction could damage competition in the video distribution industry.

Sallet’s description of OVDs makes them sound like struggling entrepreneurs working in garages. But, in fact, OVDs have radically reshaped the media business and wield enormous clout in the marketplace.

Netflix, for example, describes itself as “the world’s leading Internet television network with over 65 million members in over 50 countries.” New services like Sony’s PlayStation Vue and Sling TV are affiliated with giant, well-established media conglomerates. And whatever new offerings emerge from the FCC-approved AT&T/DIRECTV merger will be as well-positioned as any in the market.

In fact, we already know that the concerns of the FCC are off-base because they are of a piece with the misguided assumptions that underlie the Chairman’s recent NPRM to rewrite the MVPD rules to “protect” just these sorts of companies. But the OVDs themselves — the ones with real money and their competitive futures on the line — don’t see the world the way the FCC does, and they’ve resolutely rejected the Chairman’s proposal. Notably, the proposed rules would “protect” these services from exactly the sort of conduct that Sallet claims would have been a consequence of the Comcast-TWC merger.

If they don’t want or need broad protection from such “harms” in the form of revised industry-wide rules, there is surely no justification for the FCC to throttle a merger based on speculation that the same conduct could conceivably arise in the future.

The realities of the broadband market post-merger wouldn’t have supported the FCC’s argument, either

While a larger Comcast might be in a position to realize more of the benefits from the exclusionary strategy Sallet described, it would also incur more of the costs — likely in direct proportion to the increased size of its subscriber base.

Think of it this way: To the extent that an MVPD can possibly constrain an OVD’s scope of distribution for programming, doing so also necessarily makes the MVPD’s own broadband offering less attractive, forcing it to incur a cost that would increase in proportion to the size of the distributor’s broadband market. In this case, as noted, Comcast would have gained MVPD subscribers — but it would have also gained broadband subscribers. In a world where cable is consistently losing video subscribers (as Sallet acknowledged), and where broadband offers higher margins and faster growth, it makes no economic sense that Comcast would have valued the trade-off the way the FCC claims it would have.
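The point is easy to formalize with a toy model. All magnitudes below are illustrative placeholders; only the 22% and 30% MVPD shares come from the discussion above. Suppose a foreclosure campaign yields an industry-wide Pay TV benefit captured in proportion to MVPD share, while imposing a broadband cost borne in proportion to broadband share.

    # Toy model of the staff's internalization theory (illustrative only).
    # A campaign against OVDs yields an industry-wide Pay TV benefit B,
    # captured in proportion to the firm's MVPD share, and a broadband
    # cost C, borne in proportion to its broadband share.
    def net_payoff(mvpd_share, broadband_share, B, C):
        return mvpd_share * B - broadband_share * C

    B = C = 100.0   # placeholder benefit/cost magnitudes

    # MVPD shares from the discussion above; broadband shares are
    # placeholders that grow roughly in step with the merger.
    print(f"Pre-merger:  {net_payoff(0.22, 0.20, B, C):+.1f}")
    print(f"Post-merger: {net_payoff(0.30, 0.28, B, C):+.1f}")

    # Because the merger scales both shares together, it leaves the sign
    # of the trade-off essentially unchanged: a larger footprint
    # internalizes more of the cost as well as more of the benefit.

If the strategy doesn’t pay at Comcast’s current size, simply scaling up both sides of the ledger is unlikely to make it pay after the merger.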

Moreover, in light of the existing conditions imposed on Comcast under the Comcast/NBCU merger order from 2011 (which last for a few more years) and the restrictions adopted in the Open Internet Order, Comcast’s ability to engage in the sort of exclusionary conduct described by Sallet would be severely limited, if not non-existent. Nor, of course, is there any guarantee that former or would-be OVD subscribers would choose to subscribe to, or pay more for, any MVPD in lieu of OVDs. Meanwhile, many of the relevant substitutes in the MVPD market (like AT&T and Verizon FiOS) also offer broadband services – thereby increasing the costs that would be incurred in the broadband market even more, as many subscribers would shift not only their MVPD, but also their broadband service, in response to Comcast degrading OVDs.

And speaking of the Open Internet Order — wasn’t that supposed to prevent ISPs like Comcast from acting on their alleged incentives to impede the quality of, or access to, edge providers like OVDs? Why is merger enforcement necessary to accomplish the same thing once Title II and the rest of the Open Internet Order are in place? And if the argument is that the Open Internet Order might be defeated, aside from the completely speculative nature of such a claim, why wouldn’t a merger condition that imposed the same constraints on Comcast – as was done in the Comcast/NBCU merger order by imposing the former net neutrality rules on Comcast – be perfectly sufficient?

While the FCC staff analysis accepted as true (again, contrary to current marketplace evidence) that a bigger Comcast would have more incentive to harm OVDs post-merger, it rejected arguments that there could be countervailing benefits to OVDs and others from this same increase in scale. Thus, things like incremental broadband investments and speed increases, a larger Wi-Fi network, and greater business services market competition – things that Comcast is already doing and would have done on a greater and more-accelerated scale in the acquired territories post-transaction – were deemed insufficient to outweigh the expected costs of the staff’s entirely speculative anticompetitive theory.

In reality, however, not only OVDs, but consumers — and especially TWC subscribers — would have benefited from the merger through access to Comcast’s faster broadband speeds, its new investments, and its superior video offerings on the X1 platform, among other things. Many low-income families would have benefited from expansion of Comcast’s Internet Essentials program, and many businesses would have benefited from the addition of a more effective competitor to the incumbent providers that currently dominate the business services market. Yet these and other verifiable benefits were given short shrift in the agency’s analysis because they “were viewed by staff as incapable of outweighing the potential harms.”

The assumptions underlying the FCC staff’s analysis of the broadband market are arbitrary and unsupportable

Sallet’s claim that the combined firm would have 60% of all high-speed broadband subscribers in the U.S. necessarily assumes a national broadband market measured at 25 Mbps or higher, which is a red herring.

The FCC has not explained why 25 Mbps is a meaningful benchmark for antitrust analysis. The FCC itself endorsed a 10 Mbps baseline for its Connect America fund last December, noting that over 70% of current broadband users subscribe to speeds less than 25 Mbps, even in areas where faster speeds are available. And streaming online video, the most oft-cited reason for needing high bandwidth, doesn’t require 25 Mbps: Netflix says that 5 Mbps is the most that’s required for an HD stream, and the same goes for Amazon (3.5 Mbps) and Hulu (1.5 Mbps).

What’s more, by choosing an arbitrary, faster speed to define the scope of the broadband market (in an effort to assert the non-competitiveness of the market, and thereby justify its broadband regulations), the agency has – without proper analysis or grounding, in my view – unjustifiably shrunk the size of the relevant market. But, as it happens, doing so also shrinks the size of the increase in “national market share” that the merger would have brought about.

Recall that the staff’s theory was premised on the idea that the merger would give Comcast control over enough of the broadband market that it could unilaterally impose costs on OVDs sufficient to impair their ability to reach or sustain minimum viable scale. But Comcast would have added only one percent of this invented “market” as a result of the merger. It strains credulity to assert that there could be any transaction-specific harm from an increase in market share equivalent to a rounding error.

In any case, basing its rejection of the merger on a manufactured 25 Mbps relevant market creates perverse incentives and will likely do far more to harm OVDs than realization of even the staff’s worst fears about the merger ever could have.

The FCC says it wants higher speeds, and it wants firms to invest in faster broadband. But here Comcast did just that, and then was punished for it. Rather than acknowledging Comcast’s ongoing broadband investments as strong indication that the FCC staff’s analysis might be on the wrong track, the FCC leadership simply sidestepped that inconvenient truth by redefining the market.

The lesson is that if you make your product too good, you’ll end up with an impermissibly high share of the market you create and be punished for it. This can’t possibly promote the public interest.

Furthermore, the staff’s analysis of competitive effects even in this ersatz market isn’t likely supportable. As noted, most subscribers access OVDs on connections that deliver content at speeds well below the invented 25 Mbps benchmark, and they pay the same prices for OVD subscriptions as subscribers who receive their content at 25 Mbps. Confronted with the choice to consume content at 25 Mbps or 10 Mbps (or less), the majority of consumers voluntarily opt for slower speeds — and they purchase service from Netflix and other OVDs in droves, nonetheless.

The upshot? Contrary to the implications on which the staff’s analysis rests, if Comcast were to somehow “degrade” OVD content on the 25 Mbps networks so that it was delivered with characteristics of video content delivered over a 10-Mbps network, real-world, observed consumer preferences suggest it wouldn’t harm OVDs’ access to consumers at all. This is especially true given that OVDs often have a global focus and reach (again, Netflix has 65 million subscribers in over 50 countries), making any claims that Comcast could successfully foreclose them from the relevant market even more suspect.

At the same time, while the staff apparently viewed the broadband alternatives as “limited,” the reality is that Comcast, as well as other broadband providers, are surrounded by capable competitors, including, among others, AT&T, Verizon, CenturyLink, Google Fiber, many advanced VDSL and fiber-based Internet service providers, and high-speed mobile wireless providers. The FCC understated the complex impact of this robust, dynamic, and ever-increasing competition, and its analysis entirely ignored rapidly growing mobile wireless broadband competition.

Finally, as noted, Sallet claimed that the staff determined that merger conditions would be insufficient to remedy its concerns, without any further explanation. Yet the Commission identified similar concerns about OVDs in both the Comcast/NBCUniversal and AT&T/DIRECTV transactions, and adopted remedies to address those concerns. We know the agency is capable of drafting behavioral conditions, and we know they have teeth, as demonstrated by prior FCC enforcement actions. It’s hard to understand why similar, adequate conditions could not have been fashioned for this transaction.

In the end, while I appreciate Sallet’s attempt to explain the FCC’s decision to reject the Comcast/TWC merger, based on the foregoing I’m not sure that Comcast could have made any argument or showing that would have dissuaded the FCC from challenging the merger. Comcast presented a strong economic analysis answering the staff’s concerns discussed above, all to no avail. It’s difficult to escape the conclusion that this was a politically driven result, and not one rigorously based on the facts or marketplace reality.