
[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Geoffrey A. Manne (President, ICLE; Distinguished Fellow, Northwestern University Center on Law, Business, and Economics).]

There has been much (admittedly important) discussion of the economic woes of mass quarantine to thwart the spread and “flatten the curve” of the virus and its health burdens — as well as some extremely interesting discussion of the long-term health woes of quarantine and the resulting economic downturn: see, e.g., previous work by Christopher Ruhm suggesting mortality rates may improve during economic downturns, and this thread on how that might play out differently in the current health crisis.

But there is perhaps insufficient attention being paid to the more immediate problem of medical resource scarcity to treat large, localized populations of acutely sick people — something that will remain a problem for some time in places like New York, no matter how successful we are at flattening the curve. 

Yet the fact that we may have failed to prepare adequately for the current emergency does not mean that we can’t improve our response to it and build up our capacity to respond to subsequent emergencies — both future, localized outbreaks of COVID-19 and other medical emergencies more broadly.

In what follows I sketch the outlines of a proposal for an OPTN (Organ Procurement and Transplantation Network) analogue for allocating emergency medical resources. To make the idea more concrete (and because there is no doubt a limit to the types of medical resources for which such a program would be useful or necessary), let’s call it the VPAN — Ventilator Procurement and Allocation Network.

As quickly as possible in order to address the current crisis — and certainly with enough speed to address the next one — we should develop a program to collect the relevant data and enable deployment of medical resources where they are most needed, using such data, wherever possible, to deploy resources before shortages become the enormous problem they are today.

Data and information are important tools for mitigating emergencies

Hal’s post, especially in combination with Julian’s, offers a really useful suggestion for using modern information technology to help solve one of the biggest problems of the current crisis: how to return to economic activity (and a semblance of normalcy) as quickly as possible.

What I like most about his idea (and, again, Julian’s) is its incremental approach: We don’t have to wait until it’s safe for everyone to come outside in order for some people to do so. And, properly collected, assessed, and deployed, information is a key part of making that possible for more and more people every day.

Here I want to build on Hal’s idea to suggest another — perhaps even more immediately crucial — use of data to alleviate the COVID-19 crisis: The allocation of scarce medical resources.

In the current crisis, the “what” of this data is apparent: it is the testing data described by Julian in his post, and implemented in digital form by Hal in his. Thus, whereas Hal’s proposal contemplates using this data solely to allow proprietors (public transportation, restaurants, etc.) to decide whom to admit, my proposal contemplates something more expansive: the provision of Hal’s test-verification vendors’ data to a centralized database in order to assess current medical resource needs and to predict future needs.

The apparent ventilator availability crisis

As I have learned at great length from a friend whose spouse is an ICU doctor on the front lines, the current ventilator scarcity in New York City is worrisome (from a personal email, edited slightly for clarity):

When doctors talk about overwhelming a medical system, and talk about making life/death decisions, often they are talking about ventilators. A ventilator costs somewhere between $25K to $50K. Not cheap, but not crazy expensive. Most of the time these go unused, so hospitals have not stocked up on them, even in first-rate medical systems. Certainly not in the US, where equipment has to get used or the hospital does not get reimbursed for the purchase.

With a bad case of this virus you can put somebody — the sickest of the sickest — on one of those for three days and many of them don’t die. That frames a brutal capacity issue in a local area. And that is what has happened in Italy. They did not have enough ventilators in specific cities where the cases spiked. The mortality rates were much higher solely due to lack of these machines. Doctors had to choose who got on the machine and who did not. When you read these stories about a choice of life and death, that could be one reason for it.

Now the brutal part: This is what NYC might face soon. Faster than expected, by the way. Maybe they will ship patients to hospitals in other parts of NY state, and in NJ and CT. Maybe they can send them to the V.A. hospitals. Those are the options for how they hope to avoid this particular capacity issue. Maybe they will flatten the curve just enough with all the social distancing. Hard to know just now. But right now the doctors are pretty scared, and they are planning for the worst.

A recent PBS Report describes the current ventilator situation in the US:

A 2018 analysis from the Johns Hopkins University Center for Health Security estimated we have around 160,000 ventilators in the U.S. If the “worst-case scenario” were to come to pass in the U.S., “there might not be” enough ventilators, Dr. Anthony Fauci, director of the National Institute of Allergy and Infectious Diseases, told CNN on March 15.

“If you don’t have enough ventilators, that means [obviously] that people who need it will not be able to get it,” Fauci said. He stressed that it was most important to mitigate the virus’ spread before it could overwhelm American health infrastructure.

Reports say that the American Hospital Association believes almost 1 million COVID-19 patients in the country will require a ventilator. Not every patient will require ventilation at the same time, but the numbers are still concerning. Dr. Daniel Horn, a physician at Massachusetts General Hospital in Boston, warned in a March 22 editorial in The New York Times that “There simply will not be enough of these machines, especially in major cities.”

The recent report of 9,000 COVID-19-related deaths in Italy brings the ventilator scarcity crisis into stark relief: There is little doubt that a substantial number of these deaths stem from the unavailability of key medical resources, including, most importantly, ventilators.  

Medical resource scarcity in the current crisis is a drastic problem. And without significant efforts to ameliorate it, it is likely to get worse before it gets better.

Using data to allocate scarce resources: The basic outlines of a proposed “Ventilator Procurement and Allocation Network”

But that doesn’t mean that the scarce resources we do have can’t be better allocated. As the PBS story quoted above notes, there are some 160,000 ventilators in the US. While that may not be enough in the aggregate, it’s considerably more than are currently needed in, say, New York City — and a great number of them are surely not currently in use, nor likely to be needed imminently.

The basic outline of the idea for redistributing these resources is fairly simple: 

  1. First, register all of the US’s existing ventilators in a centralized database. 
  2. Second (using a system like the one Hal describes), collect and update in real time the relevant test results, contact tracing, demographic, and other epidemiological data and input it into a database.
  3. Third, analyze this data using one or more compartmental models (or more targeted, virus-specific models) — (NB: I am the furthest thing from an epidemiologist, so I make no claims about how best to do this; the link above, e.g., is merely meant to be illustrative and not a recommendation) — to predict the demand for ventilators at various geographic levels, ranging from specific hospitals to counties or states. In much the same way, allocation of organs in the OPTN is based on a set of “allocation calculators” (which in turn are intended to implement the “Final Rule” adopted by HHS to govern transplant organ allocation decisions).   
  4. Fourth, ask facilities in low-expected-demand areas to send their unused (or excess above the level required to address “normal” demand) ventilators to those in high-expected-demand areas, with the expectation that they will be consistently reallocated across all hospitals and emergency care facilities according to the agreed-upon criteria. Of course, the allocation “algorithm” would be more complicated than this (as is the HHS Final Rule for organ allocation). But in principle this would be the primary basis for allocation. (A stylized sketch of how steps 3 and 4 might work appears just below this list.)
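To make steps 3 and 4 a bit more concrete, here is a minimal sketch in Python. It is purely illustrative: the regions, case counts, transmission parameters, and ventilation rates below are hypothetical numbers invented for exposition, the crude SIR-style projection stands in for whatever models actual epidemiologists would use, and a real allocation algorithm would be far more sophisticated (as the HHS Final Rule is for organs). The point is only to show how a ventilator registry plus a demand forecast could mechanically generate reallocation recommendations.

```python
# Illustrative only: hypothetical regions, parameters, and a toy SIR-style
# projection. Not an epidemiological model or a real allocation policy.

from dataclasses import dataclass


@dataclass
class Region:
    name: str
    population: int
    infected: int        # current active infections (hypothetical)
    ventilators: int     # ventilators registered in the region
    baseline_need: int   # units reserved for "normal" (non-epidemic) demand


def project_peak_vent_demand(region, beta=0.25, gamma=0.1, days=30,
                             vent_rate=0.02):
    """Project peak ventilator demand with a crude daily-step SIR model.

    beta/gamma are illustrative transmission/recovery rates; vent_rate is the
    assumed share of active infections needing ventilation at any one time.
    """
    s = (region.population - region.infected) / region.population
    i = region.infected / region.population
    peak_i = i
    for _ in range(days):
        new_inf = beta * s * i
        recovered = gamma * i
        s -= new_inf
        i += new_inf - recovered
        peak_i = max(peak_i, i)
    return int(peak_i * region.population * vent_rate)


def reallocate(regions):
    """Greedily match projected surpluses to the worst projected shortfalls."""
    surplus, deficit = [], []
    for r in regions:
        need = project_peak_vent_demand(r) + r.baseline_need
        gap = r.ventilators - need
        (surplus if gap > 0 else deficit).append([r.name, gap])
    deficit.sort(key=lambda entry: entry[1])  # most severe shortfall first
    transfers = []
    for d in deficit:
        for s in surplus:
            if d[1] >= 0:
                break
            move = min(s[1], -d[1])
            if move > 0:
                transfers.append((s[0], d[0], move))
                s[1] -= move
                d[1] += move
    return transfers


if __name__ == "__main__":
    regions = [
        Region("NYC", 8_400_000, 40_000, 5_000, 1_500),
        Region("Upstate", 3_000_000, 1_000, 2_500, 600),
        Region("Rural West", 1_000_000, 100, 800, 150),
    ]
    for origin, destination, units in reallocate(regions):
        print(f"Ship {units} ventilators from {origin} to {destination}")
```

In practice, of course, the “need” figure would come from the real epidemiological models contemplated in step 3, and the transfer logic in step 4 would have to respect transportation constraints, clinical priorities, and the reserve each facility must retain for its own baseline demand.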

Not surprisingly, some guidelines for the allocation of ventilators in such emergencies already exist — like New York’s Ventilator Allocation Guidelines for triaging ventilators during an influenza pandemic. But such guidelines address the protocols for each facility to use in determining how to allocate its own scarce resources; they do not contemplate the ability to alleviate shortages in the first place by redistributing ventilators across facilities (or cities, states, etc.).

I believe that such a system — like the OPTN — could largely work on a voluntary basis. Of course, I’m quick to point out that the OPTN is a function of a massive involuntary and distortionary constraint: the illegality of organ sales. But I suspect that a crisis like the one we’re currently facing engenders much the same sort of shortage (as if such a constraint were in place with respect to ventilators), and thus that a similar system would be similarly useful. If not, it’s possible that the government could, in emergency situations, actually commandeer privately owned ventilators in order to effectuate the system. I leave for another day the consideration of the merits and defects of such a regime.

Of course, the system need not rely on goodwill alone. There could be any number of feasible means of inducing hospitals that have unused ventilators to put their surpluses into the allocation network, presumably involving some sort of cash or other compensation. Or perhaps, if and when such a system were expanded to include other medical resources, it might involve moving donor hospitals up the queue for some other scarce resource they need that doesn’t face a current shortage. Surely there must be equipment that a New York City hospital has in relative surplus that a small-town hospital covets.

But the key point is this: It doesn’t make sense to produce and purchase enough ventilators so that every hospital in the country can simultaneously address extremely rare peak demands. Doing so would be extraordinarily — and almost always needlessly — expensive. And emergency preparedness is never about ensuring that there are no shortages in the worst-case scenario; it’s about making a minimax calculation (as odious as those are) — i.e., minimizing the maximal cost/risk, not eliminating risk entirely. (For a literature review of emergency logistics in the context of large-scale disasters, see, e.g., here.)
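To put that slightly more formally (this is my own stylized gloss, not a formula drawn from the literature linked above), the preparedness problem is something like choosing a stockpile level $x$ to solve

$$\min_{x \ge 0} \; \max_{s \in S} \; \big[\, c \cdot x + L(x, s) \,\big],$$

where $c$ is the cost of acquiring and maintaining each unit of capacity, $S$ is the set of plausible emergency scenarios, and $L(x, s)$ is the health and economic loss suffered in scenario $s$ given capacity $x$. Because driving $L$ to zero in every scenario would require an enormous $x$, the optimum generally tolerates some residual risk in the worst scenarios; the goal is to keep the worst outcome manageable, not to eliminate it.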

Nor does it make sense — as a policy matter — to allocate the new ventilators that will be produced in response to current demand solely on the basis of current demand. The epidemiological externalities of the current pandemic are substantial, and there is little reason to think that currently over-taxed emergency facilities — or even those preparing for their own expected demand — will make procurement decisions that reflect the optimal national (let alone global) allocation of such resources. A system like the one I outline here would effectively convert private, constrained decisions into the broader, coordinated allocation of scarce resources that these epidemiological externalities demand.

Indeed — and importantly — such a program allows the government to supplement existing and future public and private procurement decisions to ensure an overall optimal level of supply (and, of course, government-owned ventilators — 10,000 of which already exist in the Strategic National Stockpile — would similarly be put into the registry and deployed using the same criteria). Meanwhile, it would allow private facilities to confront emergency scenarios like the current one with far more resources than it would ever make sense for any given facility to have on hand in normal times.

Some caveats

There are, as always, caveats. First, such a program relies on the continued, effective functioning of transportation networks. If any given emergency were to disrupt these — and surely some would — the program would not necessarily function as planned. Of course, some of this can be mitigated by caching emergency equipment in key locations, and, over the course of an emergency, regularly redistributing those caches to facilitate expected deployments as the relevant data comes in. But, to be sure, at the end of the day such a program depends on the ability to transport ventilators.

In addition, there will always be the risk that emergency needs swamp even the aggregate available resources simultaneously (as may yet occur during the current crisis). But at the limit there is nothing that can be done about such an eventuality: Short of having enough ventilators on hand so that every needy person in the country can use one essentially simultaneously, there will always be the possibility that some level of demand will outpace our resources. But even in such a situation — where allocation of resources is collectively guided by epidemiological (or, in the case of other emergencies, other relevant) criteria — the system will work to mitigate the likely overburdening of resources, and ensure that overall resource allocation is guided by medically relevant criteria, rather than merely the happenstance of geography, budget constraints, storage space, or the like.     

Finally, no doubt a host of existing regulations make such a program difficult or impossible. Obviously, these should be rescinded. One set of policy concerns, however, deserves direct attention: privacy. There is an inherent conflict between strong data privacy, in which decisions about the sharing of information belong to each individual, and the data required to combat an epidemic, where each person’s privately optimal level of data sharing may result in a socially sub-optimal level of shared data. To the extent that HIPAA or other privacy regulations would stand in the way of a program like this, it seems singularly important to relax them. Much of the relevant data cannot be efficiently collected on an opt-in basis (as is easily done, by contrast, for the OPTN). Certainly appropriate safeguards should be put in place (particularly with respect to the ability of government agencies and law enforcement to access the data). But an individual’s idiosyncratic desire to constrain the sharing of personal data in this context seems manifestly less important than the benefits of, at the very least, a default rule that the relevant data be shared for these purposes.

Appropriate standards for emergency preparedness policy generally

Importantly, such a plan would have broader applicability beyond ventilators and the current crisis. And this is a key aspect of addressing the problem: avoiding a myopic focus on the current emergency in lieu of a more clear-eyed emergency preparedness plan.

It’s important to be thinking not only about the current crisis but also about the next emergency. But it’s equally important not to let political point-scoring and a bias in favor of focusing on the seen over the unseen coopt any such efforts. A proper assessment entails the following considerations (surely among others) (and hat tip to Ron Cass for bringing to my attention most of the following insights):

  1. Arguably we are overweighting health and safety concerns with respect to COVID-19 compared to our assessments in other areas (such as ordinary flu (on which see this informative thread by Anup Malani), highway safety, heart & coronary artery diseases, etc.). That’s inevitable when one particular concern is currently so omnipresent and so disruptive. But it is important that we not let our preparations for future problems focus myopically on this cause, because the next crisis may be something entirely different. 
  2. Nor is it reasonable to expect that we would ever have been (or be in the future) fully prepared for a global pandemic. It may not be an “unknown unknown,” but it is impossible to prepare for all possible contingencies, and simply not sensible to prepare fully for such rare and difficult-to-predict events.
  3. That said, we also shouldn’t be surprised that we’re seeing more frequent global pandemics (a function of broader globalization), and there’s little reason to think that we won’t continue to do so. It makes sense to be optimally prepared for such eventualities, and if this one has shown us anything, it’s that our ability to allocate medical resources that are made suddenly scarce by a widespread emergency is insufficient. 
  4. But rather than overreact to such crises — which is difficult, given that overreaction typically aligns with the private incentives of key decision makers, the media, and many in the “chattering class” — we should take a broader, more public-focused view of our response. Moreover, political and bureaucratic incentives not only produce overreactions to visible crises, they also undermine the appropriate preparation for such crises in the future.
  5. Thus, we should create programs that identify and mobilize generically useful emergency equipment not likely to be made obsolete within a short period and likely to be needed whatever the source of the next emergency. In other words, we should continue to focus the bulk of our preparedness on things like quickly deployable ICU facilities, ventilators, and clean blood supplies — not, as we may be wrongly inclined to do given the salience of the current crisis, primarily on specially targeted drugs and test kits. Our predictive capacity for our future demand of more narrowly useful products is too poor to justify substantial investment.
  6. Given the relative likelihood of another pandemic, generic preparedness certainly includes the ability to inhibit overly fast spread of a disease that can clog critical health care facilities. This isn’t disease-specific (or, that is, while the specific rate and contours of infection are specific to each disease, relatively fast and widespread contagion is what causes any such disease to overtax our medical resources, so if we’re preparing for a future virus-related emergency, we’re necessarily preparing for a disease that spreads quickly and widely).

Because the next emergency isn’t necessarily going to be — and perhaps isn’t even likely to be — a pandemic, our preparedness should not be limited to pandemic preparedness. This means, as noted above, overcoming the political and other incentives to focus myopically on the current problem even when nominally preparing for the next one. But doing so is difficult, and requires considerable political will and leadership. It’s hard to conceive of our current federal leadership being up to the task, but it’s certainly not the case that our current problems are entirely the makings of this administration. All governments spend too much time and attention solving — and regulating — the most visible problems, whether doing so is socially optimal or not.   

Thus, in addition to (1) providing for the efficient and effective use of data to allocate emergency medical resources (e.g., as described above), and (2) ensuring that our preparedness centers primarily on generically useful emergency equipment, our overall response should also (3) recognize and correct the way current regulatory regimes also overweight visible adverse health effects and inhibit competition and adaptation by industry and those utilizing health services, and (4) make sure that the economic and health consequences of emergency and regulatory programs (such as the current quarantine) are fully justified and optimized.

A proposal like the one I outline above would, I believe, be consistent with these considerations and enable more effective medical crisis response in general.

The 2020 Draft Joint Vertical Merger Guidelines:

What’s in, what’s out — and do we need them anyway?

February 6 & 7, 2020

Welcome! We’re delighted to kick off our two-day blog symposium on the recently released Draft Joint Vertical Merger Guidelines from the DOJ Antitrust Division and the Federal Trade Commission. 

If adopted by the agencies, the guidelines would mark the first time since 1984 that U.S. federal antitrust enforcers have provided official, public guidance on their approach to the increasingly important issue of vertical merger enforcement. 

As previously noted, the release of the draft guidelines was controversial from the outset: The FTC vote to issue the draft was mixed, with a dissent from Commissioner Slaughter, an abstention from Commissioner Chopra, and a concurring statement from Commissioner Wilson.

As the antitrust community gears up to debate the draft guidelines, we have assembled an outstanding group of antitrust experts to weigh in with their initial thoughts on the guidelines here at Truth on the Market. We hope this symposium will provide important insights and stand as a useful resource for the ongoing discussion.

The scholars and practitioners who will participate in the symposium are:

  • Timothy J. Brennan (Professor, Public Policy and Economics, University of Maryland; former Chief Economist, FCC; former economist, DOJ Antitrust Division)
  • Steven Cernak (Partner, Bona Law PC; former antitrust counsel, GM)
  • Eric Fruits (Chief Economist, ICLE; Professor of Economics, Portland State University)
  • Herbert Hovenkamp (James G. Dinan University Professor of Law, University of Pennsylvania)
  • Jonathan M. Jacobson (Partner, Wilson Sonsini Goodrich & Rosati) and Kenneth Edelson (Associate, Wilson Sonsini Goodrich & Rosati)
  • William J. Kolasky (Partner, Hughes Hubbard & Reed; former Deputy Assistant Attorney General, DOJ Antitrust Division) and Philip A. Giordano (Partner, Hughes Hubbard & Reed LLP)
  • Geoffrey A. Manne (President & Founder, ICLE; Distinguished Fellow, Northwestern University Center on Law, Business, and Economics) and Kristian Stout (Associate Director, ICLE)
  • Jonathan E. Nuechterlein (Partner, Sidley Austin LLP; former General Counsel, FTC; former Deputy General Counsel, FCC)
  • Sharis A. Pozen (Partner, Clifford Chance; former Vice President of Global Competition Law and Policy, GE; former Acting Assistant Attorney General, DOJ Antitrust Division), Timothy Cornell (Partner, Clifford Chance), Brian Concklin (Counsel, Clifford Chance), and Michael Van Arsdall (Counsel, Clifford Chance)
  • Jan Rybnicek (Counsel, Freshfields Bruckhaus Deringer; former attorney adviser to Commissioner Joshua D. Wright, FTC)
  • Steven C. Salop (tent.) (Professor of Economics and Law, Georgetown University; former Associate Director, FTC Bureau of Economics)
  • Scott A. Sher (Partner, Wilson Sonsini Goodrich & Rosati) and Matthew McDonald (Associate, Wilson Sonsini Goodrich & Rosati)
  • Margaret Slade (Professor Emeritus, Vancouver School of Economics, University of British Columbia)
  • Gregory Werden (former Senior Economic Counsel, DOJ Antitrust Division) and Luke M. Froeb (William C. Oehmig Chair in Free Enterprise and Entrepreneurship, Vanderbilt University; former Chief Economist, DOJ Antitrust Division; former Chief Economist, FTC)
  • Lawrence J. White (Robert Kavesh Professor of Economics, New York University; former Chief Economist, DOJ Antitrust Division)
  • Joshua D. Wright (University Professor of Law, George Mason University; former Commissioner, FTC), Douglas H. Ginsburg (Senior Circuit Judge, US Court of Appeals for the DC Circuit; Professor of Law, George Mason University; former Assistant Attorney General, DOJ Antitrust Division), Tad Lipsky (Assistant Professor of Law, George Mason University; former Acting Director, FTC Bureau of Competition; former chief antitrust counsel, Coca-Cola; former Deputy Assistant Attorney General, DOJ Antitrust Division), and John M. Yun (Associate Professor of Law, George Mason University; former Acting Deputy Assistant Director, FTC Bureau of Economics)

The first of the participants’ initial posts will appear momentarily, with additional posts appearing throughout the day today and tomorrow. We hope to generate a lively discussion, and expect some of the participants to offer follow up posts and/or comments on their fellow participants’ posts — please be sure to check back throughout the day and be sure to check the comments. We hope our readers will join us in the comments, as well.

Once again, welcome!

Truth on the Market is pleased to announce its next blog symposium:

The 2020 Draft Joint Vertical Merger Guidelines: What’s in, what’s out — and do we need them anyway?

February 6 & 7, 2020

Symposium background

On January 10, 2020, the DOJ Antitrust Division and the Federal Trade Commission released Draft Joint Vertical Merger Guidelines for public comment. If adopted by the agencies, the guidelines would mark the first time since 1984 that U.S. federal antitrust enforcers have provided official, public guidance on their approach to the increasingly important issue of vertical merger enforcement: 

“Challenging anticompetitive vertical mergers is essential to vigorous enforcement. The agencies’ vertical merger policy has evolved substantially since the issuance of the 1984 Non-Horizontal Merger Guidelines, and our guidelines should reflect the current enforcement approach. Greater transparency about the complex issues surrounding vertical mergers will benefit the business community, practitioners, and the courts,” said FTC Chairman Joseph J. Simons.

As evidenced by FTC Commissioner Slaughter’s dissent and FTC Commissioner Chopra’s abstention from the FTC’s vote to issue the draft guidelines, the topic is a contentious one. Similarly, as FTC Commissioner Wilson noted in her concurring statement, the recent FTC hearing on vertical mergers demonstrated that there is a vigorous dispute over what new guidelines should look like (or even if the 1984 Non-Horizontal Guidelines should be updated at all).

The agencies have announced two upcoming workshops to discuss the draft guidelines and have extended the comment period on the draft until February 26.

In advance of the workshops and the imminent discussions over the draft guidelines, we have asked a number of antitrust experts to weigh in here at Truth on the Market: to preview the coming debate by exploring the economic underpinnings of the draft guidelines and their likely role in the future of merger enforcement at the agencies, as well as what is in the guidelines and — perhaps more important — what is left out.  

Beginning the morning of Thursday, February 6, and continuing during business hours through Friday, February 7, Truth on the Market (TOTM) and the International Center for Law & Economics (ICLE) will host a blog symposium on the draft guidelines. 

Symposium participants

As in the past (see examples of previous TOTM blog symposia here), we’ve lined up an outstanding and diverse group of scholars to discuss these issues, including:

  • Timothy J. Brennan (Professor, Public Policy and Economics, University of Maryland; former Chief Economist, FCC; former economist, DOJ Antitrust Division)
  • Steven Cernak (Partner, Bona Law PC; former antitrust counsel, GM)
  • Luke M. Froeb (William C. Oehmig Chair in Free Enterprise and Entrepreneurship, Vanderbilt University; former Chief Economist, DOJ Antitrust Division; former Chief Economist, FTC)
  • Eric Fruits (Chief Economist, ICLE; Professor of Economics, Portland State University)
  • Douglas H. Ginsburg (Senior Circuit Judge, US Court of Appeals for the DC Circuit; Professor of Law, George Mason University; former Assistant Attorney General, DOJ Antitrust Division)
  • Herbert Hovenkamp (James G. Dinan University Professor of Law, University of Pennsylvania)
  • Jonathan M. Jacobson (Partner, Wilson Sonsini Goodrich & Rosati)
  • William J. Kolasky (Partner, Hughes Hubbard & Reed; former Deputy Assistant Attorney General, DOJ Antitrust Division)
  • Tad Lipsky (Assistant Professor of Law, George Mason University; former Acting Director, FTC Bureau of Competition; former chief antitrust counsel, Coca-Cola; former Deputy Assistant Attorney General, DOJ Antitrust Division) 
  • Geoffrey A. Manne (President & Founder, ICLE; Distinguished Fellow, Northwestern University Center on Law, Business, and Economics)
  • Jonathan E. Nuechterlein (Partner, Sidley Austin LLP; former General Counsel, FTC; former Deputy General Counsel, FCC)
  • Sharis A. Pozen (Partner, Clifford Chance; former Vice President of Global Competition Law and Policy, GE; former Acting Assistant Attorney General, DOJ Antitrust Division) 
  • Jan Rybnicek (Counsel, Freshfields Bruckhaus Deringer; former attorney adviser to Commissioner Joshua D. Wright, FTC)
  • Steven C. Salop (tent.) (Professor of Economics and Law, Georgetown University; former Associate Director, FTC Bureau of Economics)
  • Scott A. Sher (Partner, Wilson Sonsini Goodrich & Rosati)
  • Margaret Slade (Professor Emeritus, Vancouver School of Economics, University of British Columbia)
  • Kristian Stout (Associate Director, ICLE)
  • Gregory Werden (former Senior Economic Counsel, DOJ Antitrust Division)
  • Lawrence J. White (Robert Kavesh Professor of Economics, New York University; former Chief Economist, DOJ Antitrust Division)
  • Joshua D. Wright (University Professor of Law, George Mason University; former Commissioner, FTC)
  • John M. Yun (Associate Professor of Law, George Mason University; former Acting Deputy Assistant Director, FTC Bureau of Economics)

We want to thank all of these excellent panelists for agreeing to take time away from their busy schedules to participate in this symposium. We are hopeful that this discussion will provide invaluable insight and perspective on the Draft Joint Vertical Merger Guidelines.

Look for the first posts starting Thursday, February 6!

An oft-repeated claim of conferences, media, and left-wing think tanks is that lax antitrust enforcement has led to a substantial increase in concentration in the US economy of late, strangling the economy, harming workers, and saddling consumers with greater markups in the process. But what if rising concentration (and the current level of antitrust enforcement) were an indication of more competition, not less?

By now the concentration-as-antitrust-bogeyman story is virtually conventional wisdom, echoed, of course, by political candidates such as Elizabeth Warren trying to cash in on the need for a government response to such dire circumstances:

In industry after industry — airlines, banking, health care, agriculture, tech — a handful of corporate giants control more and more. The big guys are locking out smaller, newer competitors. They are crushing innovation. Even if you don’t see the gears turning, this massive concentration means prices go up and quality goes down for everything from air travel to internet service.  

But the claim that lax antitrust enforcement has led to increased concentration in the US, and that this concentration has caused economic harm, has been debunked several times (for some of our own debunking, see Eric Fruits’ posts here, here, and here). Or, more charitably to those who tirelessly repeat the claim as if it were “settled science,” it has been significantly called into question.

Most recently, several working papers that look at the concentration data in detail and attempt to identify the likely cause of the observed trends show precisely the opposite relationship. The reason for increased concentration appears to be technological, not anticompetitive. And, as might be expected from that cause, its effects are beneficial. Indeed, the story is both intuitive and positive.

What’s more, while national concentration does appear to be increasing in some sectors of the economy, it’s not actually so clear that the same is true for local concentration — which is often the relevant antitrust market.

The most recent — and, I believe, most significant — corrective to the conventional story comes from economists Chang-Tai Hsieh of the University of Chicago and Esteban Rossi-Hansberg of Princeton University. As they write in a recent paper titled, “The Industrial Revolution in Services”: 

We show that new technologies have enabled firms that adopt them to scale production over a large number of establishments dispersed across space. Firms that adopt this technology grow by increasing the number of local markets that they serve, but on average are smaller in the markets that they do serve. Unlike Henry Ford’s revolution in manufacturing more than a hundred years ago when manufacturing firms grew by concentrating production in a given location, the new industrial revolution in non-traded sectors takes the form of horizontal expansion across more locations. At the same time, multi-product firms are forced to exit industries where their productivity is low or where the new technology has had no effect. Empirically we see that top firms in the overall economy are more focused and have larger market shares in their chosen sectors, but their size as a share of employment in the overall economy has not changed. (pp. 42-43) (emphasis added).

This makes perfect sense. And it has the benefit of not second-guessing structural changes made in response to technological change. Rather, it points to technological change as doing what it regularly does: improving productivity.

The implementation of new technology seems to be conferring benefits — it’s just that these benefits are not evenly distributed across all firms and industries. But the assumption that larger firms are causing harm (or even that there is any harm in the first place, whatever the cause) is unmerited. 

What the authors find is that the apparent rise in national concentration doesn’t tell the relevant story, and the data certainly aren’t consistent with assumptions that anticompetitive conduct is either a cause or a result of structural changes in the economy.

Hsieh and Rossi-Hansberg point out that increased concentration is not happening everywhere, but is being driven by just three industries:

First, we show that the phenomena of rising concentration . . . is only seen in three broad sectors – services, wholesale, and retail. . . . [T]op firms have become more efficient over time, but our evidence indicates that this is only true for top firms in these three sectors. In manufacturing, for example, concentration has fallen.

Second, rising concentration in these sectors is entirely driven by an increase [in] the number of local markets served by the top firms. (p. 4) (emphasis added).

These findings are a gloss on a (then) working paper — The Fall of the Labor Share and the Rise of Superstar Firms — by David Autor, David Dorn, Lawrence F. Katz, Christina Patterson, and John Van Reenen (now forthcoming in the QJE). Autor et al. (2019) finds that concentration is rising, and that it is the result of increased productivity:

If globalization or technological changes push sales towards the most productive firms in each industry, product market concentration will rise as industries become increasingly dominated by superstar firms, which have high markups and a low labor share of value-added.

We empirically assess seven predictions of this hypothesis: (i) industry sales will increasingly concentrate in a small number of firms; (ii) industries where concentration rises most will have the largest declines in the labor share; (iii) the fall in the labor share will be driven largely by reallocation rather than a fall in the unweighted mean labor share across all firms; (iv) the between-firm reallocation component of the fall in the labor share will be greatest in the sectors with the largest increases in market concentration; (v) the industries that are becoming more concentrated will exhibit faster growth of productivity; (vi) the aggregate markup will rise more than the typical firm’s markup; and (vii) these patterns should be observed not only in U.S. firms, but also internationally. We find support for all of these predictions. (emphasis added).

This alone is quite important (and seemingly often overlooked). Autor et al. (2019) finds that rising concentration is a result of increased productivity that weeds out less-efficient producers. This is a good thing.

But Hsieh & Rossi-Hansberg drill down into the data to find something perhaps even more significant: the rise in concentration itself is limited to just a few sectors, and, where it is observed, it is predominantly a function of more efficient firms competing in more — and more localized — markets. This means that competition is increasing, not decreasing, whether it is accompanied by an increase in concentration or not. 

No matter how many times and under how many monikers the antitrust populists try to revive it, the Structure-Conduct-Performance paradigm remains as moribund as ever. Indeed, on this point, as one of the new antitrust agonists’ own, Fiona Scott Morton, has written (along with co-authors Martin Gaynor and Steven Berry):

In short, there is no well-defined “causal effect of concentration on price,” but rather a set of hypotheses that can explain observed correlations of the joint outcomes of price, measured markups, market share, and concentration. As Bresnahan (1989) argued three decades ago, no clear interpretation of the impact of concentration is possible without a clear focus on equilibrium oligopoly demand and “supply,” where supply includes the list of the marginal cost functions of the firms and the nature of oligopoly competition. 

Some of the recent literature on concentration, profits, and markups has simply reasserted the relevance of the old-style structure-conduct-performance correlations. For economists trained in subfields outside industrial organization, such correlations can be attractive. 

Our own view, based on the well-established mainstream wisdom in the field of industrial organization for several decades, is that regressions of market outcomes on measures of industry structure like the Herfindahl-Hirschman Index should be given little weight in policy debates. Such correlations will not produce information about the causal estimates that policy demands. It is these causal relationships that will help us understand what, if anything, may be causing markups to rise. (emphasis added).

Indeed! And one reason for the enduring irrelevance of market concentration measures is well laid out in Hsieh and Rossi-Hansberg’s paper:

This evidence is consistent with our view that increasing concentration is driven by new ICT-enabled technologies that ultimately raise aggregate industry TFP. It is not consistent with the view that concentration is due to declining competition or entry barriers . . . , as these forces will result in a decline in industry employment. (pp. 4-5) (emphasis added)

The net effect is that there is essentially no change in concentration by the top firms in the economy as a whole. The “super-star” firms of today’s economy are larger in their chosen sectors and have unleashed productivity growth in these sectors, but they are not any larger as a share of the aggregate economy. (p. 5) (emphasis added)

Thus, to begin with, the claim that increased concentration leads to monopsony in labor markets (and thus unemployment) appears to be false. Hsieh and Rossi-Hansberg again:

[W]e find that total employment rises substantially in industries with rising concentration. This is true even when we look at total employment of the smaller firms in these industries. (p. 4)

[S]ectors with more top firm concentration are the ones where total industry employment (as a share of aggregate employment) has also grown. The employment share of industries with increased top firm concentration grew from 70% in 1977 to 85% in 2013. (p. 9)

Firms throughout the size distribution increase employment in sectors with increasing concentration, not only the top 10% firms in the industry, although by definition the increase is larger among the top firms. (p. 10) (emphasis added)

Again, what appears to be happening is that national-level growth in concentration is actually being driven by increased competition in certain industries at the local level:

93% of the growth in concentration comes from growth in the number of cities served by top firms, and only 7% comes from increased employment per city. . . . [A]verage employment per county and per establishment of top firms falls. So necessarily more than 100% of concentration growth has to come from the increase in the number of counties and establishments served by the top firms. (p.13)
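To see the arithmetic behind that “more than 100%” claim (my own simplified restatement, not the paper’s notation), write top-firm employment in an industry as the number of local markets served times average employment per market:

$$E_{\text{top}} = N \cdot \bar{e} \quad\Rightarrow\quad \Delta \ln E_{\text{top}} = \Delta \ln N + \Delta \ln \bar{e}.$$

If average employment per market ($\bar{e}$) is falling, the second term is negative, so the growth in the number of markets served must exceed the growth in top-firm employment itself. In that sense, more than 100% of the measured concentration growth is attributable to geographic expansion.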

The net effect is a decrease in the power of top firms relative to the economy as a whole, as the largest firms specialize more, and are dominant in fewer industries:

Top firms produce in more industries than the average firm, but less so in 2013 compared to 1977. The number of industries of a top 0.001% firm (relative to the average firm) fell from 35 in 1977 to 17 in 2013. The corresponding number for a top 0.01% firm is 21 industries in 1977 and 9 industries in 2013. (p. 17)

Thus, summing up, technology has led to increased productivity as well as greater specialization by large firms, especially in relatively concentrated industries (exactly the opposite of the pessimistic stories):  

[T]op firms are now more specialized, are larger in the chosen industries, and these are precisely the industries that have experienced concentration growth. (p. 18)

Unsurprisingly (except to some…), the increase in concentration in certain industries does not translate into an increase in concentration in the economy as a whole. In other words, workers can shift jobs between industries, and there is enough geographic and firm mobility to prevent monopsony. (Despite rampant assumptions that increased concentration is constraining labor competition everywhere…).

Although the employment share of top firms in an average industry has increased substantially, the employment share of the top firms in the aggregate economy has not. (p. 15)

It is also simply not clearly the case that concentration is causing prices to rise or otherwise causing any harm. As Hsieh and Rossi-Hansberg note:

[T]he magnitude of the overall trend in markups is still controversial . . . and . . . the geographic expansion of top firms leads to declines in local concentration . . . that could enhance competition. (p. 37)


None of this should be so surprising. Has antitrust enforcement gotten more lax, leading to greater concentration? According to Vita and Osinski (2018), not so much. And what about the stagnant rate of new-firm formation? Are incumbent monopolists killing off new startups? The more likely — albeit mundane — explanation, according to Hopenhayn et al. (2018), is that increased average firm age is due to an aging labor force. Lastly, the paper from Hsieh and Rossi-Hansberg discussed above is only the latest in a series of papers, including Bessen (2017), Van Reenen (2018), and Autor et al. (2019), that show a rise in fixed costs due to investments in proprietary information technology, which correlates with increased concentration. Other parts of the anticompetitive narrative have been challenged as well: recent papers such as Traina (2018), Gutiérrez and Philippon (2017), and the IMF (2019) have found increasing markups over the last few decades, but at much more moderate rates than the famous De Loecker and Eeckhout (2017) study; Karabarbounis and Neiman (2018) find that profits have increased, but remain within their historical range; and Rinz (2018) shows decreased wages in concentrated markets, but also points out that local concentration has been decreasing over the relevant time period.

So what is the upshot of all this?

  • First, as noted, employment has not decreased because of increased concentration; quite the opposite. Employment has increased in the industries that have experienced the most concentration at the national level.
  • Second, this result suggests that the rise in concentrated industries has not led to increased market power over labor.
  • Third, concentration itself needs to be understood more precisely. It is not explained by a simple narrative that the economy as a whole has experienced a great deal of concentration and this has been detrimental for consumers and workers. Specific industries have experienced national level concentration, but simultaneously those same industries have become more specialized and expanded competition into local markets. 

Surprisingly (because their paper has been around for a while and yet this conclusion is rarely recited by advocates for more intervention — although they happily use the paper to support claims of rising concentration), Autor et al. (2019) finds the same thing:

Our formal model, detailed below, generates superstar effects from increases in the toughness of product market competition that raise the market share of the most productive firms in each sector at the expense of less productive competitors. . . . An alternative perspective on the rise of superstar firms is that they reflect a diminution of competition, due to a weakening of U.S. antitrust enforcement (Dottling, Gutierrez and Philippon, 2018). Our findings on the similarity of trends in the U.S. and Europe, where antitrust authorities have acted more aggressively on large firms (Gutierrez and Philippon, 2018), combined with the fact that the concentrating sectors appear to be growing more productive and innovative, suggests that this is unlikely to be the primary explanation, although it may [be] important in some specific industries (see Cooper et al, 2019, on healthcare for example). (emphasis added).

The popular narrative among Neo-Brandeisian antitrust scholars that lax antitrust enforcement has led to concentration detrimental to society is at base an empirical one. The findings of these empirical papers severely undermine the persuasiveness of that story.

Neither side in the debate over Section 230 is blameless for the current state of affairs. Reform/repeal proponents have tended to offer ill-considered, irrelevant, or often simply incorrect justifications for amending or tossing Section 230. Meanwhile, many supporters of the law in its current form are reflexively resistant to any change and too quick to dismiss the more reasonable concerns that have been voiced.

Most of all, the urge to politicize this issue — on all sides — stands squarely in the way of any sensible discussion and thus of any sensible reform.


[TOTM: The following is the fourth in a series of posts by TOTM guests and authors on the FTC v. Qualcomm case, currently awaiting decision by Judge Lucy Koh in the Northern District of California. The entire series of posts is available here. This post originally appeared on the Federalist Society Blog.]

The courtroom trial in the Federal Trade Commission’s (FTC’s) antitrust case against Qualcomm ended in January with a promise from the judge in the case, Judge Lucy Koh, to issue a ruling as quickly as possible — caveated by her acknowledgement that the case is complicated and the evidence voluminous. Well, things have only gotten more complicated since the end of the trial. Not only did Apple and Qualcomm reach a settlement in the antitrust case against Qualcomm that Apple filed just three days after the FTC brought its suit, but the abbreviated trial in that case saw the presentation by Qualcomm of some damning evidence that, if accurate, seriously calls into (further) question the merits of the FTC’s case.

Apple v. Qualcomm settles — and the DOJ takes notice

The Apple v. Qualcomm case, which was based on substantially the same arguments brought by the FTC in its case, ended abruptly last month after only a day and a half of trial — just enough time for the parties to make their opening statements — when Apple and Qualcomm reached an out-of-court settlement. The settlement includes a six-year global patent licensing deal, a multi-year chip supplier agreement, an end to all of the patent disputes around the world between the two companies, and a $4.5 billion settlement payment from Apple to Qualcomm.

That alone complicates the economic environment into which Judge Koh will issue her ruling. But the Apple v. Qualcomm trial also appears to have induced the Department of Justice Antitrust Division (DOJ) to weigh in on the FTC’s case with a Statement of Interest requesting Judge Koh to use caution in fashioning a remedy in the case should she side with the FTC, followed by a somewhat snarky Reply from the FTC arguing the DOJ’s filing was untimely (and, reading the not-so-hidden subtext, unwelcome).

But buried in the DOJ’s Statement is an important indication of why it filed its Statement when it did, just about a week after the end of the Apple v. Qualcomm case, and a pointer to a much larger issue that calls the FTC’s case against Qualcomm even further into question (I previously wrote about the lack of theoretical and evidentiary merit in the FTC’s case here).

Footnote 6 of the DOJ’s Statement reads:

Internal Apple documents that recently became public describe how, in an effort to “[r]educe Apple’s net royalty to Qualcomm,” Apple planned to “[h]urt Qualcomm financially” and “[p]ut Qualcomm’s licensing model at risk,” including by filing lawsuits raising claims similar to the FTC’s claims in this case …. One commentator has observed that these documents “potentially reveal[] that Apple was engaging in a bad faith argument both in front of antitrust enforcers as well as the legal courts about the actual value and nature of Qualcomm’s patented innovation.” (Emphasis added).

Indeed, the slides presented by Qualcomm during that single day of trial in Apple v. Qualcomm are significant, not only for what they say about Apple’s conduct, but, more importantly, for what they say about the evidentiary basis for the FTC’s claims against the company.

The evidence presented by Qualcomm in its opening statement suggests some troubling conduct by Apple

Others have pointed to Qualcomm’s opening slides and the Apple internal documents they present to note Apple’s apparent bad conduct. As one commentator sums it up:

Although we really only managed to get a small glimpse of Qualcomm’s evidence demonstrating the extent of Apple’s coordinated strategy to manipulate the FRAND license rate, that glimpse was particularly enlightening. It demonstrated a decade-long coordinated effort within Apple to systematically engage in what can only fairly be described as manipulation (if not creation of evidence) and classic holdout.

Qualcomm showed during opening arguments that, dating back to at least 2009, Apple had been laying the foundation for challenging its longstanding relationship with Qualcomm. (Emphasis added).

The internal Apple documents presented by Qualcomm to corroborate this claim appear quite damning. Of course, absent explanation and cross-examination, it’s impossible to know for certain what the documents mean. But on their face they suggest that Apple knowingly undertook a deliberate scheme (and knowingly took upon itself significant legal risk in doing so) to devalue patent portfolios comparable to Qualcomm’s:

The apparent purpose of this scheme was to devalue comparable patent licensing agreements where Apple had the power to do so (through litigation or the threat of litigation) in order to then use those agreements to argue that Qualcomm’s royalty rates were above the allowable, FRAND level, and to undermine the royalties Qualcomm would be awarded in courts adjudicating its FRAND disputes with the company. As one commentator put it:

Apple embarked upon a coordinated scheme to challenge weaker patents in order to beat down licensing prices. Once the challenges to those weaker patents were successful, and the licensing rates paid to those with weaker patent portfolios were minimized, Apple would use the lower prices paid for weaker patent portfolios as proof that Qualcomm was charging a super-competitive licensing price; a licensing price that violated Qualcomm’s FRAND obligations. (Emphasis added).

That alone is a startling revelation, if accurate, and one that would seem to undermine claims that patent holdout isn’t a real problem. It also would undermine Apple’s claims that it is a “willing licensee,” engaging with SEP licensors in good faith. (Indeed, this has been called into question before, and one Federal Circuit judge has noted in dissent that “[t]he record in this case shows evidence that Apple may have been a hold out.”) If the implications drawn from the Apple documents shown in Qualcomm’s opening statement are accurate, there is good reason to doubt that Apple has been acting in good faith.

Even more troubling is what it means for the strength of the FTC’s case

But the evidence offered in Qualcomm’s opening argument points to another, more troubling implication as well. We know that Apple has been coordinating with the FTC and was likely an important impetus for the FTC’s decision to bring an action in the first place. It seems reasonable to assume that Apple used these “manipulated” agreements to help make its case.

But what is most troubling is the extent to which it appears to have worked.

The FTC’s action against Qualcomm rested in substantial part on arguments that Qualcomm’s rates were too high (even though the FTC constructed its case without coming right out and saying this, at least until trial). In its opening statement the FTC said:

Qualcomm’s practices, including no license, no chips, skewed negotiations towards the outcomes that favor Qualcomm and lead to higher royalties. Qualcomm is committed to license its standard essential patents on fair, reasonable, and non-discriminatory terms. But even before doing market comparison, we know that the license rates charged by Qualcomm are too high and above FRAND because Qualcomm uses its chip power to require a license.

* * *

Mr. Michael Lasinski [the FTC’s patent valuation expert] compared the royalty rates received by Qualcomm to … the range of FRAND rates that ordinarily would form the boundaries of a negotiation … Mr. Lasinski’s expert opinion … is that Qualcomm’s royalty rates are far above any indicators of fair and reasonable rates. (Emphasis added).

The key question is what constitutes the “range of FRAND rates that ordinarily would form the boundaries of a negotiation”?

Because they were discussed under seal, we don’t know the precise agreements that the FTC’s expert, Mr. Lasinski, used for his analysis. But we do know something about them: His analysis entailed a study of only eight licensing agreements; in six of them, the licensee was either Apple or Samsung; and in all of them the licensor was either InterDigital, Nokia, or Ericsson. We also know that Mr. Lasinski’s valuation study did not include any Qualcomm licenses, and that the eight agreements he looked at were all executed after the district court’s decision in Microsoft v. Motorola in 2013.

A curiously small number of agreements

Right off the bat there is a curiosity in the FTC’s valuation analysis. Even though there are hundreds of SEP license agreements involving the relevant standards, the FTC’s analysis relied on only eight, three-quarters of which involved licensing by only two companies: Apple and Samsung.

Indeed, even since 2013 (a date to which we will return) there have been scads of licenses (see, e.g., here, here, and here). Apple and Samsung are hardly the only companies that make CDMA and LTE devices; there are — quite literally — hundreds of other manufacturers out there, all of them licensing essentially the same technology — including global giants like LG, Huawei, HTC, Oppo, Lenovo, and Xiaomi. Why were none of their licenses included in the analysis?

At the same time, while InterDigital, Nokia, and Ericsson are among the largest holders of CDMA and LTE SEPs, several dozen companies have declared such patents, including Motorola (Alphabet), NEC, Huawei, Samsung, ZTE, NTT DOCOMO, etc. Again — why were none of their licenses included in the analysis?

All else equal, more data yields better results. This is particularly true where the data are complex license agreements which are often embedded in larger, even-more-complex commercial agreements and which incorporate widely varying patent portfolios, patent implementers, and terms.

Yet the FTC relied on just eight agreements in its comparability study, covering a tiny fraction of the industry’s licensors and licensees, and, notably, including primarily licenses taken by the two companies (Samsung and Apple) that have most aggressively litigated their way to lower royalty rates.
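To see why the size and composition of the sample matter so much, consider a purely illustrative sketch (all numbers, distributions, and variable names below are hypothetical assumptions of mine, not data from any actual license and not a reconstruction of Mr. Lasinski’s sealed analysis). If the handful of agreements used to construct a “FRAND range” is drawn disproportionately from licensees who negotiate unusually low rates, the resulting benchmark will sit below what typical arm’s-length negotiations produce, and ordinary rates will look “above FRAND” by construction.

```
# Illustrative only: hypothetical royalty rates, not data from any actual license.
import random
import statistics

random.seed(42)

# Suppose arm's-length SEP royalty rates (as a % of device price) across the
# industry are centered around 3.0%, with ordinary negotiating variation.
population = [random.gauss(3.0, 0.6) for _ in range(300)]

# Suppose a few large, litigious licensees systematically negotiate (or
# manipulate) their way to lower rates, centered around 2.0%.
aggressive_licensees = [random.gauss(2.0, 0.3) for _ in range(40)]

# A benchmark built from a broad cross-section of the industry:
broad_sample = random.sample(population, 100)

# A benchmark built from only eight agreements, six of which involve the
# aggressive licensees (the structure described in the text):
narrow_sample = random.sample(aggressive_licensees, 6) + random.sample(population, 2)

def describe(name, rates):
    print(f"{name}: mean={statistics.mean(rates):.2f}% "
          f"range=({min(rates):.2f}%, {max(rates):.2f}%)")

describe("Broad benchmark (100 licenses)", broad_sample)
describe("Narrow benchmark (8 licenses) ", narrow_sample)
# The narrow benchmark's "FRAND range" sits well below the broad one's, so
# rates that are unexceptional against the broad benchmark appear
# "above FRAND" against the narrow one.
```

The numbers themselves are meaningless; the point is simply that with only eight data points, the choice of which eight drives the conclusion.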

A curiously crabbed selection of licensors

And it is not just that the selected licensees represent a weirdly small and biased sample; it is also not necessarily even a particularly comparable sample.

One thing we can be fairly confident of, given what we know of the agreements used, is that at least one of the license agreements involved Nokia licensing to Apple, and another involved InterDigital licensing to Apple. But these companies’ patent portfolios are not exactly comparable to Qualcomm’s. The Apple documents shown in Qualcomm’s opening statement include Apple’s internal assessments of both Nokia’s and InterDigital’s patents; meanwhile, Apple’s own view of Qualcomm’s patent portfolio (despite its public comments to the contrary) was that it was considerably better than the others’.

The FTC’s choice of such a limited range of comparable license agreements is curious for another reason, as well: It includes no Qualcomm agreements. Qualcomm is certainly one of the biggest players in the cellular licensing space, and no doubt more than a few license agreements involve Qualcomm. While it might not make sense to include Qualcomm licenses that the FTC claims incorporate anticompetitive terms, that doesn’t describe the huge range of Qualcomm licenses with which the FTC has no quarrel. Among other things, Qualcomm licenses from before it began selling chips would not have been affected by its alleged “no license, no chips” scheme, nor would licenses granted to companies that didn’t also purchase Qualcomm chips. Furthermore, its licenses for technology reading on the WCDMA standard are not claimed to be anticompetitive by the FTC.

And yet none of these licenses were deemed “comparable” by the FTC’s expert, even though, on many dimensions — most notably, with respect to the underlying patent portfolio being valued — they would have been the most comparable (i.e., identical).

A curiously circumscribed timeframe

The FTC expert’s choice of a 2013 cut-off date is also questionable. According to Mr. Lasinski, he chose to use agreements after 2013 because it was in 2013 that the U.S. District Court for the Western District of Washington decided the Microsoft v. Motorola case. Among other things, the court in Microsoft v. Motorola held that the proper value of a SEP is its “intrinsic” patent value, including its value to the standard, but not including the additional value it derives from being incorporated into a widely used standard.

According to the FTC’s expert,

prior to [Microsoft v. Motorola], people were trying to value … the standard and the license based on the value of the standard, not the value of the patents ….

Asked by Qualcomm’s counsel if his concern was that the “royalty rates derived in license agreements for cellular SEPs [before Microsoft v. Motorola] could very well have been above FRAND,” Mr. Lasinski concurred.

The problem with this approach is that it’s little better than arbitrary. The Motorola decision was an important one, to be sure, but the notion that sophisticated parties in a multi-billion dollar industry were systematically agreeing to improper terms until a single court in Washington suggested otherwise is absurd. Granted, such agreements are negotiated in “the shadow of the law,” and judicial decisions like the one in Washington (later upheld by the Ninth Circuit) can affect the parties’ bargaining positions.

But even if it were true that the court’s decision had some effect on licensing rates, the decision would still have been only one of myriad factors determining parties’ relative bargaining  power and their assessment of the proper valuation of SEPs. There is no basis to support the assertion that the Motorola decision marked a sea-change between “improper” and “proper” patent valuations. And, even if it did, it was certainly not alone in doing so, and the FTC’s expert offers no justification for determining that agreements reached before, say, the European Commission’s decision against Qualcomm in 2018 were “proper,” or that the Korea FTC’s decision against Qualcomm in 2009 didn’t have the same sort of corrective effect as the Motorola court’s decision in 2013. 

At the same time, a review of a wider range of agreements suggested that Qualcomm’s licensing royalties weren’t inflated

Meanwhile, one of Qualcomm’s experts in the FTC case, former DOJ Chief Economist Aviv Nevo, examined whether the FTC’s theory of anticompetitive harm was borne out by the data, looking at Qualcomm’s royalty rates across time periods and standards and using a much larger set of agreements. Although his remit was different from Mr. Lasinski’s, and although he analyzed only Qualcomm licenses, his analysis still sheds light on Mr. Lasinski’s conclusions:

[S]pecifically what I looked at was the predictions from the theory to see if they’re actually borne in the data….

[O]ne of the clear predictions from the theory is that during periods of alleged market power, the theory predicts that we should see higher royalty rates.

So that’s a very clear prediction that you can take to data. You can look at the alleged market power period, you can look at the royalty rates and the agreements that were signed during that period and compare to other periods to see whether we actually see a difference in the rates.

Dr. Nevo’s analysis, which looked at royalty rates in Qualcomm’s SEP license agreements for CDMA, WCDMA, and LTE ranging from 1990 to 2017, found no differences in rates between periods when Qualcomm was alleged to have market power and when it was not alleged to have market power (or could not have market power, on the FTC’s theory, because it did not sell corresponding chips).
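The basic structure of that comparison is simple enough to sketch. The following toy example (with invented rates, periods, and labels; it is not Dr. Nevo’s data or his actual methodology) shows the kind of test described above: group the license agreements by whether they were signed during a period of alleged market power, then compare royalty rates across the two groups.

```
# Hypothetical agreements: (year, standard, royalty_rate_pct, alleged_market_power).
# The rates below are invented for illustration; they are not Qualcomm's.
agreements = [
    (1995, "CDMA", 3.2, False),
    (2001, "CDMA", 3.3, True),
    (2007, "WCDMA", 3.2, True),
    (2012, "WCDMA", 3.3, False),
    (2015, "LTE", 3.2, True),
    (2017, "LTE", 3.3, False),
]

def mean_rate(rows):
    rates = [rate for (_, _, rate, _) in rows]
    return sum(rates) / len(rates)

with_power = [a for a in agreements if a[3]]
without_power = [a for a in agreements if not a[3]]

print(f"Mean rate, alleged market-power periods: {mean_rate(with_power):.2f}%")
print(f"Mean rate, other periods:                {mean_rate(without_power):.2f}%")
# On the FTC's theory, the first number should be materially higher. Finding
# essentially identical rates across periods is evidence against the claim
# that the challenged conduct elevated royalties above FRAND.
```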

The reason this is relevant is that Mr. Lasinski’s assessment implies that Qualcomm’s higher royalty rates weren’t attributable to its superior patent portfolio, leaving either anticompetitive conduct or non-anticompetitive, superior bargaining ability as the explanation. No one thinks Qualcomm has cornered the market on exceptional negotiators, so really the only proffered explanation for the results of Mr. Lasinski’s analysis is anticompetitive conduct. But this assumes that his analysis is actually reliable. Dr. Nevo’s analysis offers some reason to think that it is not.

All of the agreements studied by Mr. Lasinski were drawn from the period when Qualcomm is alleged to have employed anticompetitive conduct to elevate its royalty rates above FRAND. But when the actual royalties charged by Qualcomm during its alleged exercise of market power are compared to those charged when and where it did not have market power, the evidence shows it received identical rates. Mr. Lasinski’s results, then, would imply that Qualcomm’s royalties were “too high” not only while it was allegedly acting anticompetitively, but also when it was not. That simple fact suggests on its face that Mr. Lasinski’s analysis may have been flawed, and that it systematically under-valued Qualcomm’s patents.

Connecting the dots and calling into question the strength of the FTC’s case

In its closing argument, the FTC pulled together the implications of its allegations of anticompetitive conduct by pointing to Mr. Lasinski’s testimony:

Now, looking at the effect of all of this conduct, Qualcomm’s own documents show that it earned many times the licensing revenue of other major licensors, like Ericsson.

* * *

Mr. Lasinski analyzed whether this enormous difference in royalties could be explained by the relative quality and size of Qualcomm’s portfolio, but that massive disparity was not explained.

Qualcomm’s royalties are disproportionate to those of other SEP licensors and many times higher than any plausible calculation of a FRAND rate.

* * *

The overwhelming direct evidence, some of which is cited here, shows that Qualcomm’s conduct led licensees to pay higher royalties than they would have in fair negotiations.

It is possible, of course, that Mr. Lasinski’s methodology was flawed; indeed, at trial Qualcomm argued exactly this in challenging his testimony. But it is also possible that, whether his methodology was flawed or not, his underlying data were flawed.

It is impossible to draw this conclusion definitively from the publicly available evidence, but the subsequent revelation that Apple may well have manipulated at least a significant share of the eight agreements that constituted Mr. Lasinski’s data certainly increases its plausibility: We now know, following Qualcomm’s opening statement in Apple v. Qualcomm, that the skewed set of comparable agreements studied by the FTC’s expert happens to be dominated by agreements that Apple may have manipulated to reflect lower-than-FRAND rates.

What is most concerning is that the FTC may have built up its case on such questionable evidence, either by intentionally cherry-picking the evidence upon which it relied, or inadvertently, because it rested on such a needlessly limited range of data, some of which may have been tainted.

Intentionally or not, the FTC appears to have performed its valuation analysis using a needlessly circumscribed range of comparable agreements and justified its decision to do so using questionable assumptions. This seriously calls into question the strength of the FTC’s case.

The German Bundeskartellamt’s Facebook decision is unsound from either a competition or privacy policy perspective, and will only make the fraught privacy/antitrust relationship worse.


[TOTM: The following is the first in a series of posts by TOTM guests and authors on the FTC v. Qualcomm case, currently awaiting decision by Judge Lucy Koh in the Northern District of California. The entire series of posts is available here. This post originally appeared on the Federalist Society Blog.]

Just days before leaving office, the outgoing Obama FTC left what should have been an unwelcome parting gift for the incoming Commission: an antitrust suit against Qualcomm. This week the FTC — under a new Chairman and with an entirely new set of Commissioners — finished unwrapping its present, and rested its case in the trial begun earlier this month in FTC v. Qualcomm.

This complex case is about an overreaching federal agency seeking to set prices and dictate the business model of one of the world’s most innovative technology companies. As soon-to-be Acting FTC Chairwoman, Maureen Ohlhausen, noted in her dissent from the FTC’s decision to bring the case, it is “an enforcement action based on a flawed legal theory… that lacks economic and evidentiary support…, and that, by its mere issuance, will undermine U.S. intellectual property rights… worldwide.”

Implicit in the FTC’s case is the assumption that Qualcomm charges smartphone makers “too much” for its wireless communications patents — patents that are essential to many smartphones. But, as former FTC and DOJ chief economist, Luke Froeb, puts it, “[n]othing is more alien to antitrust than enquiring into the reasonableness of prices.” Even if Qualcomm’s royalty rates could somehow be deemed “too high” (according to whom?), excessive pricing on its own is not an antitrust violation under U.S. law.

Knowing this, the FTC “dances around that essential element” (in Ohlhausen’s words) and offers instead a convoluted argument that Qualcomm’s business model is anticompetitive. Qualcomm both sells wireless communications chipsets used in mobile phones, as well as licenses the technology on which those chips rely. According to the complaint, by licensing its patents only to end-users (mobile device makers) instead of to chip makers further up the supply chain, Qualcomm is able to threaten to withhold the supply of its chipsets to its licensees and thereby extract onerous terms in its patent license agreements.

There are numerous problems with the FTC’s case. Most fundamental among them is the “no duh” problem: Of course Qualcomm conditions the purchase of its chips on the licensing of its intellectual property; how could it be any other way? The alternative would require Qualcomm to actually facilitate the violation of its property rights by forcing it to sell its chips to device makers even if they refuse its patent license terms. In that world, what device maker would ever agree to pay more than a pittance for a patent license? The likely outcome is that Qualcomm charges more for its chips to compensate (or simply stops making them). Great, the FTC says; then competitors can fill the gap and — voila: the market is more competitive, prices will actually fall, and consumers will reap the benefits.

Except it doesn’t work that way. As many economists, including both the current and a prominent former chief economist of the FTC, have demonstrated, forcing royalty rates lower in such situations is at least as likely to harm competition as to benefit it. There is no sound theoretical or empirical basis for concluding that using antitrust to move royalty rates closer to some theoretical ideal will actually increase consumer welfare. All it does for certain is undermine patent holders’ property rights, virtually ensuring there will be less innovation.

In fact, given this inescapable reality, it is unclear why the current Commission is continuing to pursue the case at all. The bottom line is that, if it wins the case, the current FTC will have done more to undermine intellectual property rights than any other administration’s Commission has been able to accomplish.

It is not difficult to identify the frailties of the case that would readily support the agency backing away from pursuing it further. To begin with, the claim that device makers cannot refuse Qualcomm’s terms because the company effectively controls the market’s supply of mobile broadband modem chips is fanciful. While it’s true that Qualcomm is the largest supplier of these chipsets, it’s an absurdity to claim that device makers have no alternatives. In fact, Qualcomm has faced stiff competition from some of the world’s other most successful companies since well before the FTC brought its case. Samsung — the largest maker of Android phones — developed its own chip to replace Qualcomm’s in 2015, for example. More recently, Intel has provided Apple with all of the chips for its 2018 iPhones, and Apple is rumored to be developing its own 5G cellular chips in-house. In any case, the fact that most device makers have preferred to use Qualcomm’s chips in the past says nothing about the ability of other firms to take business from it.

The possibility (and actuality) of entry from competitors like Intel ensures that sophisticated purchasers like Apple have bargaining leverage. Yet, ironically, the FTC points to Apple’s claim that Qualcomm “forced” it to use Intel modems in its latest iPhones as evidence of Qualcomm’s dominance. Think about that: Qualcomm “forced” a company worth many times its own value to use a competitor’s chips in its new iPhones — and that shows Qualcomm has a stranglehold on the market?

The FTC implies that Qualcomm’s refusal to license its patents to competing chip makers means that competitors cannot reliably supply the market. Yet Qualcomm has never asserted its patents against a competing chip maker, every one of which uses Qualcomm’s technology without paying any royalties to do so. The FTC nevertheless paints the decision to license only to device makers as the aberrant choice of an exploitative, dominant firm. The reality, however, is that device-level licensing is the norm practiced by every company in the industry — and has been since the 1980s.

Not only that, but Qualcomm has not altered its licensing terms or practices since it was decidedly an upstart challenger in the market — indeed, since before it even started producing chips, and thus before it even had the supposed means to leverage its chip sales to extract anticompetitive licensing terms. It would be a remarkable coincidence if precisely the same licensing structure and the exact same royalty rate served the company’s interests both as a struggling startup and as an alleged rapacious monopolist. Yet that is the implication of the FTC’s theory.

When Qualcomm introduced CDMA technology to the mobile phone industry in 1989, it was a promising but unproven new technology in an industry dominated by different standards. Qualcomm happily encouraged chip makers to promote the standard by enabling them to produce compliant components without paying any royalties; and it willingly licensed its patents to device makers based on a percentage of sales of the handsets that incorporated CDMA chips. Qualcomm thus shared both the financial benefits and the financial risk associated with the development and sales of devices implementing its new technology.

Qualcomm’s favorable (to handset makers) licensing terms may have helped CDMA become one of the industry standards for 2G and 3G devices. But it’s an unsupportable assertion to say that those identical terms are suddenly the source of anticompetitive power, particularly as 2G and 3G are rapidly disappearing from the market and as competing patent holders gain prominence with each successive cellular technology standard.

To be sure, successful handset makers like Apple that sell their devices at a significant premium would prefer to share less of their revenue with Qualcomm. But their success was built in large part on Qualcomm’s technology. They may regret the terms of the deal that propelled CDMA technology to prominence, but Apple’s regret is not the basis of a sound antitrust case.

And although it’s unsurprising that manufacturers of premium handsets would like to use antitrust law to extract better terms from their negotiations with standard-essential patent holders, it is astonishing that the current FTC is carrying on the Obama FTC’s willingness to do it for them.

None of this means that Qualcomm is free to charge an unlimited price: standard-essential patents must be licensed on “FRAND” terms, meaning they must be fair, reasonable, and nondiscriminatory. It is difficult to assess what constitutes FRAND, but the most restrictive method is to estimate what negotiated terms would look like before a patent was incorporated into a standard. “[R]oyalties that are or would be negotiated ex ante with full information are a market bench-mark reflecting legitimate return to innovation,” writes Carl Shapiro, the FTC’s own economic expert in the case.

And that is precisely what happened here: We don’t have to guess what the pre-standard terms of trade would look like; we know them, because they are the same terms that Qualcomm offers now.
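For readers who want the intuition behind that ex ante benchmark in arithmetic terms, here is a minimal sketch (the dollar figures are invented for illustration and are not drawn from the case): before a technology is locked into a standard, the most a licensee will rationally pay is the incremental value of that technology over its next-best alternative; after standardization, lock-in can push observed rates above that ceiling, which is why the benchmark looks to pre-standard negotiations.

```
# Illustrative ex ante FRAND ceiling: hypothetical per-device values only.
value_with_patented_tech = 12.0   # value of a device using the patented technology
value_best_alternative   = 10.0   # value using the next-best ex ante alternative

ex_ante_royalty_ceiling = value_with_patented_tech - value_best_alternative
print(f"Ex ante royalty ceiling per device: ${ex_ante_royalty_ceiling:.2f}")
# Once the technology is embedded in a widely adopted standard, the realistic
# alternative may disappear, so observed ex post rates can exceed this ceiling.
# That is the rationale for benchmarking against pre-standardization terms.
```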

We don’t know exactly what the consequence would be for consumers, device makers, and competitors if Qualcomm were forced to accede to the FTC’s benighted vision of how the market should operate. But we do know that the market we actually have is thriving, with new entry at every level, enormous investment in R&D, and continuous technological advance. These aren’t generally the characteristics of a typical monopoly market. While the FTC’s effort to “fix” the market may help Apple and Samsung reap a larger share of the benefits, it will undoubtedly end up only hurting consumers.

REGISTER HERE for the much-anticipated 2018 ICLE/Leeds competition law conference, this Friday and Saturday in Washington, DC.

NB: We’ve been approved for 8 credit hours of VA MCLE

The conference agenda is below. We hope to see you there!

ICLE/Leeds 2018 Competition Law Conference: Have We Exceeded the Limits of Antitrust?
Agenda Day 1
Agenda Day 2

Last week, the DOJ cleared the merger of CVS Health and Aetna (conditional on Aetna’s divesting its Medicare Part D business), a merger that, as I previously noted at a House Judiciary hearing, “presents a creative effort by two of the most well-informed and successful industry participants to try something new to reform a troubled system.” (My full testimony is available here).

Of course it’s always possible that the experiment will fail — that the merger won’t “revolutioniz[e] the consumer health care experience” in the way that CVS and Aetna are hoping. But it’s a low (antitrust) risk effort to address some of the challenges confronting the healthcare industry — and apparently the DOJ agrees.

I discuss the weakness of the antitrust arguments against the merger at length in my testimony. What I particularly want to draw attention to here is how this merger — like many vertical mergers — represents business model innovation by incumbents.

The CVS/Aetna merger is just one part of a growing private-sector movement in the healthcare industry to adopt new (mostly) vertical arrangements that seek to move beyond some of the structural inefficiencies that have plagued healthcare in the United States since World War II. Indeed, ambitious and interesting as it is, the merger arises amidst a veritable wave of innovative, vertical healthcare mergers and other efforts to integrate the healthcare services supply chain in novel ways.

These sorts of efforts (and the current DOJ’s apparent support for them) should be applauded and encouraged. I need not rehash the economic literature on vertical restraints here (see, e.g., Lafontaine & Slade, etc.). But especially where government interventions have already impaired the efficient workings of a market (as they surely have, in spades, in healthcare), it is important not to compound the error by trying to micromanage private efforts to restructure around those constraints.   

Current trends in private-sector-driven healthcare reform

In the past, the most significant healthcare industry mergers have largely been horizontal (i.e., between two insurance providers, or two hospitals) or “traditional” business model mergers for the industry (i.e., vertical mergers aimed at building out managed care organizations). This pattern suggests a sort of fealty to the status quo, with insurers interested primarily in expanding their insurance business or providers interested in expanding their capacity to provide medical services.

Today’s health industry mergers and ventures seem more frequently to be different in character, and they portend an industry-wide experiment in the provision of vertically integrated healthcare that we should enthusiastically welcome.

Drug pricing and distribution innovations

To begin with, the CVS/Aetna deal, along with the also recently approved Cigna-Express Scripts deal, solidifies the vertical integration of pharmacy benefit managers (PBMs) with insurers.

But a number of other recent arrangements and business models center around relationships among drug manufacturers, pharmacies, and PBMs, and these tend to minimize the role of insurers. While not a “vertical” arrangement, per se, Walmart’s generic drug program, for example, offers $4 prescriptions to customers regardless of insurance (the typical generic drug copay for patients covered by employer-provided health insurance is $11), and Walmart does not seek or receive reimbursement from health plans for these drugs. It’s been offering this program since 2006, but in 2016 it entered into a joint buying arrangement with McKesson, a pharmaceutical wholesaler (itself vertically integrated with Rexall pharmacies), to negotiate lower prices. The idea, presumably, is that Walmart will entice consumers to its stores with the lure of low-priced generic prescriptions in the hope that they will buy other items while they’re there. That prospect presumably makes it worthwhile to route around insurers and PBMs, and their reimbursements.

Meanwhile, both Express Scripts and CVS Health (two of the country’s largest PBMs) have made moves toward direct-to-consumer sales themselves, establishing pricing for a small number of drugs independently of health plans and often in partnership with drug makers directly.   

Also apparently focused on disrupting traditional drug distribution arrangements, Amazon has recently purchased online pharmacy PillPack (out from under Walmart, as it happens), and with it received pharmacy licenses in 49 states. The move introduces a significant new integrated distributor/retailer, and puts competitive pressure on other retailers and distributors and potentially insurers and PBMs, as well.

Whatever its role in driving the CVS/Aetna merger (and I believe it is smaller than many reports like to suggest), Amazon’s moves in this area demonstrate the fluid nature of the market, and the opportunities for a wide range of firms to create efficiencies in the market and to lower prices.

At the same time, the differences between Amazon and CVS/Aetna highlight the scope of product and service differentiation that should contribute to the ongoing competitiveness of these markets following mergers like this one.

While Amazon inarguably excels at logistics and the routinizing of “back office” functions, it seems unlikely for the foreseeable future to be able to offer (or to be interested in offering) a patient interface that can rival the service offerings of a brick-and-mortar CVS pharmacy combined with an outpatient clinic and its staff and bolstered by the capabilities of an insurer like Aetna. To be sure, online sales and fulfillment may put price pressure on important, largely mechanical functions, but, like much technology, it is first and foremost a complement to services offered by humans, rather than a substitute. (In this regard it is worth noting that McKesson has long been offering Amazon-like logistics support for both online and brick-and-mortar pharmacies. “‘To some extent, we were Amazon before it was cool to be Amazon,’ McKesson CEO John Hammergren said” on a recent earnings call).

Treatment innovations

Other efforts focus on integrating insurance and treatment functions or on bringing together other, disparate pieces of the healthcare industry in interesting ways — all seemingly aimed at finding innovative, private solutions to solve some of the costly complexities that plague the healthcare market.

Walmart, for example, announced a deal with Quest Diagnostics last year to experiment with offering diagnostic testing services and potentially other basic healthcare services inside of some Walmart stores. While such an arrangement may simply be a means of making doctor-prescribed diagnostic tests more convenient, it may also suggest an effort to expand the availability of direct-to-consumer (patient-initiated) testing (currently offered by Quest in Missouri and Colorado) in states that allow it. A partnership with Walmart to market and oversee such services has the potential to dramatically expand their use.

Capping off (for now) a buying frenzy in recent years that included the purchase of the PBM CatamaranRx, UnitedHealth is seeking approval from the FTC for the proposed merger of its Optum unit with the DaVita Medical Group — a move that would significantly expand UnitedHealth’s ability to offer medical services (including urgent care, outpatient surgeries, and health clinic services), give it a significant group of doctors’ clinics throughout the U.S., and turn UnitedHealth into the largest employer of doctors in the country. But of course this isn’t a traditional managed care merger — it represents a significant bet on the decentralized, ambulatory care model that has been slowly replacing significant parts of the traditional, hospital-centric care model for some time now.

And, perhaps most interestingly, some recent moves are bringing together drug manufacturers and diagnostic and care providers in innovative ways. Swiss pharmaceutical company, Roche, announced recently that “it would buy the rest of U.S. cancer data company Flatiron Health for $1.9 billion to speed development of cancer medicines and support its efforts to price them based on how well they work.” Not only is the deal intended to improve Roche’s drug development process by integrating patient data, it is also aimed at accommodating efforts to shift the pricing of drugs, like the pricing of medical services generally, toward an outcome-based model.

Similarly interesting, and in a related vein, early this year a group of hospital systems including Intermountain Health, Ascension, and Trinity Health announced plans to begin manufacturing generic prescription drugs. This development further reflects the perceived benefits of vertical integration in healthcare markets, and the move toward creative solutions to the unique complexity of coordinating the many interrelated layers of healthcare provision. In this case,

[t]he nascent venture proposes a private solution to ensure contestability in the generic drug market and consequently overcome the failures of contracting [in the supply and distribution of generics]…. The nascent venture, however it solves these challenges and resolves other choices, will have important implications for the prices and availability of generic drugs in the US.

More enforcement decisions like CVS/Aetna and Bayer/Monsanto; fewer like AT&T/Time Warner

In the face of all this disruption, it’s difficult to credit anticompetitive fears like those expressed by the AMA in opposing the CVS-Aetna merger and a recent CEA report on pharmaceutical pricing, both of which are premised on the assumption that drug distribution is unavoidably dominated by a few PBMs in a well-defined, highly concentrated market. Creative arrangements like the CVS-Aetna merger and the initiatives described above (among a host of others) indicate an ease of entry, the fluidity of traditional markets, and a degree of business model innovation that suggest a great deal more competitiveness than static PBM market numbers would suggest.

This kind of incumbent innovation through vertical restructuring is an increasingly important theme in antitrust, and efforts to tar such transactions with purported evidence of static market dominance are simply misguided.

While the current DOJ’s misguided (and, remarkably, continuing) attempt to stop the AT&T/Time Warner merger is an aberrant step in the wrong direction, the leadership at the Antitrust Division generally seems to get it. Indeed, in spite of strident calls for stepped-up enforcement in the always-controversial ag-biotech industry, the DOJ recently approved three vertical ag-biotech mergers in fairly rapid succession.

As I noted in a discussion of those ag-biotech mergers, but equally applicable here, regulatory humility should continue to carry the day when it comes to structural innovation by incumbent firms:

But it is also important to remember that innovation comes from within incumbent firms, as well, and, often, that the overall level of innovation in an industry may be increased by the presence of large firms with economies of scope and scale.

In sum, and to paraphrase Olympia Dukakis’ character in Moonstruck: “what [we] don’t know about [the relationship between innovation and market structure] is a lot.”

What we do know, however, is that superficial, concentration-based approaches to antitrust analysis will likely overweight presumed foreclosure effects and underweight innovation effects.

We shouldn’t fetishize entry, or access, or head-to-head competition over innovation, especially where consumer welfare may be significantly improved by a reduction in the former in order to get more of the latter.

Today the European Commission launched its latest salvo against Google, issuing a decision in its three-year antitrust investigation into the company’s agreements for distribution of the Android mobile operating system. The massive fine levied by the Commission will dominate the headlines, but the underlying legal theory and proposed remedies are just as notable — and just as problematic.

The nirvana fallacy

It is sometimes said that the most important question in all of economics is “compared to what?” UCLA economist Harold Demsetz — one of the most important regulatory economists of the past century — coined the term “nirvana fallacy” to critique would-be regulators’ tendency to compare messy, real-world economic circumstances to idealized alternatives, and to justify policies on the basis of the discrepancy between them. Wishful thinking, in other words.

The Commission’s Android decision falls prey to the nirvana fallacy. It conjures a world in which Google offers its Android operating system on unrealistic terms, prohibits it from doing otherwise, and neglects the actual consequences of such a demand.

The idea at the core of the Commission’s decision is that by making its own services (especially Google Search and Google Play Store) easier to access than competing services on Android devices, Google has effectively foreclosed rivals from effective competition. In order to correct that claimed defect, the Commission demands that Google refrain from engaging in practices that favor its own products in its Android licensing agreements:

At a minimum, Google has to stop and to not re-engage in any of the three types of practices. The decision also requires Google to refrain from any measure that has the same or an equivalent object or effect as these practices.

The basic theory is straightforward enough, but its application here reflects a troubling departure from the underlying economics and a romanticized embrace of industrial policy that is unsupported by the realities of the market.

In a recent interview, European Commission competition chief, Margrethe Vestager, offered a revealing insight into her thinking about her oversight of digital platforms, and perhaps the economy in general: “My concern is more about whether we get the right choices,” she said. Asked about Facebook, for example, she specified exactly what she thinks the “right” choice looks like: “I would like to have a Facebook in which I pay a fee each month, but I would have no tracking and advertising and the full benefits of privacy.”

Some consumers may well be sympathetic with her preference (and even share her specific vision of what Facebook should offer them). But what if competition doesn’t result in our — or, more to the point, Margrethe Vestager’s — preferred outcomes? Should competition policy nevertheless enact the idiosyncratic consumer preferences of a particular regulator? What if offering consumers the “right” choices comes at the expense of other things they value, like innovation, product quality, or price? And, if so, can antitrust enforcers actually engineer a better world built around these preferences?

Android’s alleged foreclosure… that doesn’t really foreclose anything

The Commission’s primary concern is with the terms of Google’s deal: In exchange for royalty-free access to Android and a set of core, Android-specific applications and services (like Google Search and Google Maps) Google imposes a few contractual conditions.

Google allows manufacturers to use the Android platform — in which the company has invested (and continues to invest) billions of dollars — for free. It does not require device makers to include any of its core, Google-branded features. But if a manufacturer does decide to use any of them, it must include all of them, and make Google Search the device default. In another (much smaller) set of agreements, Google also offers device makers a small share of its revenue from Search if they agree to pre-install only Google Search on their devices (although users remain free to download and install any competing services they wish).

Essentially, that’s it. Google doesn’t allow device makers to pick and choose between parts of the ecosystem of Google products, free-riding on Google’s brand and investments. But manufacturers are free to use the Android platform and to develop their own competing brand built upon Google’s technology.

Other apps may be installed in addition to Google’s core apps. Google Search need not be the exclusive search service, but it must be offered out of the box as the default. Google Play and Chrome must be made available to users, but other app stores and browsers may be pre-installed and even offered as the default. And device makers who choose to do so may share in Search revenue by pre-installing Google Search exclusively — but users can and do install a different search service.

Alternatives to all of Google’s services (including Search) abound on the Android platform. It’s trivial both to install them and to set them as the default. Meanwhile, device makers regularly choose to offer these apps alongside Google’s services, and some, like Samsung, have developed entire customized app suites of their own. Still others, like Amazon, pre-install no Google apps and use Android without any of these constraints (and Amazon’s Google-free tablets are regularly ranked as the best-rated and most popular in Europe).

By contrast, Apple bundles its operating system with its devices, bypasses third-party device makers entirely, and offers consumers access to its operating system only if they pay (lavishly) for one of the very limited number of devices the company offers, as well. It is perhaps not surprising — although it is enlightening — that Apple earns more revenue in an average quarter from iPhone sales than Google is reported to have earned in total from Android since it began offering it in 2008.

Reality — and the limits it imposes on efforts to manufacture nirvana

The logic behind Google’s approach to Android is obvious: It is the extension of Google’s “advertisers pay” platform strategy to mobile. Rather than charging device makers (and thus consumers) directly for its services, Google earns its revenue by charging advertisers for targeted access to users via Search. Remove Search from mobile devices and you remove the mechanism by which Google gets paid.

It’s true that most device makers opt to offer Google’s suite of services to European users, and that most users opt to keep Google Search as the default on their devices — that is, indeed, the hoped-for effect, and necessary to ensure that Google earns a return on its investment.

That users often choose to keep using Google services instead of installing alternatives, and that device makers typically choose to engineer their products around the Google ecosystem, isn’t primarily the result of a Google-imposed mandate; it’s the result of consumer preferences for Google’s offerings in lieu of readily available alternatives.

The EU decision against Google appears to imagine a world in which Google will continue to develop Android and allow device makers to use the platform and Google’s services for free, even if the likelihood of recouping its investment is diminished.

The Commission also assessed in detail Google’s arguments that the tying of the Google Search app and Chrome browser were necessary, in particular to allow Google to monetise its investment in Android, and concluded that these arguments were not well founded. Google achieves billions of dollars in annual revenues with the Google Play Store alone, it collects a lot of data that is valuable to Google’s search and advertising business from Android devices, and it would still have benefitted from a significant stream of revenue from search advertising without the restrictions.

For the Commission, Google’s earned enough [trust me: you should follow the link. It’s my favorite joke…].

But that world in which Google won’t alter its investment decisions based on a government-mandated reduction in its allowable return on investment doesn’t exist; it’s a fanciful Nirvana.

Google’s real alternatives to the status quo are charging for the use of Android, closing the Android platform and distributing it (like Apple) only on a fully integrated basis, or discontinuing Android.

In reality, and compared to these actual alternatives, Google’s restrictions are trivial. Remember, Google doesn’t insist that Google Search be exclusive, only that it benefit from a “leg up” by being pre-installed as the default. And on this thin reed Google finances the development and maintenance of the (free) Android operating system and all of the other (free) apps from which Google otherwise earns little or no revenue.

It’s hard to see how consumers, device makers, or app developers would be made better off without Google’s restrictions, at least not in the real world, in which the alternative is one of the three manifestly less desirable options mentioned above.

Missing the real competition for the trees

What’s more, while ostensibly aimed at increasing competition, the Commission’s proposed remedy — like the conduct it addresses — doesn’t relate to Google’s most significant competitors at all.

Facebook, Instagram, Firefox, Amazon, Spotify, Yelp, and Yahoo, among many others, are some of the most popular apps on Android phones, including in Europe. They aren’t foreclosed by Google’s Android distribution terms, and it’s even hard to imagine that they would be more popular if only Android phones didn’t come with, say, Google Search pre-installed.

It’s a strange anticompetitive story that has Google allegedly foreclosing insignificant competitors while apparently ignoring its most substantial threats.

The primary challenges Google now faces are from Facebook drawing away the most valuable advertising and Amazon drawing away the most valuable product searches (and increasingly advertising, as well). The fact that Google’s challenged conduct has never shifted in order to target these competitors as their threat emerged, and has had no apparent effect on these competitive dynamics, says all one needs to know about the merits of the Commission’s decision and the value of its proposed remedy.

In reality, as Demsetz suggested, Nirvana cannot be designed by politicians, especially in complex, modern technology markets. Consumers’ best hope for something close — continued innovation, low prices, and voluminous choice — lies in the evolution of markets spurred by consumer demand, not regulators’ efforts to engineer them.

A few weeks ago I posted a preliminary assessment of the relative antitrust risk of a Comcast vs Disney purchase of 21st Century Fox assets. (Also available in pdf as an ICLE Issue brief, here). On the eve of Judge Leon’s decision in the AT&T/Time Warner merger case, it seems worthwhile to supplement that assessment by calling attention to Assistant Attorney General Makan Delrahim’s remarks at The Deal’s Corporate Governance Conference last week. Somehow these remarks seem to have passed with little notice, but, given their timing, they deserve quite a bit more attention.

In brief, Delrahim spent virtually the entirety of his short remarks making and remaking the fundamental point at the center of my own assessment of the antitrust risk of a possible Comcast/Fox deal: The DOJ’s challenge of the AT&T/Time Warner merger tells you nothing about the likelihood that the agency would challenge a Comcast/Fox merger.

To begin, in my earlier assessment I pointed out that most vertical mergers are approved by antitrust enforcers, and I quoted Bruce Hoffman, Director of the FTC’s Bureau of Competition, who noted that:

[V]ertical merger enforcement is still a small part of our merger workload….

* * *

Where horizontal mergers reduce competition on their face — though that reduction could be minimal or more than offset by benefits — vertical mergers do not…. [T]here are plenty of theories of anticompetitive harm from vertical mergers. But the problem is that those theories don’t generally predict harm from vertical mergers; they simply show that harm is possible under certain conditions.

I may not have made it very clear in that post, but, of course, most horizontal mergers are approved by enforcers, as well.

Well, now we have the head of the DOJ Antitrust Division making the same point:

I’d say 95 or 96 percent of mergers — horizontal or vertical — are cleared — routinely…. Most mergers — horizontal or vertical — are procompetitive, or have no adverse effect.

Delrahim reinforced the point in an interview with The Street in advance of his remarks. Asked by a reporter, “what are your concerns with vertical mergers?,” Delrahim quickly corrected the questioner: “Well, I don’t have any concerns with most vertical mergers….”

But Delrahim went even further, noting that nothing about the Division’s approach to vertical mergers has changed since the AT&T/Time Warner case was brought — despite the efforts of some reporters to push a different narrative:

I understand that some journalists and observers have recently expressed concern that the Antitrust Division no longer believes that vertical mergers can be efficient and beneficial to competition and consumers. Some point to our recent decision to challenge some aspects of the AT&T/Time Warner merger as a supposed bellwether for a new vertical approach. Rest assured: These concerns are misplaced…. We have long recognized that vertical integration can and does generate efficiencies that benefit consumers. Indeed, most vertical mergers are procompetitive or competitively neutral. The same is of course true in horizontal transactions. To the extent that any recent action points to a closer review of vertical mergers, it’s not new…. [But,] to reiterate, our approach to vertical mergers has not changed, and our recent enforcement efforts are consistent with the Division’s long-standing, bipartisan approach to analyzing such mergers. We’ll continue to recognize that vertical mergers, in general, can yield significant economic efficiencies and benefit to competition.

Delrahim concluded his remarks by criticizing those who assume that the agency’s future enforcement decisions can be inferred from past cases with different facts, stressing that the agency employs an evidence-based, case-by-case approach to merger review:

Lumping all vertical transactions under the same umbrella, by comparison, obscures the reality that we conduct a vigorous investigation, aided by over 50 PhD economists in these markets, to make sure that we as lawyers don’t steer too far without the benefits of their views in each of these instances.

Arguably this was a rebuke directed at those, like Disney and Fox’s board, who are quick to ascribe increased regulatory risk to a Comcast/Fox tie-up because the DOJ challenged the AT&T/Time Warner merger. Recall that, in its proxy statement, the Fox board explained that it rejected Comcast’s earlier bid in favor of Disney’s in part because of “the regulatory risks presented by the DOJ’s unanticipated opposition to the proposed vertical integration of the AT&T / Time Warner transaction.”

I’ll likely have more to add once the AT&T/Time Warner decision is out. But in the meantime (and with apologies to Mark Twain), the takeaway is clear: Reports of the death of vertical mergers have been greatly exaggerated.