
Last week, the DOJ cleared the merger of CVS Health and Aetna (conditional on Aetna’s divesting its Medicare Part D business), a merger that, as I previously noted at a House Judiciary hearing, “presents a creative effort by two of the most well-informed and successful industry participants to try something new to reform a troubled system.” (My full testimony is available here).

Of course it’s always possible that the experiment will fail — that the merger won’t “revolutioniz[e] the consumer health care experience” in the way that CVS and Aetna are hoping. But, from an antitrust perspective, it’s a low-risk effort to address some of the challenges confronting the healthcare industry — and apparently the DOJ agrees.

I discuss the weakness of the antitrust arguments against the merger at length in my testimony. What I particularly want to draw attention to here is how this merger — like many vertical mergers — represents business model innovation by incumbents.

The CVS/Aetna merger is just one part of a growing private-sector movement in the healthcare industry to adopt new (mostly) vertical arrangements that seek to move beyond some of the structural inefficiencies that have plagued healthcare in the United States since World War II. Indeed, ambitious and interesting as it is, the merger arises amidst a veritable wave of innovative, vertical healthcare mergers and other efforts to integrate the healthcare services supply chain in novel ways.

These sorts of efforts (and the current DOJ’s apparent support for them) should be applauded and encouraged. I need not rehash the economic literature on vertical restraints here (see, e.g., Lafontaine & Slade). But especially where government interventions have already impaired the efficient workings of a market (as they surely have, in spades, in healthcare), it is important not to compound the error by trying to micromanage private efforts to restructure around those constraints.

Current trends in private-sector-driven healthcare reform

In the past, the most significant healthcare industry mergers have largely been horizontal (i.e., between two insurance providers, or two hospitals) or “traditional” business model mergers for the industry (i.e., vertical mergers aimed at building out managed care organizations). This pattern suggests a sort of fealty to the status quo, with insurers interested primarily in expanding their insurance business or providers interested in expanding their capacity to provide medical services.

Today’s health industry mergers and ventures seem more frequently to be different in character, and they portend an industry-wide experiment in the provision of vertically integrated healthcare that we should enthusiastically welcome.

Drug pricing and distribution innovations

To begin with, the CVS/Aetna deal, along with the also recently approved Cigna-Express Scripts deal, solidifies the vertical integration of pharmacy benefit managers (PBMs) with insurers.

But a number of other recent arrangements and business models center around relationships among drug manufacturers, pharmacies, and PBMs, and these tend to minimize the role of insurers. While not a “vertical” arrangement, per se, Walmart’s generic drug program, for example, offers $4 prescriptions to customers regardless of insurance (the typical generic drug copay for patients covered by employer-provided health insurance is $11), and Walmart does not seek or receive reimbursement from health plans for these drugs. It’s been offering this program since 2006, but in 2016 it entered into a joint buying arrangement with McKesson, a pharmaceutical wholesaler (itself vertically integrated with Rexall pharmacies), to negotiate lower prices. The idea, presumably, is that Walmart will entice consumers to its stores with the lure of low-priced generic prescriptions in the hope that they will buy other items while they’re there. That prospect presumably makes it worthwhile to route around insurers and PBMs, and their reimbursements.

Meanwhile, both Express Scripts and CVS Health (two of the country’s largest PBMs) have made moves toward direct-to-consumer sales themselves, establishing pricing for a small number of drugs independently of health plans and often in partnership with drug makers directly.   

Also apparently focused on disrupting traditional drug distribution arrangements, Amazon has recently purchased online pharmacy PillPack (out from under Walmart, as it happens), and with it received pharmacy licenses in 49 states. The move introduces a significant new integrated distributor/retailer, and puts competitive pressure on other retailers and distributors and potentially insurers and PBMs, as well.

Whatever its role in driving the CVS/Aetna merger (and I believe it is smaller than many reports like to suggest), Amazon’s moves in this area demonstrate the fluid nature of the market, and the opportunities for a wide range of firms to create efficiencies in the market and to lower prices.

At the same time, the differences between Amazon and CVS/Aetna highlight the scope of product and service differentiation that should contribute to the ongoing competitiveness of these markets following mergers like this one.

While Amazon inarguably excels at logistics and the routinizing of “back office” functions, it seems unlikely for the foreseeable future to be able to offer (or to be interested in offering) a patient interface that can rival the service offerings of a brick-and-mortar CVS pharmacy combined with an outpatient clinic and its staff and bolstered by the capabilities of an insurer like Aetna. To be sure, online sales and fulfillment may put price pressure on important, largely mechanical functions, but, like much technology, it is first and foremost a complement to services offered by humans, rather than a substitute. (In this regard it is worth noting that McKesson has long been offering Amazon-like logistics support for both online and brick-and-mortar pharmacies. “To some extent, we were Amazon before it was cool to be Amazon,” McKesson CEO John Hammergren said on a recent earnings call.)

Treatment innovations

Other efforts focus on integrating insurance and treatment functions or on bringing together other, disparate pieces of the healthcare industry in interesting ways — all seemingly aimed at finding innovative, private solutions to some of the costly complexities that plague the healthcare market.

Walmart, for example, announced a deal with Quest Diagnostics last year to experiment with offering diagnostic testing services and potentially other basic healthcare services inside of some Walmart stores. While such an arrangement may simply be a means of making doctor-prescribed diagnostic tests more convenient, it may also suggest an effort to expand the availability of direct-to-consumer (patient-initiated) testing (currently offered by Quest in Missouri and Colorado) in states that allow it. A partnership with Walmart to market and oversee such services has the potential to dramatically expand their use.

Capping off (for now) a buying frenzy in recent years that included the purchase of the PBM CatamaranRx, UnitedHealth is seeking approval from the FTC for the proposed merger of its Optum unit with the DaVita Medical Group — a move that would significantly expand UnitedHealth’s ability to offer medical services (including urgent care, outpatient surgeries, and health clinic services), give it a significant group of doctors’ clinics throughout the U.S., and turn UnitedHealth into the largest employer of doctors in the country. But of course this isn’t a traditional managed care merger — it represents a significant bet on the decentralized, ambulatory care model that has been slowly replacing significant parts of the traditional, hospital-centric care model for some time now.

And, perhaps most interestingly, some recent moves are bringing together drug manufacturers and diagnostic and care providers in innovative ways. Swiss pharmaceutical company Roche recently announced that “it would buy the rest of U.S. cancer data company Flatiron Health for $1.9 billion to speed development of cancer medicines and support its efforts to price them based on how well they work.” Not only is the deal intended to improve Roche’s drug development process by integrating patient data, it is also aimed at accommodating efforts to shift the pricing of drugs, like the pricing of medical services generally, toward an outcome-based model.

Similarly interesting, and in a related vein, early this year a group of hospital systems including Intermountain Health, Ascension, and Trinity Health announced plans to begin manufacturing generic prescription drugs. This development further reflects the perceived benefits of vertical integration in healthcare markets, and the move toward creative solutions to the unique complexity of coordinating the many interrelated layers of healthcare provision. In this case,

[t]he nascent venture proposes a private solution to ensure contestability in the generic drug market and consequently overcome the failures of contracting [in the supply and distribution of generics]…. The nascent venture, however it solves these challenges and resolves other choices, will have important implications for the prices and availability of generic drugs in the US.

More enforcement decisions like CVS/Aetna and Bayer/Monsanto; fewer like AT&T/Time Warner

In the face of all this disruption, it’s difficult to credit anticompetitive fears like those expressed by the AMA in opposing the CVS-Aetna merger and in a recent CEA report on pharmaceutical pricing, both of which are premised on the assumption that drug distribution is unavoidably dominated by a few PBMs in a well-defined, highly concentrated market. Creative arrangements like the CVS-Aetna merger and the initiatives described above (among a host of others) indicate an ease of entry, the fluidity of traditional markets, and a degree of business model innovation that suggest a great deal more competitiveness than static PBM market numbers would suggest.

This kind of incumbent innovation through vertical restructuring is an increasingly important theme in antitrust, and efforts to tar such transactions with purported evidence of static market dominance are simply misguided.

While the current DOJ’s misguided (and, remarkably, continuing) attempt to stop the AT&T/Time Warner merger is an aberrant step in the wrong direction, the leadership at the Antitrust Division generally seems to get it. Indeed, in spite of strident calls for stepped-up enforcement in the always-controversial ag-biotech industry, the DOJ recently approved three vertical ag-biotech mergers in fairly rapid succession.

As I noted in a discussion of those ag-biotech mergers, but equally applicable here, regulatory humility should continue to carry the day when it comes to structural innovation by incumbent firms:

But it is also important to remember that innovation comes from within incumbent firms, as well, and, often, that the overall level of innovation in an industry may be increased by the presence of large firms with economies of scope and scale.

In sum, and to paraphrase Olympia Dukakis’ character in Moonstruck: “what [we] don’t know about [the relationship between innovation and market structure] is a lot.”

What we do know, however, is that superficial, concentration-based approaches to antitrust analysis will likely overweight presumed foreclosure effects and underweight innovation effects.

We shouldn’t fetishize entry, or access, or head-to-head competition over innovation, especially where consumer welfare may be significantly improved by a reduction in the former in order to get more of the latter.

“Calm Down about Common Ownership” is the title of a piece Thom Lambert and I published in the Fall 2018 issue of Regulation, which just hit online. The article is a condensed version of our recent paper, “The Case for Doing Nothing About Institutional Investors’ Common Ownership of Small Stakes in Competing Firms.” In short, we argue that concern about common ownership lacks a theoretically sound foundation and is built upon faulty empirical support. We also explain why proposed “fixes” would do more harm than good.

Over the past several weeks we wrote a series of blog posts here that summarize or expand upon different parts of our argument. To pull them all into one place:

Today the European Commission launched its latest salvo against Google, issuing a decision in its three-year antitrust investigation into the company’s agreements for distribution of the Android mobile operating system. The massive fine levied by the Commission will dominate the headlines, but the underlying legal theory and proposed remedies are just as notable — and just as problematic.

The nirvana fallacy

It is sometimes said that the most important question in all of economics is “compared to what?” UCLA economist Harold Demsetz — one of the most important regulatory economists of the past century — coined the term “nirvana fallacy” to critique would-be regulators’ tendency to compare messy, real-world economic circumstances to idealized alternatives, and to justify policies on the basis of the discrepancy between them. Wishful thinking, in other words.

The Commission’s Android decision falls prey to the nirvana fallacy. It conjures a world in which Google offers its Android operating system on unrealistic terms, prohibits it from doing otherwise, and neglects the actual consequences of such a demand.

The idea at the core of the Commission’s decision is that by making its own services (especially Google Search and Google Play Store) easier to access than competing services on Android devices, Google has effectively foreclosed rivals from effective competition. In order to correct that claimed defect, the Commission demands that Google refrain from engaging in practices that favor its own products in its Android licensing agreements:

At a minimum, Google has to stop and to not re-engage in any of the three types of practices. The decision also requires Google to refrain from any measure that has the same or an equivalent object or effect as these practices.

The basic theory is straightforward enough, but its application here reflects a troubling departure from the underlying economics and a romanticized embrace of industrial policy that is unsupported by the realities of the market.

In a recent interview, European Commission competition chief Margrethe Vestager offered a revealing insight into her thinking about her oversight of digital platforms, and perhaps the economy in general: “My concern is more about whether we get the right choices,” she said. Asked about Facebook, for example, she specified exactly what she thinks the “right” choice looks like: “I would like to have a Facebook in which I pay a fee each month, but I would have no tracking and advertising and the full benefits of privacy.”

Some consumers may well be sympathetic with her preference (and even share her specific vision of what Facebook should offer them). But what if competition doesn’t result in our — or, more to the point, Margrethe Vestager’s — preferred outcomes? Should competition policy nevertheless enact the idiosyncratic consumer preferences of a particular regulator? What if offering consumers the “right” choices comes at the expense of other things they value, like innovation, product quality, or price? And, if so, can antitrust enforcers actually engineer a better world built around these preferences?

Android’s alleged foreclosure… that doesn’t really foreclose anything

The Commission’s primary concern is with the terms of Google’s deal: In exchange for royalty-free access to Android and a set of core, Android-specific applications and services (like Google Search and Google Maps), Google imposes a few contractual conditions.

Google allows manufacturers to use the Android platform — in which the company has invested (and continues to invest) billions of dollars — for free. It does not require device makers to include any of its core, Google-branded features. But if a manufacturer does decide to use any of them, it must include all of them, and make Google Search the device default. In another (much smaller) set of agreements, Google also offers device makers a small share of its revenue from Search if they agree to pre-install only Google Search on their devices (although users remain free to download and install any competing services they wish).

Essentially, that’s it. Google doesn’t allow device makers to pick and choose between parts of the ecosystem of Google products, free-riding on Google’s brand and investments. But manufacturers are free to use the Android platform and to develop their own competing brand built upon Google’s technology.

Other apps may be installed in addition to Google’s core apps. Google Search need not be the exclusive search service, but it must be offered out of the box as the default. Google Play and Chrome must be made available to users, but other app stores and browsers may be pre-installed and even offered as the default. And device makers who choose to do so may share in Search revenue by pre-installing Google Search exclusively — but users can and do install a different search service.

Alternatives to all of Google’s services (including Search) abound on the Android platform. It’s trivial both to install them and to set them as the default. Meanwhile, device makers regularly choose to offer these apps alongside Google’s services, and some, like Samsung, have developed entire customized app suites of their own. Still others, like Amazon (whose Google-free tablets are regularly ranked as the best-rated and most popular in Europe), pre-install no Google apps and use Android without any of these constraints.

By contrast, Apple bundles its operating system with its devices, bypasses third-party device makers entirely, and offers consumers access to its operating system only if they pay (lavishly) for one of the very limited number of devices the company offers. It is perhaps not surprising — although it is enlightening — that Apple earns more revenue in an average quarter from iPhone sales than Google is reported to have earned in total from Android since it began offering it in 2008.

Reality — and the limits it imposes on efforts to manufacture nirvana

The logic behind Google’s approach to Android is obvious: It is the extension of Google’s “advertisers pay” platform strategy to mobile. Rather than charging device makers (and thus consumers) directly for its services, Google earns its revenue by charging advertisers for targeted access to users via Search. Remove Search from mobile devices and you remove the mechanism by which Google gets paid.

It’s true that most device makers opt to offer Google’s suite of services to European users, and that most users opt to keep Google Search as the default on their devices — that is, indeed, the hoped-for effect, and necessary to ensure that Google earns a return on its investment.

That users often choose to keep using Google services instead of installing alternatives, and that device makers typically choose to engineer their products around the Google ecosystem, isn’t primarily the result of a Google-imposed mandate; it’s the result of consumer preferences for Google’s offerings in lieu of readily available alternatives.

The EU decision against Google appears to imagine a world in which Google will continue to develop Android and allow device makers to use the platform and Google’s services for free, even if the likelihood of recouping its investment is diminished.

The Commission also assessed in detail Google’s arguments that the tying of the Google Search app and Chrome browser were necessary, in particular to allow Google to monetise its investment in Android, and concluded that these arguments were not well founded. Google achieves billions of dollars in annual revenues with the Google Play Store alone, it collects a lot of data that is valuable to Google’s search and advertising business from Android devices, and it would still have benefitted from a significant stream of revenue from search advertising without the restrictions.

For the Commission, Google’s earned enough [trust me: you should follow the link. It’s my favorite joke…].

But that world in which Google won’t alter its investment decisions based on a government-mandated reduction in its allowable return on investment doesn’t exist; it’s a fanciful Nirvana.

Google’s real alternatives to the status quo are charging for the use of Android, closing the Android platform and distributing it (like Apple) only on a fully integrated basis, or discontinuing Android.

In reality, and compared to these actual alternatives, Google’s restrictions are trivial. Remember, Google doesn’t insist that Google Search be exclusive, only that it benefit from a “leg up” by being pre-installed as the default. And on this thin reed Google finances the development and maintenance of the (free) Android operating system and all of the other (free) apps from which Google otherwise earns little or no revenue.

It’s hard to see how consumers, device makers, or app developers would be made better off without Google’s restrictions, at least not in the real world, in which the alternative is one of the three manifestly less desirable options mentioned above.

Missing the real competition for the trees

What’s more, while ostensibly aimed at increasing competition, the Commission’s proposed remedy — like the conduct it addresses — doesn’t relate to Google’s most significant competitors at all.

Facebook, Instagram, Firefox, Amazon, Spotify, Yelp, and Yahoo, among many others, are some of the most popular apps on Android phones, including in Europe. They aren’t foreclosed by Google’s Android distribution terms, and it’s even hard to imagine that they would be more popular if only Android phones didn’t come with, say, Google Search pre-installed.

It’s a strange anticompetitive story that has Google allegedly foreclosing insignificant competitors while apparently ignoring its most substantial threats.

The primary challenges Google now faces are from Facebook drawing away the most valuable advertising and Amazon drawing away the most valuable product searches (and increasingly advertising, as well). The fact that Google’s challenged conduct has never shifted in order to target these competitors as their threat emerged, and has had no apparent effect on these competitive dynamics, says all one needs to know about the merits of the Commission’s decision and the value of its proposed remedy.

In reality, as Demsetz suggested, Nirvana cannot be designed by politicians, especially in complex, modern technology markets. Consumers’ best hope for something close — continued innovation, low prices, and voluminous choice — lies in the evolution of markets spurred by consumer demand, not regulators’ efforts to engineer them.

Our story begins on the morning of January 9, 2007. Few people knew it at the time, but the world of wireless communications was about to change forever. Steve Jobs walked on stage wearing his usual turtleneck, and proceeded to reveal the iPhone. The rest, as they say, is history. The iPhone moved the wireless communications industry towards a new paradigm. No more physical keyboards, clamshell bodies, and protruding antennae. All of these were replaced by a beautiful black design, a huge touchscreen (3.5” was big for that time), a rear-facing camera, and (a little bit later) a revolutionary new way to consume applications: the App Store. Sales soared and Apple’s stock started an upward trajectory that would see it become one of the world’s most valuable companies.

The story could very well have ended there. If it had, we might all be using iPhones today. However, years before, Google had commenced its own march into the wireless communications space by purchasing a small startup called Android. A first phone had initially been slated for release in late 2007. But Apple’s iPhone announcement sent Google back to the drawing board. It took Google and its partners until 2010 to come up with a competitive answer – the Google Nexus One produced by HTC.

Understanding the strategy that Google put in place during this three-year timespan is essential to understanding the European Commission’s Google Android decision.

How do you beat one of the great innovations?

In order to overthrow — or even merely compete with — the iPhone, Google faced the same dilemma that most second movers have to contend with: imitate or differentiate. Its solution was a mix of both. It took the touchscreen, camera, and applications, but departed from Apple’s model in one key respect. Whereas Apple controls the iPhone from end to end, Google opted for a licensed, open-source operating system that substitutes a more decentralized approach for Apple’s so-called “walled garden.”

Google and a number of partners founded the Open Handset Alliance (“OHA”) in November 2007. This loose association of network operators, software companies and handset manufacturers became the driving force behind the Android OS. Through the OHA, Google and its partners have worked to develop minimal specifications for OHA-compliant Android devices in order to ensure that all levels of the device ecosystem — from device makers to app developers — function well together. As its initial press release boasts, through the OHA:

Handset manufacturers and wireless operators will be free to customize Android in order to bring to market innovative new products faster and at a much lower cost. Developers will have complete access to handset capabilities and tools that will enable them to build more compelling and user-friendly services, bringing the Internet developer model to the mobile space. And consumers worldwide will have access to less expensive mobile devices that feature more compelling services, rich Internet applications and easier-to-use interfaces — ultimately creating a superior mobile experience.

The open source route has a number of advantages — notably the improved division of labor — but it is not without challenges. One key difficulty lies in coordinating and incentivizing the dozens of firms that make up the alliance. Google must not only keep the diverse Android ecosystem directed toward a common, compatible goal, it also has to monetize a product that, by its very nature, is given away free of charge. It is Google’s answers to these two problems that set off the Commission’s investigation.

The first problem is a direct consequence of Android’s decentralization. Whereas there are only a small number of iPhones (the couple of models that Apple markets at any given time) running the same operating system, Android comes in a jaw-dropping array of flavors: some devices are produced by Google itself; others are the fruit of high-end manufacturers such as Samsung and LG; there are also so-called “flagship killers” like OnePlus, and budget phones from the likes of Motorola and Honor (one of Huawei’s brands). The differences don’t stop there. Manufacturers like Samsung, Xiaomi, and LG (to name but a few) have tinkered with the basic Android setup: Samsung phones heavily incorporate its Bixby virtual assistant, while Xiaomi packs in a novel user interface. The upshot is that the Android marketplace is tremendously diverse.

Managing this variety is challenging, to say the least (preventing projects from unravelling into a myriad of forks is always an issue for open source projects). Google and the OHA have come up with an elegant solution. The alliance penalizes so-called “incompatible” devices — that is, handsets whose software or hardware stray too far from a predetermined series of specifications. When this is the case, Google may refuse to license its proprietary applications (most notably the Play Store). This minimum level of uniformity ensures that apps will run smoothly on all devices. It also provides users with a consistent experience (thereby protecting the Android brand) and reduces the cost of developing applications for Android. Unsurprisingly, Android developers have lauded these “anti-fragmentation” measures, branding the Commission’s case a disaster.

A second important problem stems from the fact that the Android OS is an open source project. Device manufacturers can thus license the software free of charge. This is no small advantage. It shaves precious dollars from the price of Android smartphones, thus opening up the budget end of the market. Although there are numerous factors at play, it should be noted that a top-of-the-range Samsung Galaxy S9+ is roughly 30% cheaper ($819) than its Apple counterpart, the iPhone X ($1165).
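For what it’s worth, the 30% figure is simple arithmetic on the two list prices quoted above (a back-of-the-envelope check on list prices, not a claim about average selling prices):

$$\frac{\$1165 - \$819}{\$1165} \;=\; \frac{\$346}{\$1165} \;\approx\; 0.297 \;\approx\; 30\%.$$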

Offering a competitive operating system free of charge might provide a fantastic deal for consumers, but it poses obvious business challenges. How can Google and other members of the OHA earn a return on the significant amounts of money poured into developing, improving, and marketing Android devices? As is often the case with open source projects, they essentially rely on complementarities. Google produces the Android OS in the hope that it will boost users’ consumption of its profitable, ad-supported services (Google Search in particular). This is sometimes referred to as a loss leader or complementary goods strategy.

Google uses two important sets of contractual provisions to cement this loss leader strategy. First, it seemingly bundles a number of proprietary applications together. Manufacturers must pre-load the Google Search and Chrome apps in order to obtain the Play Store app (the linchpin on which the Android ecosystem sits). Second, Google has concluded a number of “revenue sharing” deals with manufacturers and network operators. These companies receive monetary compensation when the Google Search bar is displayed prominently on a user’s home screen. In effect, they are receiving a cut of the marginal revenue that the use of this search bar generates for Google. Both of these measures ultimately nudge users — but do not force them, as neither prevents users from installing competing apps — into using Google’s most profitable services.

Readers would be forgiven for thinking that this is a win-win situation. Users get a competitive product free of charge, while Google and other members of the OHA earn enough money to compete against Apple.

The Commission is of another mind, however.

The Commission’s hubris

The European Commission believes that Google is hurting competition. Though the text of the decision is not yet available, the thrust of its argument is that Google’s anti-fragmentation measures prevent software developers from launching competing OSs, while the bundling and revenue sharing both thwart rival search engines.

This analysis runs counter to some rather obvious facts:

  • For a start, the Android ecosystem is vibrant. Numerous firms have launched forked versions of Android, both with and without Google’s apps. Amazon’s Fire line of devices is a notable example.
  • Second, although Google’s behavior does have an effect on the search engine market, there is nothing anticompetitive about it. Yahoo could very well have avoided its high-profile failure if, way back in 2005, it had understood the importance of the mobile internet. At the time, it still had a 30% market share, compared to Google’s 36%. Firms that fail to seize upon business opportunities will fall out of the market. This is not a bug; it is possibly the most important feature of market economies. It reveals the products that consumers prefer and stops resources from being allocated to less valuable propositions.
  • Last but not least, Google’s behavior does not prevent other search engines from placing their own search bars or virtual assistants on smartphones. This is essentially what Samsung has done by ditching Google’s assistant in favor of its Bixby service. In other words, Google is merely competing with other firms to place key apps on or near the home screen of devices.

Even if the Commission’s reasoning were somehow correct, the competition watchdog is using a sledgehammer to crack a nut. The potential repercussions for Android, the software industry, and European competition law are great:

  • For a start, the Commission risks significantly weakening Android’s competitive position relative to Apple. Android is a complex ecosystem. The idea that it is possible to bring incremental changes to its strategy without threatening the viability of the whole is a sign of the Commission’s hubris.
  • More broadly, the harsh treatment of Google could have significant incentive effects for other tech platforms. As others have already pointed out, the Commission’s decision rests on the idea that dominant firms should not be allowed to favor their own services compared to those of rivals. Taken at face value, this anti-discrimination policy will push firms to design closed platforms. If rivals are excluded from the very start, there is no one against whom to discriminate, and antitrust watchdogs are kept at bay (note, too, that the Commission is acting against Google’s marginal preference for its own services, rather than Apple’s far-more-substantial preferencing of its own services). Moving to a world of only walled gardens might harm users and innovators alike.

Over the next couple of days and weeks, many will jump to the Commission’s defense. They will see its action as a necessary step against the abstract “power” of Silicon Valley’s tech giants. Rivals will feel vindicated. But when all is done and dusted, there seems to be little doubt that the decision is misguided. The Commission will have struck a blow to the heart of the most competitive offering in the smartphone space. And consumers will be the biggest losers.

This is not what the competition laws were intended to achieve.

Today would have been Henry Manne’s 90th birthday. When he passed away in 2015 he left behind an immense and impressive legacy. In 1991, at the inaugural meeting of the American Law & Economics Association (ALEA), Manne was named a Life Member of ALEA and, along with Nobel Laureate Ronald Coase and federal appeals court judges Richard Posner and Guido Calabresi, one of the four Founders of Law and Economics. The organization I founded, the International Center for Law & Economics, is dedicated to his memory, along with that of his great friend and mentor, UCLA economist Armen Alchian.

Manne is best known for his work in corporate governance and securities law and regulation, of course. But sometimes forgotten is that his work on the market for corporate control was motivated by concerns about analytical flaws in merger enforcement. As former FTC commissioners Maureen Ohlhausen and Joshua Wright noted in a 2015 dissenting statement:

The notion that the threat of takeover would induce current managers to improve firm performance to the benefit of shareholders was first developed by Henry Manne. Manne’s pathbreaking work on the market for corporate control arose out of a concern that antitrust constraints on horizontal mergers would distort its functioning. See Henry G. Manne, Mergers and the Market for Corporate Control, 73 J. POL. ECON. 110 (1965).

But Manne’s focus on antitrust didn’t end in 1965. Moreover, throughout his life he was a staunch critic of misguided efforts to expand the power of government, especially when these efforts claimed to have their roots in economic reasoning — which, invariably, was hopelessly flawed. As his obituary notes:

In his teaching, his academic writing, his frequent op-eds and essays, and his work with organizations like the Cato Institute, the Liberty Fund, the Institute for Humane Studies, and the Mont Pèlerin Society, among others, Manne advocated tirelessly for a clearer understanding of the power of markets and competition and the importance of limited government and economically sensible regulation.

Thus it came to be, in 1974, that Manne was called to testify before the Senate Judiciary Committee, Subcommittee on Antitrust and Monopoly, on Michigan Senator Philip A. Hart’s proposed Industrial Reorganization Act. His testimony is a tour de force, and a prescient rejoinder to the faddish advocates of today’s “hipster antitrust”— many of whom hearken longingly back to the antitrust of the 1960s and its misguided “gurus.”

Henry Manne’s trenchant testimony critiquing the Industrial Reorganization Act and its (ostensible) underpinnings is reprinted in full in this newly released ICLE white paper (with introductory material by Geoffrey Manne):

Henry G. Manne: Testimony on the Proposed Industrial Reorganization Act of 1973 — What’s Hip (in Antitrust) Today Should Stay Passé

Sen. Hart proposed the Industrial Reorganization Act in order to address perceived problems arising from industrial concentration. The bill was rooted in the belief that industry concentration led inexorably to monopoly power; that monopoly power, however obtained, posed an inexorable threat to freedom and prosperity; and that the antitrust laws (i.e., the Sherman and Clayton Acts) were insufficient to address the purported problems.

That sentiment (rooted in the reflexive application of the largely discredited structure-conduct-performance, or SCP, paradigm) had already become largely passé among economists in the 70s, but it has resurfaced today as the asserted justification for similar (although less onerous) antitrust reform legislation and the general approach to antitrust analysis commonly known as “hipster antitrust.”

The critiques leveled against the asserted economic underpinnings of efforts like the Industrial Reorganization Act are as relevant today as they were then. As Henry Manne notes in his testimony:

To be successful in this stated aim [“getting the government out of the market”] the following dreams would have to come true: The members of both the special commission and the court established by the bill would have to be satisfied merely to complete their assigned task and then abdicate their tremendous power and authority; they would have to know how to satisfactorily define and identify the limits of the industries to be restructured; the Government’s regulation would not sacrifice significant efficiencies or economies of scale; and the incentive for new firms to enter an industry would not be diminished by the threat of a punitive response to success.

The lessons of history, economic theory, and practical politics argue overwhelmingly against every one of these assumptions.

Both the subject matter of and impetus for the proposed bill (as well as Manne’s testimony explaining its economic and political failings) are eerily familiar. The preamble to the Industrial Reorganization Act asserts that

competition… preserves a democratic society, and provides an opportunity for a more equitable distribution of wealth while avoiding the undue concentration of economic, social, and political power; [and] the decline of competition in industries with oligopoly or monopoly power has contributed to unemployment, inflation, inefficiency, an underutilization of economic capacity, and the decline of exports….

The echoes in today’s efforts to rein in corporate power by adopting structural presumptions are unmistakable. Compare, for example, this language from Sen. Klobuchar’s Consolidation Prevention and Competition Promotion Act of 2017:

[C]oncentration that leads to market power and anticompetitive conduct makes it more difficult for people in the United States to start their own businesses, depresses wages, and increases economic inequality;

undue market concentration also contributes to the consolidation of political power, undermining the health of democracy in the United States; [and]

the anticompetitive effects of market power created by concentration include higher prices, lower quality, significantly less choice, reduced innovation, foreclosure of competitors, increased entry barriers, and monopsony power.

Remarkably, Sen. Hart introduced his bill as “an alternative to government regulation and control.” Somehow, it was the antithesis of “government control” to introduce legislation that, in Sen. Hart’s words,

involves changing the life styles of many of our largest corporations, even to the point of restructuring whole industries. It involves positive government action, not to control industry but to restore competition and freedom of enterprise in the economy.

Like today’s advocates of increased government intervention to design the structure of the economy, Sen. Hart sought — without a trace of irony — to “cure” the problem of politicized, ineffective enforcement by doubling down on the power of the enforcers.

Henry Manne was having none of it. As he pointedly notes in his testimony, the worst problems of monopoly power are of the government’s own making. The real threat to democracy, freedom, and prosperity is the political power amassed in the bureaucratic apparatus that frequently confers monopoly, at least as much as the monopoly power it spawns:

[I]t takes two to make that bargain [political protection and subsidies in exchange for lobbying]. And as we look around at various industries we are constrained to ask who has not done this. And more to the point, who has not succeeded?

It is unhappily almost impossible to name a significant industry in the United States that has not gained some degree of protection from the rigors of competition from Federal, State or local governments.

* * *

But the solution to inefficiencies created by Government controls cannot lie in still more controls. The politically responsible task ahead for Congress is to dismantle our existing regulatory monster before it strangles us.

We have spawned a gigantic bureaucracy whose own political power threatens the democratic legitimacy of government.

We are rapidly moving toward the worst features of a centrally planned economy with none of the redeeming political, economic, or ethical features usually claimed for such systems.

The new white paper includes Manne’s testimony in full, including his exchange with Sen. Hart and committee staffers following his prepared remarks.

It is, sadly, nearly as germane today as it was then.

One final note: The subtitle for the paper is a reference to the song “What Is Hip?” by Tower of Power. Its lyrics are decidedly apt:

You done went and found you a guru,

In your effort to find you a new you,

And maybe even managed

To raise your conscious level.

While you’re striving to find the right road,

There’s one thing you should know:

What’s hip today

Might become passé.

— Tower of Power, What Is Hip? (Emilio Castillo, John David Garibaldi & Stephen M. Kupka, What Is Hip? (Bob-A-Lew Songs 1973), from the album TOWER OF POWER (Warner Bros. 1973))

And here’s the song, in all its glory:

One of the hottest antitrust topics of late has been institutional investors’ “common ownership” of minority stakes in competing firms.  Writing in the Harvard Law Review, Einer Elhauge proclaimed that “[a]n economic blockbuster has recently been exposed”—namely, “[a] small group of institutions has acquired large shareholdings in horizontal competitors throughout our economy, causing them to compete less vigorously with each other.”  In the Antitrust Law Journal, Eric Posner, Fiona Scott Morton, and Glen Weyl contended that “the concentration of markets through large institutional investors is the major new antitrust challenge of our time.”  Those same authors took to the pages of the New York Times to argue that “[t]he great, but mostly unknown, antitrust story of our time is the astonishing rise of the institutional investor … and the challenge that it poses to market competition.”

Not surprisingly, these scholars have gone beyond just identifying a potential problem; they have also advocated policy solutions.  Elhauge has called for allowing government enforcers and private parties to use Section 7 of the Clayton Act, the provision primarily used to prevent anticompetitive mergers, to police institutional investors’ ownership of minority positions in competing firms.  Posner et al., concerned “that private litigation or unguided public litigation could cause problems because of the interactive nature of institutional holdings on competition,” have proposed that federal antitrust enforcers adopt an enforcement policy that would encourage institutional investors either to avoid common ownership of firms in concentrated industries or to limit their influence over such firms by refraining from voting their shares.

The position of these scholars is thus (1) that common ownership by institutional investors significantly diminishes competition in concentrated industries, and (2) that additional antitrust intervention—beyond generally applicable rules on, say, hub-and-spoke conspiracies and anticompetitive information exchanges—is appropriate to prevent competitive harm.

Mike Sykuta and I have recently posted a paper taking issue with this two-pronged view.  With respect to the first prong, we contend that there are serious problems with both the theory of competitive harm stemming from institutional investors’ common ownership and the empirical evidence that has been marshalled in support of that theory.  With respect to the second, we argue that even if competition were softened by institutional investors’ common ownership of small minority interests in competing firms, the unintended negative consequences of an antitrust fix would outweigh any benefits from such intervention.

Over the next few days, we plan to unpack some of the key arguments in our paper, The Case for Doing Nothing About Institutional Investors’ Common Ownership of Small Stakes in Competing Firms.  In the meantime, we encourage readers to download the paper and send us any comments.

The paper’s abstract is below the fold.

As I explain in my new book, How to Regulate, sound regulation requires thinking like a doctor.  When addressing some “disease” that reduces social welfare, policymakers should catalog the available “remedies” for the problem, consider the implementation difficulties and “side effects” of each, and select the remedy that offers the greatest net benefit.

If we followed that approach in deciding what to do about the way Internet Service Providers (ISPs) manage traffic on their networks, we would conclude that FCC Chairman Ajit Pai is exactly right:  The FCC should reverse its order classifying ISPs as common carriers (Title II classification) and leave matters of non-neutral network management to antitrust, the residual regulator of practices that may injure competition.

Let’s walk through the analysis.

Diagnose the Disease.  The primary concern of net neutrality advocates is that ISPs will block some Internet content or will slow or degrade transmission from content providers who do not pay for a “fast lane.”  Of course, if an ISP’s non-neutral network management impairs the user experience, it will lose business; the vast majority of Americans have access to multiple ISPs, and competition is growing by the day, particularly as mobile broadband expands.

But an ISP might still play favorites, despite the threat of losing some subscribers, if it has a relationship with content providers.  Comcast, for example, could opt to speed up content from Hulu, which streams programming of Comcast’s NBC subsidiary, or might slow down content from Netflix, whose streaming video competes with Comcast’s own cable programming.  Comcast’s losses in the distribution market (from angry consumers switching ISPs) might be less than its gains in the content market (from reducing competition there).
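To make the logic of that last sentence explicit (a stylized condition of my own, not a formula from How to Regulate or from any FCC order), non-neutral treatment of a rival’s content is rational for a vertically integrated ISP only when

$$\Delta\pi_{\text{content}} \;>\; \left|\Delta\pi_{\text{distribution}}\right|,$$

i.e., when the profit gained in the content market from disadvantaging the rival exceeds the profit lost in the distribution market to defecting subscribers. Whether that inequality holds in any given case is an empirical question, which foreshadows why a flexible, case-by-case remedy looks attractive below.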

It seems, then, that the “disease” that might warrant a regulatory fix is an anticompetitive vertical restraint of trade: a business practice in one market (distribution) that could restrain trade in another market (content production) and thereby reduce overall output in that market.

Catalog the Available Remedies.  The statutory landscape provides at least three potential remedies for this disease.

The simplest approach would be to leave the matter to antitrust, which applies in the absence of more focused regulation.  In recent decades, courts have revised the standards governing vertical restraints of trade so that antitrust, which used to treat such restraints in a ham-fisted fashion, now does a pretty good job separating pro-consumer restraints from anti-consumer ones.

A second legally available approach would be to craft narrowly tailored rules precluding ISPs from blocking, degrading, or favoring particular Internet content.  The U.S. Court of Appeals for the D.C. Circuit held that Section 706 of the 1996 Telecommunications Act empowered the FCC to adopt targeted net neutrality rules, even if ISPs are not classified as common carriers.  The court insisted that the rules not treat ISPs as common carriers (if they are not officially classified as such), but it provided a road map for tailored net neutrality rules. The FCC pursued this targeted, rules-based approach until President Obama pushed for a third approach.

In November 2014, reeling from a shellacking in the  midterm elections and hoping to shore up his base, President Obama posted a video calling on the Commission to assure net neutrality by reclassifying ISPs as common carriers.  Such reclassification would subject ISPs to Title II of the 1934 Communications Act, giving the FCC broad power to assure that their business practices are “just and reasonable.”  Prodded by the President, the nominally independent commissioners abandoned their targeted, rules-based approach and voted to regulate ISPs like utilities.  They then used their enhanced regulatory authority to impose rules forbidding the blocking, throttling, or paid prioritization of Internet content.

Assess the Remedies’ Limitations, Implementation Difficulties, and Side Effects.   The three legally available remedies — antitrust, tailored rules under Section 706, and broad oversight under Title II — offer different pros and cons, as I explained in How to Regulate:

The choice between antitrust and direct regulation generally (under either Section 706 or Title II) involves a tradeoff between flexibility and determinacy. Antitrust is flexible but somewhat indeterminate; it would condemn non-neutral network management practices that are likely to injure consumers, but it would permit such practices if they would lower costs, improve quality, or otherwise enhance consumer welfare. The direct regulatory approaches are rigid but clearer; they declare all instances of non-neutral network management to be illegal per se.

Determinacy and flexibility influence decision and error costs.  Because they are more determinate, ex ante rules should impose lower decision costs than would antitrust. But direct regulation’s inflexibility—automatic condemnation, no questions asked—will generate higher error costs. That’s because non-neutral network management is often good for end users. For example, speeding up the transmission of content for which delivery lags are particularly detrimental to the end-user experience (e.g., an Internet telephone call, streaming video) at the expense of content that is less lag-sensitive (e.g., digital photographs downloaded from a photo-sharing website) can create a net consumer benefit and should probably be allowed. A per se rule against non-neutral network management would therefore err fairly frequently. Antitrust’s flexible approach, informed by a century of economic learning on the output effects of contractual restraints between vertically related firms (like content producers and distributors), would probably generate lower error costs.

Although both antitrust and direct regulation offer advantages vis-à-vis each other, this isn’t simply a wash. The error cost advantage antitrust holds over direct regulation likely swamps direct regulation’s decision cost advantage. Extensive experience with vertical restraints on distribution has shown that they are usually good for consumers. For that reason, antitrust courts in recent decades have discarded their old per se rules against such practices—rules that resemble the FCC’s direct regulatory approach—in favor of structured rules of reason that assess liability based on specific features of the market and restraint at issue. While these rules of reason (standards, really) may be less determinate than the old, error-prone per se rules, they are not indeterminate. By relying on past precedents and the overarching principle that legality turns on consumer welfare effects, business planners and adjudicators ought to be able to determine fairly easily whether a non-neutral network management practice passes muster. Indeed, the fact that the FCC has uncovered only four instances of anticompetitive network management over the commercial Internet’s entire history—a period in which antitrust, but not direct regulation, has governed ISPs—suggests that business planners are capable of determining what behavior is off-limits. Direct regulation’s per se rule against non-neutral network management is thus likely to add error costs that exceed any reduction in decision costs. It is probably not the remedy that would be selected under this book’s recommended approach.

In any event, direct regulation under Title II, the currently prevailing approach, is certainly not the optimal way to address potentially anticompetitive instances of non-neutral network management by ISPs. Whereas any ex ante regulation of network management will confront the familiar knowledge problem, opting for direct regulation under Title II, rather than the more cabined approach under Section 706, adds adverse public choice concerns to the mix.

As explained earlier, reclassifying ISPs to bring them under Title II empowers the FCC to scrutinize the “justice” and “reasonableness” of nearly every aspect of every arrangement between content providers, ISPs, and consumers. Granted, the current commissioners have pledged not to exercise their Title II authority beyond mandating network neutrality, but public choice insights would suggest that this promised forbearance is unlikely to endure. FCC officials, who remain self-interest maximizers even when acting in their official capacities, benefit from expanding their regulatory turf; they gain increased power and prestige, larger budgets to manage, a greater ability to “make or break” businesses, and thus more opportunity to take actions that may enhance their future career opportunities. They will therefore face constant temptation to exercise the Title II authority that they have committed, as of now, to leave fallow. Regulated businesses, knowing that FCC decisions are key to their success, will expend significant resources lobbying for outcomes that benefit them or impair their rivals. If they don’t get what they want because of the commissioners’ voluntary forbearance, they may bring legal challenges asserting that the Commission has failed to assure just and reasonable practices as Title II demands. Many of the decisions at issue will involve the familiar “concentrated benefits/diffused costs” dynamic that tends to result in underrepresentation by those who are adversely affected by a contemplated decision. Taken together, these considerations make it unlikely that the current commissioners’ promised restraint will endure. Reclassification of ISPs so that they are subject to Title II regulation will probably lead to additional constraints on edge providers and ISPs.

It seems, then, that mandating net neutrality under Title II of the 1934 Communications Act is the least desirable of the three statutorily available approaches to addressing anticompetitive network management practices. The Title II approach combines the inflexibility and ensuing error costs of the Section 706 direct regulation approach with the indeterminacy and higher decision costs of an antitrust approach. Indeed, the indeterminacy under Title II is significantly greater than that under antitrust because the “just and reasonable” requirements of the Communications Act, unlike antitrust’s reasonableness requirements (no unreasonable restraint of trade, no unreasonably exclusionary conduct) are not constrained by the consumer welfare principle. Whereas antitrust always protects consumers, not competitors, the FCC may well decide that business practices in the Internet space are unjust or unreasonable solely because they make things harder for the perpetrator’s rivals. Business planners are thus really “at sea” when it comes to assessing the legality of novel practices.

All this implies that Internet businesses regulated by Title II must court the FCC’s favor, that FCC officials have more ability than ever to manipulate government power to private ends, that organized interest groups are well-poised to secure their preferences when benefits are concentrated and costs, though great, are widely dispersed, and that the regulators’ dictated outcomes—immune from market pressures reflecting consumers’ preferences—are less likely to maximize net social welfare. In opting for a Title II solution to what is essentially a market power problem, the powers that be gave short shrift to an antitrust approach, even though there was no natural monopoly justification for direct regulation. They paid little heed to the adverse consequences likely to result from rigid per se rules adopted under a highly discretionary (and politically manipulable) standard. They should have gone back to basics: assessing the disease to be remedied (market power), the full range of available remedies (including antitrust), and the potential side effects of each. In other words, they could’ve used this book.

How to Regulate’s full discussion of net neutrality and Title II is here: Net Neutrality Discussion in How to Regulate.

In recent years, the European Union’s (EU) administrative body, the European Commission (EC), increasingly has applied European competition law in a manner that undermines free market dynamics.  In particular, its approach to “dominant” firm conduct disincentivizes highly successful companies from introducing product and service innovations that enhance consumer welfare and benefit the economy – merely because they threaten to harm less efficient competitors.

For example, in 2004 the EC ordered Microsoft to offer a version of its Windows software suite that did not include its popular Windows Media Player (WMP) – despite the lack of consumer demand for a “dumbed down” Windows without WMP – and in 2013 it fined the company 561 million euros for failing to comply with a later commitment to offer users a choice of web browsers. This EC intrusion into software design has been described as a regulatory “quagmire.”

In June 2017 the EC fined Google 2.42 billion euros for allegedly favoring its own comparison shopping service over rival services in displaying Google search results – ignoring economic research showing that Google’s search policies benefit consumers. Google also faces potentially higher EC antitrust fines due to alleged abuses involving its Android software (the bundling of the popular Google Search and Chrome apps) – a product that has helped spur dynamic smartphone innovation and foster new markets.

Furthermore, other highly innovative firms face substantial EC competition law penalties: Apple and Amazon (favorable tax treatment deemed illegal “state aid”), Qualcomm (allegedly anticompetitive discounts), and Facebook (fined for supplying misleading information in connection with its WhatsApp acquisition).

Underlying the EC’s current enforcement philosophy is an implicit presumption that innovations by dominant firms violate competition law if they appear in any way to disadvantage competitors. That presumption forgoes any serious inquiry into the actual effects of dominant firms’ activities on the competitive process. It is a recipe for reduced innovation, as successful firms “pull their competitive punches” to avoid onerous penalties.

The European Court of Justice (ECJ) implicitly recognized this problem in its September 6, 2017 decision setting aside the European General Court’s affirmance of the EC’s 2009 fine of 1.06 billion euros against Intel. The case involved allegedly anticompetitive “loyalty rebates,” which allowed buyers to achieve cost savings on their Intel chip purchases. In remanding the case to the General Court for further legal and factual analysis, the ECJ stressed that, to hold Intel liable, the EC needed to do more than find a dominant position and categorize the rebates. It also needed to assess the “capacity of [Intel’s] . . . practice to foreclose competitors which are at least as efficient” and whether any exclusionary effect was outweighed by efficiencies that also benefit consumers. In short, evidence-based antitrust analysis was required; mere reliance on presumptions was not enough. Why? Because the departure of less efficient competitors is part and parcel of consumer welfare-based competition on the merits. As the ECJ cogently put it:

[I]t must be borne in mind that it is in no way the purpose of Article 102 TFEU [which prohibits abuse of a dominant position] to prevent an undertaking from acquiring, on its own merits, the dominant position on a market.  Nor does that provision seek to ensure that competitors less efficient than the undertaking with the dominant position should remain on the market . . . .  [N]ot every exclusionary effect is necessarily detrimental to competition. Competition on the merits may, by definition, lead to the departure from the market or the marginalisation of competitors that are less efficient and so less attractive to consumers from the point of view of, among other things, price, choice, quality or innovation[.]

Although the ECJ’s recent decision is commendable, it does not negate the fact that Intel had to wait eight years for its straightforward arguments to receive attention – and the saga is far from over, since the General Court must now address the matter yet again. These sorts of long-term delays, during which firms face great uncertainty (and the threat of further EC investigations and fines), are antithetical to innovative activity by enterprises deemed dominant. In short, unless and until the EC changes its competition policy perspective on dominant firm conduct (and there is no indication that such a change is imminent), innovation and economic dynamism will suffer.

Even if the EC dithers, the United Kingdom’s (UK) imminent withdrawal from the EU (Brexit) provides the UK with a unique opportunity to blaze a new competition policy trail – and perhaps, in so doing, to influence other jurisdictions.

In particular, Brexit will enable the UK’s antitrust enforcer, the Competition and Markets Authority (CMA), to adopt an outlook on competition policy in general – and on single-firm conduct in particular – that is more sensitive to innovation and economic dynamism. What might such a CMA enforcement policy look like? It should reject the EC’s current approach and focus instead on the actual effects of competitive activity, incorporating the insights of decision theory (see here, for example) and placing great weight on efficiencies (see here, for example).

Let us hope that the CMA acts boldly – carpe diem.  Such action, combined with other regulatory reforms, could contribute substantially to the economic success of Brexit (see here).

The Federal Trade Commission’s (FTC) regrettable January 17 filing of a federal court injunctive action against Qualcomm, in the waning days of the Obama Administration, is a blow to the agency’s institutional integrity and to its well-earned reputation as a top-notch competition agency.

Stripping away the semantic gloss, the heart of the FTC’s complaint is that Qualcomm is charging smartphone makers “too much” for licenses needed to practice standardized cellular communications technologies – technologies that Qualcomm developed. This complaint flies in the face of the Supreme Court’s teaching in Verizon v. Trinko that a monopolist has every right to charge monopoly prices and thereby enjoy the full fruits of its legitimately obtained monopoly. But the Qualcomm action is more than one exceptionally ill-advised example of prosecutorial overreach that (hopefully) will fail and end up on the scrapheap of unsound federal antitrust initiatives. The complaint will undoubtedly be cited by aggressive foreign competition authorities as evidence that American antitrust enforcement now recognizes mere “excessive pricing” as a form of “monopoly abuse” – thereby justifying the “excessive pricing” cases that are growing like Topsy abroad, especially in East Asia.

Particularly unfortunate is the fact that the Commission chose to authorize the filing by a 2-1 vote, over Commissioner Maureen Ohlhausen’s pithy dissent – itself a rarity in cases involving the filing of federal lawsuits. Commissioner Ohlhausen’s analysis skewers the legal and economic basis for the FTC’s complaint, and her summary, which includes an outstanding statement of basic antitrust enforcement principles, is well worth quoting (footnote omitted):

My practice is not to write dissenting statements when the Commission, against my vote, authorizes litigation. That policy reflects several principles. It preserves the integrity of the agency’s mission, recognizes that reasonable minds can differ, and supports the FTC’s staff, who litigate demanding cases for consumers’ benefit. On the rare occasion when I do write, it has been to avoid implying that I disagree with the complaint’s theory of liability.

I do not depart from that policy lightly. Yet, in the Commission’s 2-1 decision to sue Qualcomm, I face an extraordinary situation: an enforcement action based on a flawed legal theory (including a standalone Section 5 count) that lacks economic and evidentiary support, that was brought on the eve of a new presidential administration, and that, by its mere issuance, will undermine U.S. intellectual property rights in Asia and worldwide. These extreme circumstances compel me to voice my objections.

Let us hope that President Trump makes it an early and high priority to name Commissioner Ohlhausen Acting Chairman of the FTC. The FTC simply cannot afford any more embarrassing and ill-reasoned antitrust initiatives that undermine basic principles of American antitrust enforcement and may be used by foreign competition authorities to justify unwarranted actions against American firms. Maureen Ohlhausen can be counted upon to provide needed leadership in moving the Commission in a sounder direction.

P.S. I have previously published a commentary at this site regarding an unwarranted competition law Statement of Objections directed at Google by the European Commission, a matter that did not involve patent licensing. And for a more general critique of European competition policy along these lines, see here.

Thanks to Geoff for the introduction. I look forward to posting a few things over the summer.

I’d like to begin by discussing Geoff’s post on the pending legislative proposals designed to combat strategic abuse of drug safety regulations to prevent generic competition. Specifically, I’d like to address the economic incentive structure at work in this highly regulated market.

Like many others, I first noticed the abuse of drug safety regulations to prevent competition when Turing Pharmaceuticals—then led by its now-infamous CEO Martin Shkreli—acquired the manufacturing rights for the anti-parasitic drug Daraprim and raised the drug’s price by over 5,000%, to $750 per tablet. Daraprim (pyrimethamine) is used to combat malaria and Toxoplasma gondii infections in immune-compromised patients, especially those with HIV. The World Health Organization includes Daraprim on its “List of Essential Medicines” as a medicine important to basic health systems. After the huge price hike, the drug was effectively out of reach for many insurance plans and for uninsured patients who needed it for the six-to-eight-week course of treatment for Toxoplasma gondii infections.
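
For what it’s worth, the “over 5,000%” figure is easy to verify arithmetically. The sketch below uses the widely reported pre-hike price of $13.50 per tablet, a number that does not appear in this post and should be treated as an outside assumption:

```python
# Quick arithmetic check of the "over 5,000%" price-hike figure.
# The $13.50 pre-hike price is the widely reported number, supplied
# here as an assumption; the $750 post-hike price is from the post.
old_price = 13.50
new_price = 750.00

pct_increase = (new_price - old_price) / old_price * 100
print(f"{pct_increase:.0f}% increase")  # ~5456% -- indeed over 5,000%
```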

It’s not unusual for drugs to sell at huge multiples of their manufacturing cost. Indeed, a primary purpose of patent law is to allow drug companies to earn profits sufficient to support the expensive and risky business of developing new drugs. But Daraprim was first sold in 1953 and has thus been off patent for decades. With no intellectual property protection, Daraprim should, in theory, now be available from generic drug manufacturers for only a little above cost. And that is exactly what we see in the rest of the world, where Daraprim sells for very low prices: 3 rupees (US$0.04) per tablet in India, R$0.07 (US$0.02) in Brazil, US$0.18 in Australia, and US$0.66 in the UK.

So what gives in the U.S.? Or rather, what does not give? What in our system of drug distribution has gotten stuck, preventing generic competition from swooping in to compete down the high prices of off-patent drugs like Daraprim? The answer is not market failure but regulatory failure, as Geoff noted in his post. While generics would love to enter a market where a drug is selling at high profits, they cannot do so without FDA approval of their generic version of the drug at issue. To get approval, a generic simply has to file an Abbreviated New Drug Application (“ANDA”) showing that its drug is bioequivalent to the branded drug with which it wants to compete. There’s no need for the generic to repeat the safety and efficacy tests that the brand manufacturer originally conducted. But to test for equivalence, the generic needs samples of the brand drug; without those samples, it cannot meet its burden of showing equivalence. This is where the strategic use of regulation comes into play.

Geoff’s post explains the potential abuse of Risk Evaluation and Mitigation Strategies (“REMS”). REMS are put in place to require certain safety steps (like testing a woman for pregnancy before prescribing a drug that can cause birth defects) or to restrict the distribution channels for dangerous or addictive drugs. As Geoff points out, there is evidence that a few brand name manufacturers have engaged in bad-faith refusals to provide samples, using the excuse of REMS or restricted distribution programs to (1) deny requests for samples, (2) prevent generic manufacturers from buying samples from resellers, and (3) deny approved generics access to the shared REMS systems they need in order to distribute their drugs. Once the FDA has certified that a generic manufacturer can safely handle the drug at issue, there is no legitimate basis for the owners of brand name drugs to deny it samples. Expressed worries about liability from entering joint REMS programs with generics also ring mostly hollow, and would in any event be ameliorated by the pending legislation.

It’s important to note that this pricing situation is unique to drugs because of the regulatory framework surrounding drug manufacture and distribution. If the manufacturer of, say, an off-patent vacuum cleaner wants to prevent competitors from copying its design, it is unlikely to succeed. Even if the original manufacturer refuses to sell any vacuum cleaners to a competitor, and instructs its retailers not to sell to competitors either, such restrictions will be very difficult to monitor and enforce. Moreover, because of an unrestricted resale market, a competitor will inevitably be able to obtain samples of the vacuum cleaner it wishes to copy. Only patent law can successfully prevent the copying of a product sold to the general public, and when the patent expires, so too does the ability to prevent copying.

Drugs are different. The only way a consumer can resell prescription drugs is by breaking the law. Pills bought from an illegal secondary market would be useless to generics for purposes of FDA approval anyway, because the chain of custody would not exist to prove that the samples are the real thing. This means generics need to get samples from the authorized manufacturer or distribution company. When a drug is subject to a REMS-required restricted distribution program, it is even more difficult, if not impossible, for a generic maker to get samples of the drugs for which it wants to make generic versions. Restricted distribution programs, which are used for dangerous or addictive drugs, by design very tightly control the chain of distribution so that the drugs go only to patients with proper prescriptions from authorized doctors.

A troubling trend has arisen recently in which drug owners put their branded drugs into restricted distribution programs not because of any FDA REMS requirement, but as a method of preventing generics from obtaining samples and making generic versions of the drugs. This is the strategy Turing used before it raised the price of Daraprim by over 5,000%. And Turing isn’t the only company to use this strategy, though others have been less conspicuous. For instance, in 2015 Valeant Pharmaceuticals (which, backed by the hedge fund Pershing Square, had recently mounted an unsuccessful hostile bid for Allergan Pharmaceuticals) acquired the rights to two life-saving, off-patent heart drugs, placed them into restricted distribution programs, and raised their prices by 212% and 525% respectively. Others have followed suit.

A key component of the strategy to profit from hiking prices on off-patent drugs while avoiding competition from generics is to select drugs that do not currently have generic competitors. Sometimes this is because a drug has recently come off patent, and sometimes it is because the drug is for a small patient population, and thus generics haven’t bothered to enter the market given that brand name manufacturers generally drop their prices to close to cost after the drug comes off patent. But with the strategic control of samples and refusals to allow generics to enter REMS programs, the (often new) owners of the brand name drugs seek to prevent the generic competition that we count on to make products cheap and plentiful once their patent protection expires.

Most brand name drug makers do not engage in withholding samples from generics and abusing restricted distribution and REMS programs. But the few that do cost patients and insurers dearly for important medicines that should be much cheaper once they go off patent. More troubling still is the recent strategy of taking drugs that have been off patent and cheap for years, and abusing the regulatory regime to raise prices and block competition. This growing trend of abusing restricted distribution and REMS to facilitate rent extraction from drug purchasers needs to be corrected.

Two bills addressing this issue are pending in Congress. Both bills (1) require drug companies to provide samples to generics after the FDA has certified the generic, (2) require drug companies to enter into shared REMS programs with generics, (3) allow generics to set up their own REMS-compliant systems, and (4) exempt drug companies from liability for sharing products and REMS-compliant systems with generic companies in accordance with the steps set out in the bills. When it comes to remedies, however, the Senate version is significantly better. The penalties provided in the House bill are both vague and overly broad: the bill provides for treble damages and costs against the drug company “of the kind described in section 4(a) of the Clayton Act.” Not only is the application of the Clayton Act unclear in the context of the heavily regulated drug market (see Trinko), but treble damages may over-deter reasonably restrictive behavior by drug companies when it comes to distributing dangerous drugs.

The remedies in the Senate version are much better crafted: they deter rent-seeking behavior without overly deterring reasonable behavior. The scheme is particularly good because it punishes most severely those companies that attempt to make exorbitant profits by denying generic entry. The Senate version provides, as a remedy for unreasonable delay, that the plaintiff shall be awarded attorneys’ fees, costs, and the defending drug company’s profits on the drug at issue during the period of unreasonable delay. This means that a brand name drug company that sells an old drug at a low price, and that delays sharing only out of honest concern about a particular generic company’s safety standards, will not face terribly high damages if its delay is found unreasonable. On the other hand, a company that sends the price of an off-patent drug soaring and then attempts to block generic entry will know that it can lose all of its rent-seeking profits, plus the victorious generic company’s attorneys’ fees. This vastly reduces the incentive for the brand name drug’s owner to raise prices and keep competitors out, and it correspondingly increases the incentive of a generic company to enter the market and–if it is unreasonably blocked–to file a civil action that would transfer the excess profits to the generic. It is a rather elegant fix to the regulatory gaming that has become an increasing problem in this area, and its balancing of interests and incentives should leave many members of Congress comfortable supporting the bill.
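
To see how self-calibrating the Senate remedy is, consider a minimal sketch of the disgorgement computation (hypothetical dollar figures throughout; the bill specifies the remedy’s structure, not these amounts):

```python
# Illustrative-only model of the Senate bill's remedy described above:
# the defendant disgorges its profits on the drug during the period of
# unreasonable delay, plus the generic's attorneys' fees and costs.
# All dollar figures are hypothetical assumptions.

def senate_exposure(monthly_profit, months_delayed, fees_and_costs):
    """Damages under the Senate scheme: profits earned on the drug
    during the unreasonable delay, plus fees and costs."""
    return monthly_profit * months_delayed + fees_and_costs

# An honest low-price seller that delayed out of genuine safety
# concern faces only modest exposure...
low_price_seller = senate_exposure(monthly_profit=1.0,
                                   months_delayed=12,
                                   fees_and_costs=5.0)

# ...while a firm that hiked an off-patent drug's price fifty-fold
# and stalled generic entry stands to lose essentially all its rents.
price_hiker = senate_exposure(monthly_profit=50.0,
                              months_delayed=12,
                              fees_and_costs=5.0)

print(f"low-price seller: {low_price_seller:.0f}")  # 17
print(f"price hiker:      {price_hiker:.0f}")       # 605
```

Because the penalty scales with the defendant’s own profits during the delay, the expected sanction automatically tracks the gains from rent seeking, which is precisely the deterrence property described above.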