Archives For anticompetitive market distortions

Last week the Senate Judiciary Committee held a hearing, Intellectual Property and the Price of Prescription Drugs: Balancing Innovation and Competition, that explored whether changes to the pharmaceutical patent process could help lower drug prices.  The committee’s goal was to evaluate various legislative proposals that might facilitate the entry of cheaper generic drugs, while also recognizing that strong patent rights for branded drugs are essential to incentivize drug innovation.  As Committee Chairman Lindsey Graham explained:

One thing you don’t want to do is kill the goose who laid the golden egg, which is pharmaceutical development. But you also don’t want to have a system that extends unnecessarily beyond the ability to get your money back and make a profit, a patent system that drives up costs for the average consumer.

Several proposals that were discussed at the hearing have the potential to encourage competition in the pharmaceutical industry and help rein in drug prices. Below, I discuss these proposals, plus a few additional reforms. I also point out some of the language in the current draft proposals that goes a bit too far and threatens the ability of drug makers to remain innovative.  

1. Prevent brand drug makers from blocking generic companies’ access to drug samples. Some brand drug makers have attempted to delay generic entry by restricting generics’ access to the drug samples necessary to conduct FDA-required bioequivalence studies. Some, for example, have limited the ability of pharmacies or wholesalers to sell samples to generic companies, or have abused the REMS (Risk Evaluation and Mitigation Strategy) program to refuse samples to generics under the guise of REMS safety requirements. The Creating and Restoring Equal Access To Equivalent Samples (CREATES) Act of 2019 would allow potential generic competitors to bring an action in federal court for both injunctive relief and damages when brand companies block access to drug samples. It also gives the FDA discretion to approve alternative REMS safety protocols for generic competitors that have been denied samples under the brand companies’ REMS protocol. Although the vast majority of brand drug companies do not engage in the delay tactics addressed by CREATES, the Act would prevent the handful that do from thwarting generic competition. Increased generic competition should, in turn, reduce drug prices.

2. Restrict abuses of FDA Citizen Petitions.  The citizen petition process was created as a way for individuals and community groups to flag legitimate concerns about drugs awaiting FDA approval.  However, critics claim that the process has been misused by some brand drug makers who file petitions about specific generic drugs in the hopes of delaying their approval and market entry.  Although FDA has indicated that citizen petitions rarely delay the approval of generic drugs, a few drug makers, such as Shire ViroPharma, have clearly abused the process and put unnecessary strain on FDA resources. The Stop The Overuse of Petitions and Get Affordable Medicines to Enter Soon (STOP GAMES) Act is intended to prevent such abuses.  The Act reinforces the ability of the FDA and FTC to crack down on petitions meant to lengthen the approval process for a generic competitor, which should deter abuses of the system that can occasionally delay generic entry.  However, lawmakers should make sure that adopted legislation doesn’t limit the ability of stakeholders (including drug makers that often know more about the safety of drugs than ordinary citizens) to raise serious concerns with the FDA.

3. Curtail anticompetitive pay-for-delay settlements.  The Hatch-Waxman Act incentivizes generic companies to challenge brand drug patents by granting the first successful generic challenger a period of marketing exclusivity. As with most litigation, many of these patent challenges end in settlement rather than trial.  The FTC and some courts have concluded that these settlements can be anticompetitive when the brand company agrees to pay the generic challenger in exchange for the generic company’s agreement to forestall the launch of its lower-priced drug. Settlements that result in a cash payment are a red flag for anticompetitive behavior, so pay-for-delay settlements have evolved to involve other forms of consideration instead.  As a result, the Preserve Access to Affordable Generics and Biosimilars Act aims to make an exchange of anything of value presumptively anticompetitive if the terms include a delay in research, development, manufacturing, or marketing of a generic drug. Deterring obvious pay-for-delay settlements will prevent delays to generic entry, making cheaper drugs available to patients as quickly as possible.

However, the Act’s rigid presumption that any exchange of value is anticompetitive may also prevent legitimate settlements that ultimately benefit consumers.  Brand drug makers should be allowed to compensate generic challengers to eliminate litigation risk and escape litigation expenses, and many settlements result in the generic drug coming to market before the expiration of the brand patent, and possibly earlier than if there had been prolonged litigation between the generic and brand company.  A rigid presumption of anticompetitive behavior will deter these settlements, thereby increasing expenses for all parties that choose to litigate and possibly dissuading generics from bringing patent challenges in the first place.  Indeed, the U.S. Supreme Court has declined to define these settlements as per se anticompetitive, and the FTC’s most recent agreement involving such settlements exempts several forms of exchanges of value.  Any adopted legislation should follow the FTC’s lead and recognize that some exchanges of value are pro-consumer and pro-competitive.

4. Restore the balance established by Hatch-Waxman between branded drug innovators and generic drug challengers.  I have previously discussed how an unbalanced inter partes review (IPR) process for challenging patents threatens to stifle drug innovation.  Moreover, current law allows generic challengers to file duplicative claims in both federal court and through the IPR process.  And because IPR proceedings do not have a standing requirement, the process has been exploited by entities that would never be granted standing in traditional patent litigation—hedge funds betting against a company by filing an IPR challenge in hopes of crashing the stock and profiting from the bet. The added expense to drug makers of defending both duplicative claims and claims brought by challengers that are exploiting the system increases litigation costs, which may be passed on to consumers in the form of higher prices.

The Hatch-Waxman Integrity Act (HWIA) is designed to restore the balance established by Hatch-Waxman between branded drug innovators and generic drug challengers. It requires generic challengers to choose either Hatch-Waxman litigation (which saves considerable costs by allowing generics to rely on the brand company’s safety and efficacy studies for FDA approval) or an IPR proceeding (which is faster and provides certain pro-challenger provisions). The HWIA would also eliminate the ability of hedge funds and similar entities to file IPR claims while shorting the stock.  By reducing duplicative litigation and the exploitation of the IPR process, the HWIA will reduce costs and strengthen innovation incentives for drug makers.  It will give patent owners clarity on the validity of their patents, which will spur new drug innovation and ensure that consumers continue to have access to life-improving drugs.

5. Curb illegal product hopping and patent thickets.  Two drug maker tactics currently garnering a lot of attention are so-called “product hopping” and “patent thickets.”  At its worst, product hopping involves a brand drug maker making minor changes to a drug nearing the end of its patent so that it gets a new patent on the slightly tweaked drug, and then withdrawing the original drug from the market so that patients shift to the newly patented drug and pharmacists can’t substitute a generic version of the original drug.  Similarly, at their worst, patent thickets involve brand drug makers obtaining a web of patents on a single drug to extend the life of their exclusivity and make it too costly for other drug makers to challenge all of the patents associated with a drug.  The proposed Affordable Prescriptions for Patients Act of 2019 is meant to stop these abuses of the patent system, which would facilitate generic entry and help to lower drug prices.

However, the Act goes too far by also capturing many legitimate activities in its definitions. For example, the bill defines as anticompetitive product hopping the selling of any improved version of a drug during a window that extends to a year after the launch of the first generic competitor.  Presently, to acquire a patent and FDA approval, the improved version of the drug must already be sufficiently different from, and more innovative than, the original drug, yet the Act would prevent the drug maker from selling such a product without satisfying a demanding three-pronged test before the FTC or a district court.  Similarly, the Act defines as an anticompetitive patent thicket any new patent filed on a drug in the same general family as the original patent, and this presumption can only be rebutted by providing extensive evidence and satisfying demanding standards before the FTC or a district court.  As a result, the Act deters innovation activity that is at all related to an initial patent and, in doing so, ignores the fact that most important drug innovation is incremental innovation based on previous inventions.  Thus, the proposal should be redrafted to capture truly anticompetitive product hopping and patent thicket activity, while exempting behavior that is critical for drug innovation.

Reforms that close loopholes in the current patent process should facilitate competition in the pharmaceutical industry and help to lower drug prices.  However, lawmakers need to be sure that they don’t restrict patent rights to the extent that they deter innovation; a significant body of research predicts that patients’ health outcomes would suffer as a result.

It might surprise some readers to learn that we think the Court’s decision today in Apple v. Pepper reaches — superficially — the correct result. But, we hasten to add, the Court’s reasoning (and, for that matter, the dissent’s) is completely wrongheaded. It would be an understatement to say that the Court reached the right result for the wrong reason; in fact, the Court’s analysis wasn’t even in the same universe as the correct reasoning.

Below we lay out our assessment, in a post drawn from an article forthcoming in the Nebraska Law Review.

Did the Court forget that, just last year, it decided Amex, the most significant U.S. antitrust case in ages?

What is most remarkable about the decision (and the dissent) is that neither mentions Ohio v. Amex, nor even the two-sided market context in which the transactions at issue take place.

If the decision in Apple v. Pepper had hewed to the precedent established by Ohio v. Amex, it would have started with the observation that the relevant market analysis for the provision of app services is an integrated one, in which the overall effect of Apple’s conduct on both app users and app developers must be evaluated. A crucial implication of the Amex decision is that participants on both sides of a transactional platform are part of the same relevant market, and the terms of their relationship to the platform are inextricably intertwined.

Under this conception of the market, it’s difficult to maintain that either side does not have standing to sue the platform for the terms of its overall pricing structure, whether the specific terms at issue apply directly to that side or not. Both end users and app developers are “direct” purchasers from Apple — of different products, but in a single, inextricably interrelated market. Both groups should have standing.

More controversially, the logic of Amex also dictates that both groups should be able to establish antitrust injury — harm to competition — by showing harm to either group, as long as it establishes the requisite interrelatedness of the two sides of the market.

We believe that the Court was correct to decide in Amex that effects falling on the “other” side of a tightly integrated, two-sided market from challenged conduct must be addressed by the plaintiff in making its prima facie case. But that outcome entails a market definition that places both sides of such a market in the same relevant market for antitrust analysis.

As a result, the Court’s holding in Amex should also have required a finding in Apple v. Pepper that an app user on one side of the platform who transacts with an app developer on the other side of the market, in a transaction made possible and directly intermediated by Apple’s App Store, should similarly be deemed in the same market for standing purposes.

Relative to a strict construction of the traditional baseline, requiring plaintiffs to address effects on both sides of the market imposes an additional burden on two-sided market plaintiffs, while extending standing to participants on both sides lessens that burden. Whether the net effect is more or fewer successful cases in two-sided markets is unclear, of course. But from the perspective of aligning evidentiary and substantive doctrine with economic reality, such an approach would be a clear improvement.

Critics accuse the Court of making antitrust cases against two-sided market platforms unwinnable, thanks to Amex’s requirement that a prima facie showing of anticompetitive effect include an assessment of the effects on both sides of a two-sided market and proof of a net anticompetitive outcome. The critics should have been chastened by a proper decision in Apple v. Pepper. As it is, the holding (although not the reasoning) still may serve to undermine their fears.

But critics should have recognized that a necessary corollary of Amex’s “expanded” market definition is that, relative to previous standing doctrine, a greater number of prospective parties should have standing to sue.

More important, the Court in Apple v. Pepper should have recognized this. Although nominally limited to the indirect purchaser doctrine, the case presented the Court with an opportunity to grapple with this logical implication of its Amex decision. It failed to do so.

On the merits, it looks like Apple should win. But, for much the same reason, the Respondents in Apple v. Pepper should have standing

This does not, of course, mean that either party should win on the merits. Indeed, on the merits of the case, the Petitioner in Apple v. Pepper appears to have the stronger argument, particularly in light of Amex, which (assuming the App Store is construed as some species of a two-sided “transaction” market) directs that Respondent has the burden of considering harms and efficiencies across both sides of the market.

At least on the basis of the limited facts as presented in the case thus far, Respondents have not remotely met their burden of proving anticompetitive effects in the relevant market.

The actual question presented in Apple v. Pepper concerns standing, not whether the plaintiffs have made out a viable case on the merits. Thus it may seem premature to consider aspects of the latter in addressing the former. But the structure of the market considered by the court should be consistent throughout its analysis.

Adjustments to standing in the context of two-sided markets must be made in concert with the nature of the substantive rule of reason analysis that will be performed in a case. The two doctrines are connected not only by the just demands for consistency, but by the error-cost framework of the overall analysis, which runs throughout the stages of an antitrust case.

Here, the two-sided markets approach in Amex properly understands that conduct by a platform has relevant effects on both sides of its interrelated two-sided market. But that stems from the actual economics of the platform; it is not merely a function of a judicial construct. It thus holds true at all stages of the analysis.

The implication for standing is that users on both sides of a two-sided platform may suffer similarly direct (or indirect) injury as a result of the platform’s conduct, regardless of the side to which that conduct is nominally addressed.

The consequence, then, of Amex’s understanding of the market is that more potential plaintiffs — specifically, plaintiffs on both sides of a two-sided market — may claim to suffer antitrust injury.

Why the myopic focus of the holding (and dissent) on Illinois Brick is improper: It’s about the market definition, stupid!

Moreover, because of the Amex understanding, the problem of analyzing the pass-through of damages at issue in Illinois Brick (with which the Court entirely occupies itself in Apple v. Pepper) is either mitigated or inevitable.

In other words, either the users on the different sides of a two-sided market suffer direct injury without pass-through under a proper definition of the relevant market, or else their interrelatedness is so strong that, complicated as it may be, the needs of substantive accuracy trump the administrative costs in sorting out the incidence of the costs, and courts cannot avoid them.

Illinois Brick’s indirect purchaser doctrine was designed for an environment in which the relationship between producers and consumers is mediated by a distributor in a direct, linear supply chain; it was not designed for platforms. Although the question presented in Apple v. Pepper is explicitly about whether the Illinois Brick “indirect purchaser” doctrine applies to the Apple App Store, that determination is contingent on the underlying product market definition (whether the product market is in fact well-specified by the parties and the court or not).

Particularly where intermediaries exist precisely to address transaction costs between “producers” and “consumers,” the platform services they provide may be central to the underlying claim in a way that the traditional direct/indirect filters — and their implied relevant markets — miss.

Further, the Illinois Brick doctrine was itself based not on the substantive necessity of cutting off liability evaluations at a particular level of distribution, but on administrability concerns. In particular, the Court was concerned with preventing duplicative recovery when there were many potential groups of plaintiffs, as well as preventing injustices that would occur if unknown groups of plaintiffs inadvertently failed to have their rights adequately adjudicated in absentia. It was also concerned with avoiding needlessly complicated damages calculations.

But, almost by definition, the tightly coupled nature of the two sides of a two-sided platform should mitigate the concerns about duplicative recovery and unknown parties. Moreover, much of the presumed complexity in damages calculations in a platform setting arises from the nature of the platform itself. Assessing and apportioning damages may be complicated, but such is the nature of complex commercial relationships — the same would be true, for example, of damages calculations between vertically integrated companies that transact simultaneously at multiple levels, or between cross-licensing patent holders/implementers. In fact, if anything, the judicial efficiency concerns in Illinois Brick point toward the increased importance of properly assessing the nature of the product or service of the platform in order to ensure that it accurately encompasses the entire relevant transaction.

Put differently, under a proper, more-accurate market definition, the “direct” and “indirect” labels don’t necessarily reflect either business or antitrust realities.

Where the Court in Apple v. Pepper really misses the boat is in its overly formalistic claim that the business model (and thus the product) underlying the complained-of conduct doesn’t matter:

[W]e fail to see why the form of the upstream arrangement between the manufacturer or supplier and the retailer should determine whether a monopolistic retailer can be sued by a downstream consumer who has purchased a good or service directly from the retailer and has paid a higher-than-competitive price because of the retailer’s unlawful monopolistic conduct.

But Amex held virtually the opposite:

Because “[l]egal presumptions that rest on formalistic distinctions rather than actual market realities are generally disfavored in antitrust law,” courts usually cannot properly apply the rule of reason without an accurate definition of the relevant market.

* * *

Price increases on one side of the platform likewise do not suggest anticompetitive effects without some evidence that they have increased the overall cost of the platform’s services. Thus, courts must include both sides of the platform—merchants and cardholders—when defining the credit-card market.

In the face of novel business conduct, novel business models, and novel economic circumstances, the degree of substantive certainty may be eroded, as may the reasonableness of the expectation that typical evidentiary burdens accurately reflect competitive harm. Modern technology — and particularly the platform business model endemic to many modern technology firms — presents a need for courts to adjust their doctrines in the face of such novel issues, even if doing so adds additional complexity to the analysis.

The unlearned market-definition lesson of the Eighth Circuit’s Campos v. Ticketmaster dissent

The Eighth Circuit’s Campos v. Ticketmaster case demonstrates the way market definition shapes the application of the indirect purchaser doctrine. Indeed, the dissent in that case looms large in the Ninth Circuit’s decision in Apple v. Pepper. [Full disclosure: One of us (Geoff) worked on the dissent in Campos v. Ticketmaster as a clerk to Eighth Circuit judge Morris S. Arnold.]

In Ticketmaster, the plaintiffs alleged that Ticketmaster abused its monopoly in ticket distribution services to force supracompetitive charges on concert venues — a practice that led to anticompetitive prices for concert tickets. Although not prosecuted as a two-sided market, the business model is strikingly similar to the App Store model, with Ticketmaster charging fees to venues and then facilitating ticket purchases between venues and concert-goers.

As the dissent noted, however:

The monopoly product at issue in this case is ticket distribution services, not tickets.

Ticketmaster supplies the product directly to concert-goers; it does not supply it first to venue operators who in turn supply it to concert-goers. It is immaterial that Ticketmaster would not be supplying the service but for its antecedent agreement with the venues.

But it is quite relevant that the antecedent agreement was not one in which the venues bought some product from Ticketmaster in order to resell it to concert-goers.

More important, and more telling, is the fact that the entirety of the monopoly overcharge, if any, is borne by concert-goers.

In contrast to the situations described in Illinois Brick and the literature that the court cites, the venues do not pay the alleged monopoly overcharge — in fact, they receive a portion of that overcharge from Ticketmaster. (Emphasis added).

Thus, if there was a monopoly overcharge, it was borne entirely by concert-goers. As a result, apportionment — the complexity of which gives rise to the standard in Illinois Brick — was not a significant issue. And the antecedent transaction that allegedly put concert-goers in an indirect relationship with Ticketmaster was one in which Ticketmaster and concert venues divvied up the alleged monopoly spoils, not one in which the venues absorbed their share of the monopoly overcharge.

The analogy to Apple v. Pepper is nearly perfect. Apple sits between developers on one side and consumers on the other, charges a fee to developers for app distribution services, and facilitates app sales between developers and users. It is possible to try to twist the market definition exercise to construe the separate contracts between developers and Apple on one hand, and developers and consumers on the other, as some sort of complicated version of the classical manufacturing and distribution chains. But it is more sensible to inquire into the relevant factual differences that underpin Apple’s business model and to adapt how courts consider market definition for two-sided platforms.

Indeed, Hanover Shoe and Illinois Brick were born out of a particular business reality in which businesses structured themselves in what are now classical production and distribution chains. The Supreme Court adopted the indirect purchaser rule as a prudential limitation on antitrust law in order to optimize the judicial oversight of such cases. It seems strangely nostalgic to reflexively try to fit new business methods into old legal analyses, when prudence and reality dictate otherwise.

The dissent in Ticketmaster was ahead of its time insofar as it recognized that the majority’s formal description of the ticket market was an artifact of viewing what was actually something much more like a ticket-services platform operated by Ticketmaster through the poor lens of the categories established decades earlier.

The Ticketmaster dissent’s observations demonstrate that market definition and antitrust standing are interrelated. It makes no sense to adhere to a restrictive reading of the latter if it connotes an economically improper understanding of the former. Ticketmaster provided an intermediary service — perhaps not quite a two-sided market, but something close — that stands outside a traditional manufacturing supply chain. Had it been offered by the venues themselves and bundled into the price of concert tickets there would be no question of injury and of standing (nor would market definition matter much, as both tickets and distribution services would be offered as a joint product by the same parties, in fixed proportions).

What antitrust standing doctrine should look like after Amex

There are some clear implications for antitrust doctrine that (should) follow from the preceding discussion.

At the pleading stage, a plaintiff may allege that a defendant operates either as a two-sided market or in a more traditional, linear chain. If the plaintiff alleges a two-sided market, then, to demonstrate standing, it need only show that injury occurred to some subset of platform users with which it is inextricably interrelated. The plaintiff would not need to demonstrate injury to itself, allege net harm, or show directness.

In response, a defendant can contest standing by challenging the interrelatedness of the plaintiff and the group of platform users with whom the plaintiff claims interrelatedness. If the defendant does not challenge the allegation that it operates a two-sided market, it could not contest standing by showing indirectness, by arguing that the plaintiff has not alleged personal injury, or by arguing that the plaintiff has not alleged net harm.

Once past a determination of standing, however, a plaintiff who pleads a two-sided market would not be able to later withdraw this allegation in order to lessen the attendant legal burdens.

If the court accepts that the defendant is operating a two-sided market, both parties would be required to frame their allegations and defenses in accordance with the nature of the two-sided market and thus the holding in Amex. This is critical because, whereas alleging a two-sided market may make it easier for plaintiffs to demonstrate standing, Amex’s requirement that net harm be demonstrated across interrelated sets of users makes it more difficult for plaintiffs to present a viable prima facie case. Further, defendants would not be barred from presenting efficiencies defenses based on benefits that interrelated users enjoy.

Conclusion: The Court in Apple v. Pepper should have acknowledged the implications of its holding in Amex

After Amex, claims against two-sided platforms might require more evidence to establish anticompetitive harm, but that business model also means that such firms open themselves up to a larger pool of potential plaintiffs. The legal principles still apply, but the relative importance of those principles to judicial outcomes shifts (or should shift) in line with the unique economic position of potential plaintiffs and defendants in a platform environment.

Whether a priori the net result is more or fewer cases and more or fewer victories for plaintiffs is not the issue; what matters is matching the legal and economic theory to the relevant facts in play. Moreover, decrying Amex as the end of antitrust was premature: the actual effect on injured parties can’t be known until other changes (like standing for a greater number of plaintiffs) are factored into the analysis. The Court’s holding in Apple v. Pepper sidesteps this issue entirely, and thus fails to properly move antitrust doctrine forward in line with its holding in Amex.

Of course, it’s entirely possible that platforms and courts might be inundated with expensive and difficult-to-manage lawsuits. There may be reasons of administrability for limiting standing (as Illinois Brick perhaps prematurely did for fear of the costs of courts’ managing suits). But then that should have been the focus of the Court’s decision.

Allowing standing in Apple v. Pepper permits exactly the kind of legal experimentation needed to enable the evolution of antitrust doctrine along with new business realities. But in some ways the Court reached the worst possible outcome. It announced a rule that permits more plaintiffs to establish standing, but it did not direct lower courts to assess standing within the proper analytical frame. Instead, it just expands standing in a manner unmoored from the economic — and, indeed, judicial — context. That’s not a recipe for the successful evolution of antitrust doctrine.

The German Bundeskartellamt’s Facebook decision is unsound from either a competition or privacy policy perspective, and will only make the fraught privacy/antitrust relationship worse.


Drug makers recently announced their 2019 price increases on over 250 prescription drugs. As examples, AbbVie Inc. increased the price of the world’s top-selling drug Humira by 6.2 percent, and Hikma Pharmaceuticals increased the price of blood-pressure medication Enalaprilat by more than 30 percent. Allergan reported an average increase across its portfolio of drugs of 3.5 percent; although the drug maker is keeping most of its prices the same, it raised the prices on 27 drugs by 9.5 percent and on another 24 drugs by 4.9 percent. Other large drug makers, such as Novartis and Pfizer, will announce increases later this month.

So far, the number of price increases is significantly lower than last year, when drug makers increased prices on more than 400 drugs.  Moreover, for the drugs whose prices did increase, the average price increase of 6.3 percent is only about half of the average increase in 2018. Nevertheless, some commentators have expressed indignation, and President Trump this week summoned advisors to the White House to discuss the increases.  However, commentators and the administration should keep in mind what the price increases actually mean and how many players share responsibility for rising drug prices.

First, it is critical to emphasize the difference between drug list prices and net prices.  The drug makers recently announced increases in the list, or “sticker” prices, for many drugs.  However, the list price is usually very different from the net price that most consumers and/or their health plans actually pay, which depends on negotiated discounts and rebates.  For example, whereas drug list prices increased by an average of 6.9 percent in 2017, net drug prices after discounts and rebates increased by only 1.9 percent. The differential between the growth in list prices and net prices has persisted for years.  In 2016 list prices increased by 9 percent but net prices increased by 3.2 percent; in 2015 list prices increased by 11.9 percent but net prices increased by 2.4 percent, and in 2014 list price increases peaked at 13.5 percent but net prices increased by only 4.3 percent.

For 2019, the list price increases for many drugs will actually translate into very small increases in the net prices that consumers actually pay.  In fact, drug maker Allergan has indicated that, despite its increase in list prices, the net prices that patients actually pay will remain about the same as last year.

One might wonder why drug makers would bother to increase list prices if there’s little to no change in net prices.  First, at least 40 percent of the American prescription drug market is subject to some form of federal price control.  As I’ve previously explained, because these federal price controls generally require percentage rebates off of average drug prices, drug makers have the incentive to set list prices higher in order to offset the mandated discounts that determine what patients pay.

Further, as I discuss in a recent Article, the rebate arrangements between drug makers and pharmacy benefit managers (PBMs) under many commercial health plans create strong incentives for drug makers to increase list prices. PBMs negotiate rebates from drug manufacturers in exchange for giving the manufacturers’ drugs preferred status on a health plan’s formulary.  However, because the rebates paid to PBMs are typically a percentage of a drug’s list price, drug makers are compelled to increase list prices in order to satisfy PBMs’ demands for higher rebates. Drug makers assert that they are pressured to increase drug list prices out of fear that, if they do not, PBMs will retaliate by dropping their drugs from the formularies. The value of rebates paid to PBMs has doubled since 2012, with drug makers now paying $150 billion annually.  These rebates have grown so large that, today, the drug makers that actually invest in drug innovation and bear the risk of drug failures receive only 39 percent of the total spending on drugs, while 42 percent of the spending goes to these pharmaceutical middlemen.

Although a portion of the increasing rebate dollars may eventually find its way to patients in the form of lower co-pays, many patients still suffer from the list price increases.  The 29 million Americans without drug plan coverage pay more for their medications when list prices increase. Even patients with insurance typically have cost-sharing obligations that require them to pay 30 to 40 percent of list prices.  Moreover, insured patients within the deductible phase of their drug plan pay the entire higher list price until they meet their deductible.  Higher list prices jeopardize patients’ health as well as their finances; as out-of-pocket costs for drugs increase, patients are less likely to adhere to their medication routine and more likely to abandon their drug regimen altogether.
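To make the incentive concrete, here is a minimal numerical sketch. The prices, rebate percentages, and the 30 percent coinsurance figure are hypothetical round numbers chosen purely for illustration (they are not drawn from the studies or filings discussed above); the point is only to show how a higher list price can fund a larger PBM rebate while the manufacturer’s net price stays roughly flat and the patient’s list-price-based cost sharing rises.

```python
# Hypothetical illustration of the list-price/rebate dynamic described above.
# All figures are invented for the example; none are drawn from the article.

def manufacturer_net(list_price: float, rebate_rate: float) -> float:
    """Net price the manufacturer keeps after paying a percentage rebate to the PBM."""
    return list_price * (1 - rebate_rate)

def patient_cost(list_price: float, coinsurance_rate: float) -> float:
    """Out-of-pocket cost for a patient whose cost sharing is a share of the LIST price."""
    return list_price * coinsurance_rate

COINSURANCE = 0.30  # assumed 30% coinsurance keyed to the list price

scenarios = [
    ("Year 1", 100.00, 0.20),  # list $100, 20% rebate: PBM gets $20, maker nets $80
    ("Year 2", 110.00, 0.27),  # list raised to fund a larger rebate ($29.70); maker nets $80.30
]

for label, list_price, rebate_rate in scenarios:
    print(
        f"{label}: list ${list_price:.2f}, "
        f"rebate ${list_price * rebate_rate:.2f}, "
        f"manufacturer net ${manufacturer_net(list_price, rebate_rate):.2f}, "
        f"patient pays ${patient_cost(list_price, COINSURANCE):.2f}"
    )

# The manufacturer's net price barely moves ($80.00 -> $80.30), the PBM's rebate
# grows by nearly 50% ($20.00 -> $29.70), and the patient's out-of-pocket cost
# rises 10% ($30.00 -> $33.00) because it is tied to the list price.
```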

Policymakers must realize that the current system of government price controls and distortive rebates creates perverse incentives for drug makers to continue increasing drug list prices. Pointing the finger at drug companies alone misdiagnoses the problem at hand.

By Pinar Akman, Professor of Law, University of Leeds*

The European Commission’s decision in Google Android cuts a fine line between punishing a company for its success and punishing a company for falling afoul of the rules of the game. Which side of the line it actually falls on cannot be fully understood until the Commission publishes its full decision. Much depends on the intricate facts of the case. As the full decision may take months to come, this post offers merely the author’s initial thoughts on the decision on the basis of the publicly available information.

The eye-watering fine of $5.1 billion — which together with the fine of $2.7 billion in the Google Shopping decision from last year would (according to one estimate) suffice to fund for almost one year the additional yearly public spending necessary to eradicate world hunger by 2030 — will not be further discussed in this post. This is because the fine is assumed to have been duly calculated on the basis of the Commission’s relevant Guidelines, and, from a legal and commercial point of view, the absolute size of the fine is not as important as the infringing conduct and the remedy Google will need to adopt to comply with the decision.

First things first. This post proceeds on the premise that the aim of competition law is to prevent the exclusion of competitors that are (at least) as efficient as the dominant incumbent, whose exclusion would ultimately harm consumers.

Next, it needs to be noted that the Google Android case is a more conventional antitrust case than Google Shopping in the sense that one can at least envisage a potentially robust antitrust theory of harm in the former case. If a dominant undertaking ties its products together to exclude effective competition in some of these markets or if it pays off customers to exclude access by its efficient competitors to consumers, competition law intervention may be justified.

The central question in Google Android is whether on the available facts this appears to have happened.

What we know and market definition

The premise of the case is that Google used its dominance in the Google Play Store (which enables users to download apps onto their Android phones) to “cement Google’s dominant position in general internet search.”

It is interesting that the case appears to concern a dominant undertaking leveraging its dominance from a market in which it is dominant (Google Play Store) into another market in which it is also dominant (internet search). As far as this author is aware, most (if not all?) tying cases in the EU to date have concerned a dominant undertaking leveraging its dominance in one market to distort or eliminate competition in an otherwise competitive market.

Thus, for example, in Microsoft (Windows Operating System —> media players), Hilti (patented cartridge strips —> nails), and Tetra Pak II (packaging machines —> non-aseptic cartons), the tied market was actually or potentially competitive, and this was why the tying was alleged to have eliminated competition. It will be interesting to see which case the Commission uses as precedent in its decision — more on that later.

Also noteworthy is that the Commission does not appear to have defined a separate mobile search market that would have been competitive but for Google’s alleged leveraging. The market has been defined as the general internet search market. So, according to the Commission, the Google Search App and Google Search engine appear to be one and the same thing, and desktop and mobile devices are equivalent (or substitutable).

Finding mobile and desktop devices to be equivalent to one another may have implications for other cases including the ongoing appeal in Google Shopping where, for example, the Commission found that “[m]obile [apps] are not a viable alternative for replacing generic search traffic from Google’s general search results pages” for comparison shopping services. The argument that mobile apps and mobile traffic are fundamental in Google Android but trivial in Google Shopping may not play out favourably for the Commission before the Court of Justice of the EU.

Another interesting market definition point is that the Commission has found Apple not to be a competitor to Google in the relevant market defined by the Commission: the market for “licensable smart mobile operating systems.” Apple does not fall within that market because Apple does not license its mobile operating system to anyone: Apple’s model eliminates all possibility of competition from the start and is by definition exclusive.

Although there is some internal logic in the Commission’s exclusion of Apple from the upstream market that it has defined, is this not a bit of a definitional stop? How can Apple compete with Google in the market as defined by the Commission when Apple allows only itself to use its operating system, and only on devices that Apple itself manufactures?

To be fair, the Commission does consider there to be some competition between Apple and Android devices at the level of consumers — just not sufficient to constrain Google at the upstream, manufacturer level.

Nevertheless, the implication of the Commission’s assessment that separates the upstream and downstream in this way is akin to saying that the world’s two largest corn producers that produce the corn used to make corn flakes do not compete with one another in the market for corn flakes because one of them uses its corn exclusively in its own-brand cereal.

Although the Commission cabins the use of supply-side substitutability in market definition, its own guidance on the topic notes that

Supply-side substitutability may also be taken into account when defining markets in those situations in which its effects are equivalent to those of demand substitution in terms of effectiveness and immediacy. This means that suppliers are able to switch production to the relevant products and market them in the short term….

Apple could — presumably — rather immediately and at minimal cost produce and market a version of iOS for use on third-party device makers’ devices. By the Commission’s own definition, it would seem to make sense to include Apple in the relevant market. Nevertheless, it has apparently not done so here.

The message that the Commission sends with the finding is that if Android had not been open source and freely available, and if Google competed with Apple with its own version of a walled-garden built around exclusivity, it is possible that none of its practices would have raised any concerns. Or, should Apple be expecting a Statement of Objections next from the EU Commission?

Is Microsoft really the relevant precedent?

Given that Google Android appears to revolve around the idea of tying and leveraging, the EU Commission’s infringement decision against Microsoft, which found an abusive tie in Microsoft’s tying of Windows Operating System with Windows Media Player, appears to be the most obvious precedent, at least for the tying part of the case.

There are, however, potentially important factual differences between the two cases. To take just a few examples:

  • Microsoft charged for the Windows Operating System, whereas Google does not;
  • Microsoft tied the setting of Windows Media Player as the default to OEMs’ licensing of the operating system (Windows), whereas Google ties the setting of Search as the default to device makers’ use of other Google apps, while allowing them to use the operating system (Android) without any Google apps; and
  • In Microsoft’s case, downloading competing media players was difficult due to slow download speeds and a lack of user familiarity, whereas it is trivial and commonplace for users to download apps that compete with Google’s.

Moreover, there are also some conceptual hurdles in finding the conduct to be that of tying.

First, the difference between “pre-installed,” “default,” and “exclusive” matters a lot in establishing whether effective competition has been foreclosed. The Commission’s Press Release notes that to pre-install Google Play, manufacturers have to also pre-install Google Search App and Google Chrome. It also states that Google Search is the default search engine on Google Chrome. The Press Release does not indicate that Google Search App has to be the exclusive or default search app. (It is worth noting, however, that the Statement of Objections in Google Android did allege that Google violated EU competition rules by requiring Search to be installed as the default. We will have to await the decision itself to see if this was dropped from the case or simply not mentioned in the Press Release).

Indeed, the fact that the other infringement found is Google’s making payments to manufacturers in return for exclusively pre-installing the Google Search App indirectly suggests that not every manufacturer pre-installs the Google Search App as the exclusive, pre-installed search app. This means that any other search app (provider) can also (request to) be pre-installed on these devices. The same goes for the browser app.

Of course, regardless, even if the manufacturer does not pre-install competing apps, consumers are free to download any other app — for search or browsing — as they wish, and can do so in seconds.

In short, pre-installation on its own does not necessarily foreclose competition, and thus may not constitute an illegal tie under EU competition law. This is particularly so when download speeds are fast (unlike the case at the time of Microsoft) and consumers regularly do download numerous apps.

What may, however, potentially foreclose effective competition is where a dominant undertaking makes payments to stop its customers, as a practical matter, from selling its rivals’ products. Intel, for example, was found to have abused its dominant position through payments to a computer retailer in return for its not selling computers with its competitor AMD’s chips, and to computer manufacturers in return for delaying the launch of computers with AMD chips.

In Google Android, the exclusivity provision that would require manufacturers to pre-install Google Search App exclusively in return for financial incentives may be deemed to be similar to this.

Having said that, unlike in Intel, where a given computer can have a CPU from only one manufacturer, even the exclusive pre-installation of the Google Search App would not have prevented consumers from downloading competing apps. So, again, in theory effective competition from other search apps need not have been foreclosed.

It must also be noted that just because a Google app is pre-installed does not mean that it generates any revenue for Google — consumers have to actually choose to use that app, as opposed to another one that they might prefer, in order for Google to earn any revenue from it. The Commission seems to place substantial weight on pre-installation, which it alleges creates “a status quo bias.”

The concern with this approach is that it is not possible to know whether those consumers who do not download competing apps do so out of a preference for Google’s apps or, instead, for other reasons that might indicate competition not to be working. Indeed, one hurdle as regards conceptualising the infringement as tying is that it would require establishing that a significant number of phone users would actually prefer to use Google Play Store (the tying product) without Google Search App (the tied product).

This is because, according to the Commission’s Guidance Paper, establishing tying starts with identifying two distinct products, and

[t]wo products are distinct if, in the absence of tying or bundling, a substantial number of customers would purchase or would have purchased the tying product without also buying the tied product from the same supplier.

Thus, if a substantial number of customers would not want to use Google Play Store without also preferring to use Google Search App, this would cause a conceptual problem for making out a tying claim.

In fact, the conduct at issue in Google Android may be closer to a refusal to supply type of abuse.

Refusal to supply also seems to make more sense regarding the prevention of the development of Android forks being found to be an abuse. In this context, it will be interesting to see how the Commission overcomes the argument that Android forks can be developed freely and Google may have legitimate business reasons in wanting to associate its own, proprietary apps only with a certain, standardised-quality version of the operating system.

More importantly, the possible underlying theory in this part of the case is that the Google apps — and perhaps even the licensed version of Android — are a “must-have,” which is close to an argument that they are an essential facility in the context of Android phones. But that would indeed require a refusal to supply type of abuse to be established, which does not appear to be the case.

What will happen next?

To answer the question raised in the title of this post — whether the Google Android decision will benefit consumers — one needs to consider what Google may do in order to terminate the infringing conduct as required by the Commission, whilst also still generating revenue from Android.

This is because unbundling Google Play Store, Google Search App and Google Chrome (to allow manufacturers to pre-install Google Play Store without the latter two) will disrupt Google’s main revenue stream (i.e., ad revenue generated through the use of the Google Search App or Google Search within the Chrome app), which funds the free operating system. This could lead Google to start charging for the operating system, and to limit to whom it licenses the operating system under the Commission’s required, less-restrictive terms.

As the Commission does not seem to think that Apple constrains Google when it comes to dealings with device manufacturers, in theory, Google should be able to charge up to the monopoly level licensing fee to device manufacturers. If that happens, the price of Android smartphones may go up. It is possible that there is a new competitor lurking in the woods that will grow and constrain that exercise of market power, but how this will all play out for consumers — as well as app developers who may face increasing costs due to the forking of Android — really remains to be seen.

 

* Pinar Akman is Professor of Law, Director of Centre for Business Law and Practice, University of Leeds, UK. This piece has not been commissioned or funded by any entity. The author has not been involved in the Google Android case in any capacity. In the past, the author wrote a piece on the Commission’s Google Shopping case, ‘The Theory of Abuse in Google Search: A Positive and Normative Assessment under EU Competition Law,’ supported by a research grant from Google. The author would like to thank Peter Whelan, Konstantinos Stylianou, and Geoffrey Manne for helpful comments. All errors remain her own. The author can be contacted here.

Today the European Commission launched its latest salvo against Google, issuing a decision in its three-year antitrust investigation into the company’s agreements for distribution of the Android mobile operating system. The massive fine levied by the Commission will dominate the headlines, but the underlying legal theory and proposed remedies are just as notable — and just as problematic.

The nirvana fallacy

It is sometimes said that the most important question in all of economics is “compared to what?” UCLA economist Harold Demsetz — one of the most important regulatory economists of the past century — coined the term “nirvana fallacy” to critique would-be regulators’ tendency to compare messy, real-world economic circumstances to idealized alternatives, and to justify policies on the basis of the discrepancy between them. Wishful thinking, in other words.

The Commission’s Android decision falls prey to the nirvana fallacy. It conjures a world in which Google offers its Android operating system on unrealistic terms, prohibits it from doing otherwise, and neglects the actual consequences of such a demand.

The idea at the core of the Commission’s decision is that by making its own services (especially Google Search and Google Play Store) easier to access than competing services on Android devices, Google has effectively foreclosed rivals from effective competition. In order to correct that claimed defect, the Commission demands that Google refrain from engaging in practices that favor its own products in its Android licensing agreements:

At a minimum, Google has to stop and to not re-engage in any of the three types of practices. The decision also requires Google to refrain from any measure that has the same or an equivalent object or effect as these practices.

The basic theory is straightforward enough, but its application here reflects a troubling departure from the underlying economics and a romanticized embrace of industrial policy that is unsupported by the realities of the market.

In a recent interview, European Commission competition chief, Margrethe Vestager, offered a revealing insight into her thinking about her oversight of digital platforms, and perhaps the economy in general: “My concern is more about whether we get the right choices,” she said. Asked about Facebook, for example, she specified exactly what she thinks the “right” choice looks like: “I would like to have a Facebook in which I pay a fee each month, but I would have no tracking and advertising and the full benefits of privacy.”

Some consumers may well be sympathetic with her preference (and even share her specific vision of what Facebook should offer them). But what if competition doesn’t result in our — or, more to the point, Margrethe Vestager’s — preferred outcomes? Should competition policy nevertheless enact the idiosyncratic consumer preferences of a particular regulator? What if offering consumers the “right” choices comes at the expense of other things they value, like innovation, product quality, or price? And, if so, can antitrust enforcers actually engineer a better world built around these preferences?

Android’s alleged foreclosure… that doesn’t really foreclose anything

The Commission’s primary concern is with the terms of Google’s deal: In exchange for royalty-free access to Android and a set of core, Android-specific applications and services (like Google Search and Google Maps), Google imposes a few contractual conditions.

Google allows manufacturers to use the Android platform — in which the company has invested (and continues to invest) billions of dollars — for free. It does not require device makers to include any of its core, Google-branded features. But if a manufacturer does decide to use any of them, it must include all of them, and make Google Search the device default. In another (much smaller) set of agreements, Google also offers device makers a small share of its revenue from Search if they agree to pre-install only Google Search on their devices (although users remain free to download and install any competing services they wish).

Essentially, that’s it. Google doesn’t allow device makers to pick and choose between parts of the ecosystem of Google products, free-riding on Google’s brand and investments. But manufacturers are free to use the Android platform and to develop their own competing brand built upon Google’s technology.

Other apps may be installed in addition to Google’s core apps. Google Search need not be the exclusive search service, but it must be offered out of the box as the default. Google Play and Chrome must be made available to users, but other app stores and browsers may be pre-installed and even offered as the default. And device makers who choose to do so may share in Search revenue by pre-installing Google Search exclusively — but users can and do install a different search service.

Alternatives to all of Google’s services (including Search) abound on the Android platform. It’s trivial both to install them and to set them as the default. Meanwhile, device makers regularly choose to offer these apps alongside Google’s services, and some, like Samsung, have developed entire customized app suites of their own. Still others, like Amazon, pre-install no Google apps and use Android without any of these constraints (and Amazon’s Google-free tablets are regularly ranked among the best-rated and most popular in Europe).

By contrast, Apple bundles its operating system with its devices, bypasses third-party device makers entirely, and offers consumers access to its operating system only if they pay (lavishly) for one of the very limited number of devices the company offers, as well. It is perhaps not surprising — although it is enlightening — that Apple earns more revenue in an average quarter from iPhone sales than Google is reported to have earned in total from Android since it began offering it in 2008.

Reality — and the limits it imposes on efforts to manufacture nirvana

The logic behind Google’s approach to Android is obvious: It is the extension of Google’s “advertisers pay” platform strategy to mobile. Rather than charging device makers (and thus consumers) directly for its services, Google earns its revenue by charging advertisers for targeted access to users via Search. Remove Search from mobile devices and you remove the mechanism by which Google gets paid.

It’s true that most device makers opt to offer Google’s suite of services to European users, and that most users opt to keep Google Search as the default on their devices — that is, indeed, the hoped-for effect, and necessary to ensure that Google earns a return on its investment.

That users often choose to keep using Google services instead of installing alternatives, and that device makers typically choose to engineer their products around the Google ecosystem, isn’t primarily the result of a Google-imposed mandate; it’s the result of consumer preferences for Google’s offerings in lieu of readily available alternatives.

The EU decision against Google appears to imagine a world in which Google will continue to develop Android and allow device makers to use the platform and Google’s services for free, even if the likelihood of recouping its investment is diminished.

The Commission also assessed in detail Google’s arguments that the tying of the Google Search app and Chrome browser were necessary, in particular to allow Google to monetise its investment in Android, and concluded that these arguments were not well founded. Google achieves billions of dollars in annual revenues with the Google Play Store alone, it collects a lot of data that is valuable to Google’s search and advertising business from Android devices, and it would still have benefitted from a significant stream of revenue from search advertising without the restrictions.

For the Commission, Google’s earned enough [trust me: you should follow the link. It’s my favorite joke…].

But that world in which Google won’t alter its investment decisions based on a government-mandated reduction in its allowable return on investment doesn’t exist; it’s a fanciful Nirvana.

Google’s real alternatives to the status quo are charging for the use of Android, closing the Android platform and distributing it (like Apple) only on a fully integrated basis, or discontinuing Android.

In reality, and compared to these actual alternatives, Google’s restrictions are trivial. Remember, Google doesn’t insist that Google Search be exclusive, only that it benefit from a “leg up” by being pre-installed as the default. And on this thin reed Google finances the development and maintenance of the (free) Android operating system and all of the other (free) apps from which Google otherwise earns little or no revenue.

It’s hard to see how consumers, device makers, or app developers would be made better off without Google’s restrictions in the real world, where the alternative is one of the three manifestly less desirable options mentioned above.

Missing the real competition for the trees

What’s more, while ostensibly aimed at increasing competition, the Commission’s proposed remedy — like the conduct it addresses — doesn’t relate to Google’s most significant competitors at all.

Facebook, Instagram, Firefox, Amazon, Spotify, Yelp, and Yahoo, among many others, are some of the most popular apps on Android phones, including in Europe. They aren’t foreclosed by Google’s Android distribution terms, and it’s even hard to imagine that they would be more popular if only Android phones didn’t come with, say, Google Search pre-installed.

It’s a strange anticompetitive story that has Google allegedly foreclosing insignificant competitors while apparently ignoring its most substantial threats.

The primary challenges Google now faces are from Facebook drawing away the most valuable advertising and Amazon drawing away the most valuable product searches (and increasingly advertising, as well). The fact that Google’s challenged conduct has never shifted in order to target these competitors as their threat emerged, and has had no apparent effect on these competitive dynamics, says all one needs to know about the merits of the Commission’s decision and the value of its proposed remedy.

In reality, as Demsetz suggested, Nirvana cannot be designed by politicians, especially in complex, modern technology markets. Consumers’ best hope for something close — continued innovation, low prices, and voluminous choice — lies in the evolution of markets spurred by consumer demand, not regulators’ efforts to engineer them.

As Thom previously posted, he and I have a new paper explaining The Case for Doing Nothing About Common Ownership of Small Stakes in Competing Firms. Our paper is a response to cries from the likes of Einer Elhauge and of Eric Posner, Fiona Scott Morton, and Glen Weyl, who have called for various types of antitrust action to rein in what they claim is an “economic blockbuster” and “the major new antitrust challenge of our time,” respectively. This is the first in a series of posts that will unpack some of the issues and arguments we raise in our paper.

At issue is the growth in the incidence of common ownership across firms within various industries. In particular, institutional investors with broad portfolios frequently report owning small stakes in a number of firms within a given industry. Although small, these stakes may still represent large block holdings relative to other investors. This intra-industry diversification, critics claim, changes the managerial objectives of corporate executives from aggressively competing to increase their own firm’s profits to tacitly colluding to increase industry-level profits instead. The reason for this change is that competition by one firm comes at a cost of profits from other firms in the industry. If investors own shares across firms, then any competitive gains in one firm’s stock are offset by competitive losses in the stocks of other firms in the investor’s portfolio. If one assumes corporate executives aim to maximize total value for their largest shareholders, then managers would have an incentive to soften competition against firms with which they share common ownership. Or so the story goes (more on that in a later post).

Elhauge and Posner, et al., draw their motivation for new antitrust offenses from a handful of papers that purport to establish an empirical link between the degree of common ownership among competing firms and various measures of softened competitive behavior, including airline prices, banking fees, executive compensation, and even corporate disclosure patterns. The paper of most note, by José Azar, Martin Schmalz, and Isabel Tecu and forthcoming in the Journal of Finance, claims to identify a causal link between the degree of common ownership among airlines competing on a given route and the fares charged for flights on that route.

Measuring common ownership with MHHI

Azar, et al.’s airline paper uses a measure called the Modified Herfindahl–Hirschman Index, or MHHI, which gauges industry concentration while taking into account investors’ cross-ownership of stakes in competing firms. The original Herfindahl–Hirschman Index (HHI) has long been used as a measure of industry concentration, debuting in the Department of Justice’s Horizontal Merger Guidelines in 1982. The HHI is calculated by squaring the market share of each firm in the industry and summing the resulting numbers.
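For a sense of scale (a hypothetical illustration, not a figure from any of the papers discussed): a market split evenly among four firms has an HHI of 25² + 25² + 25² + 25² = 2,500, while a pure monopoly takes the maximum value of 100² = 10,000.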

The MHHI is rather more complicated. MHHI is composed of two parts: the HHI measuring product market concentration and the MHHI_Delta measuring the additional concentration due to common ownership. We offer a step-by-step description of the calculations and their economic rationale in an appendix to our paper. For this post, I’ll try to distill that down. The MHHI_Delta essentially has three components, each of which is measured relative to every possible competitive pairing in the market as follows:

  1. A measure of the degree of common ownership between Company A and Company -A (Not A). This is calculated by multiplying the percentage of Company A shares owned by each Investor I by the percentage of shares Investor I owns in Company -A, then summing those values across all investors in Company A. As this value increases, MHHI_Delta goes up.
  2. A measure of the degree of ownership concentration in Company A, calculated by squaring the percentage of shares owned by each Investor I and summing those numbers across investors. As this value increases, MHHI_Delta goes down.
  3. A measure of the degree of product market power exerted by Company A and Company -A, calculated by multiplying the market shares of the two firms. As this value increases, MHHI_Delta goes up.

This process is repeated and aggregated first for every pairing of Company A and each competing Company -A, then repeated again for every other company in the market relative to its competitors (e.g., Companies B and -B, Companies C and -C, etc.). Mathematically, MHHI_Delta takes the form:
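(The display below uses the standard O’Brien & Salop proportional-control notation, which is what the three components above describe: s denotes market shares and β denotes investors’ ownership stakes.)

$$\mathrm{MHHI\_Delta} \;=\; \sum_{A}\;\sum_{-A \neq A} s_{A}\, s_{-A}\;\frac{\sum_{I} \beta_{I,A}\,\beta_{I,-A}}{\sum_{I} \beta_{I,A}^{2}}$$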

where the s terms represent the market shares of, and the β (Beta) terms represent the ownership shares of Investor I in, the respective companies A and -A.

As the relative concentration of cross-owning investors to all investors in Company A increases (i.e., the ratio on the right increases), managers are assumed to be more likely to soften competition with that competitor. As those two firms control more of the market, managers’ ability to tacitly collude and increase joint profits is assumed to be higher. Consequently, the empirical research assumes that as MHHI_Delta increases, we should observe less competitive behavior.
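To make the mechanics concrete, here is a minimal sketch in Python of the two calculations described above. The market shares, investors, and ownership stakes are invented for illustration and are not drawn from Azar, et al.’s data or any other paper discussed here.

```python
# A toy sketch (hypothetical numbers) of the HHI and MHHI_Delta calculations
# described above, under the usual proportional-control assumption.
# Market shares are in percentage points; ownership stakes are fractions of
# each firm's equity held by the investors we happen to observe.

def hhi(market_shares):
    """HHI: the sum of squared market shares."""
    return sum(s ** 2 for s in market_shares)

def mhhi_delta(market_shares, ownership):
    """MHHI_Delta: for every ordered pair of distinct firms (A, -A), add
    s_A * s_-A times the ratio of (i) the summed products of each investor's
    stakes in A and -A to (ii) the summed squared stakes in A."""
    n_firms = len(market_shares)
    n_investors = len(ownership)
    delta = 0.0
    for a in range(n_firms):
        own_concentration = sum(ownership[i][a] ** 2 for i in range(n_investors))
        if own_concentration == 0:
            continue  # no observed holdings in firm a
        for b in range(n_firms):
            if b == a:
                continue
            cross_holdings = sum(ownership[i][a] * ownership[i][b]
                                 for i in range(n_investors))
            delta += (market_shares[a] * market_shares[b]
                      * cross_holdings / own_concentration)
    return delta

# Two firms split a route 60/40. Investors 1 and 2 each hold 3% of both firms;
# investors 3 and 4 each hold 10% of one firm only.
shares = [60, 40]
ownership = [  # ownership[investor][firm]
    [0.03, 0.03],
    [0.03, 0.03],
    [0.10, 0.00],
    [0.00, 0.10],
]

print("HHI:", hhi(shares))                                      # 5200
print("MHHI_Delta:", round(mhhi_delta(shares, ownership), 1))   # about 732
```

In this toy example, the two 3-percent cross-holdings add roughly 730 points of MHHI_Delta on top of a product-market HHI of 5,200.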

And indeed that is the “blockbuster” evidence giving rise to Elhauge’s and Posner, et al.’s arguments. For example, Azar, et al., calculate HHI and MHHI_Delta for every US airline market (defined either as city-pairs or departure-destination pairs) for each quarter of the 14-year time period in their study. They then regress ticket prices for each route against the HHI and the MHHI_Delta for that route, controlling for a number of other potential factors. They find that airfare prices are 3% to 7% higher due to common ownership. Other papers using the same or similar measures of common ownership concentration have likewise identified positive correlations between MHHI_Delta and their respective measures of anti-competitive behavior.

Problems with the problem and with the measure

We argue that both the theoretical argument underlying the empirical research and the empirical research itself suffer from some serious flaws. On the theoretical side, we have two concerns. First, we argue that there is a tremendous leap of faith (if not logic) in the idea that corporate executives would forgo their own self-interest and the interests of the vast majority of shareholders and soften competition simply because a small number of small stakeholders are intra-industry diversified. Second, we argue that even if managers were so inclined, it clearly is not the case that softening competition would necessarily be desirable for institutional investors that are both intra- and inter-industry diversified, since supra-competitive pricing to increase profits in one industry would decrease profits in related industries that may also be in the investors’ portfolios.

On the empirical side, we have concerns both with the data used to calculate the MHHI_Deltas and with the nature of the MHHI_Delta itself. First, the data on institutional investors’ holdings are taken from Form 13F filings, which report aggregate holdings across all of an institutional investor’s funds. Using these data masks the actual incentives of the institutional investors with respect to investments in any individual company or industry. Second, the construction of the MHHI_Delta suffers from serious endogeneity concerns, both in investors’ shareholdings and in market shares. Finally, the MHHI_Delta, while seemingly intuitive, is an empirical unknown. While HHI is theoretically bounded in a way that lends itself to interpretation of its calculated value, the same is not true for MHHI_Delta. This makes any inference or policy based on nominal values of MHHI_Delta arbitrary at best.

We’ll expand on each of these concerns in upcoming posts. We will then take on the problems with the policy proposals being offered in response to the common ownership ‘problem.’


In a recent post at the (appallingly misnamed) ProMarket blog (the blog of the Stigler Center at the University of Chicago Booth School of Business — George Stigler is rolling in his grave…), Marshall Steinbaum keeps alive the hipster-antitrust assertion that lax antitrust enforcement — this time in the labor market — is to blame for… well, most? all? of what’s wrong with “the labor market and the broader macroeconomic conditions” in the country.

In this entry, Steinbaum takes particular aim at the US enforcement agencies, which he claims do not consider monopsony power in merger review (and other antitrust enforcement actions) because their current consumer welfare framework somehow doesn’t recognize monopsony as a possible harm.

This will probably come as news to the agencies themselves, whose Horizontal Merger Guidelines devote an entire (albeit brief) section (section 12) to monopsony, noting that:

Mergers of competing buyers can enhance market power on the buying side of the market, just as mergers of competing sellers can enhance market power on the selling side of the market. Buyer market power is sometimes called “monopsony power.”

* * *

Market power on the buying side of the market is not a significant concern if suppliers have numerous attractive outlets for their goods or services. However, when that is not the case, the Agencies may conclude that the merger of competing buyers is likely to lessen competition in a manner harmful to sellers.

Steinbaum fails to mention the HMGs, but he does point to a US submission to the OECD to make his point. In that document, the agencies state that

The U.S. Federal Trade Commission (“FTC”) and the Antitrust Division of the Department of Justice (“DOJ”) [] do not consider employment or other non-competition factors in their antitrust analysis. The antitrust agencies have learned that, while such considerations “may be appropriate policy objectives and worthy goals overall… integrating their consideration into a competition analysis… can lead to poor outcomes to the detriment of both businesses and consumers.” Instead, the antitrust agencies focus on ensuring robust competition that benefits consumers and leave other policies such as employment to other parts of government that may be specifically charged with or better placed to consider such objectives.

Steinbaum, of course, cites only the first sentence. And he uses it as a launching-off point to attack the notion that antitrust is an improper tool for labor market regulation. But if he had just read a little bit further in the (very short) document he cites, Steinbaum might have discovered that the US antitrust agencies have, in fact, challenged the exercise of collusive monopsony power in labor markets. As footnote 19 of the OECD submission notes:

Although employment is not a relevant policy goal in antitrust analysis, anticompetitive conduct affecting terms of employment can violate the Sherman Act. See, e.g., DOJ settlement with eBay Inc. that prevents the company from entering into or maintaining agreements with other companies that restrain employee recruiting or hiring; FTC settlement with ski equipment manufacturers settling charges that companies illegally agreed not to compete for one another’s ski endorsers or employees. (Emphasis added).

And, ironically, while asserting that labor market collusion doesn’t matter to the agencies, Steinbaum himself points to “the Justice Department’s 2010 lawsuit against Silicon Valley employers for colluding not to hire one another’s programmers.”

Steinbaum instead opts for a willful misreading of the first sentence of the OECD submission. But what the OECD document refers to, of course, are situations where two firms merge, no market power is created (either in input or output markets), but people are laid off because the merged firm does not need all of, say, the IT and human resources employees previously employed in the pre-merger world.

Does Steinbaum really think this is grounds for challenging the merger on antitrust grounds?

Actually, his post suggests that he does indeed think so, although he doesn’t come right out and say it. What he does say — as he must in order to bring antitrust enforcement to bear on the low- and unskilled labor markets (e.g., burger flippers; retail cashiers; Uber drivers) he purports to care most about — is that:

Employers can have that control [over employees, as opposed to independent contractors] without first establishing themselves as a monopoly—in fact, reclassification [of workers as independent contractors] is increasingly standard operating procedure in many industries, which means that treating it as a violation of Section 2 of the Sherman Act should not require that outright monopolization must first be shown. (Emphasis added).

Honestly, I don’t have any idea what he means. Somehow, because firms hire independent contractors where at one time long ago they might have hired employees… they engage in Sherman Act violations, even if they don’t have market power? Huh?

I get why he needs to try to make this move: As I intimated above, there is probably not a single firm in the world that hires low- or unskilled workers that has anything approaching monopsony power in those labor markets. Even Uber, the example he uses, has nothing like monopsony power, unless perhaps you define the market (completely improperly) as “drivers already working for Uber.” Even then Uber doesn’t have monopsony power: There can be no (or, at best, virtually no) markets in the world where an Uber driver has no other potential employment opportunities but working for Uber.

Moreover, how on earth is hiring independent contractors evidence of anticompetitive behavior? “Reclassification” is not, in fact, “standard operating procedure.” It is the case that in many industries firms often (unilaterally) decide to contract out to specialized firms the hiring of low- and unskilled workers over whom they do not need to exercise direct oversight, thus not employing those workers directly. That isn’t “reclassification” of existing workers who have no choice but to accept their employer’s terms; it’s a long-term evolution of the economy toward specialization, enabled in part by technology.

And if we’re really concerned about what “employee” and “independent contractor” mean for workers and employment regulation, we should reconsider those outdated categories. Firms are faced with a binary choice: hire employees or independent contractors. Neither category really fits many of today’s employment arrangements very well, but that’s the choice firms are given. That they sometimes choose “independent contractor” over “employee” is hardly evidence of anticompetitive conduct meriting antitrust enforcement.

The point is: The notion that any of this is evidence of monopsony power, or that the antitrust enforcement agencies don’t care about monopsony power — because, Bork! — is absurd.

Even more absurd is the notion that the antitrust laws should be used to effect Steinbaum’s preferred market regulations — independent of proof of actual anticompetitive effect. I get that it’s hard to convince Congress to pass the precise laws you want all the time. But simply routing around Congress and using the antitrust statutes as a sort of meta-legislation to enact whatever happens to be Marshall Steinbaum’s preferred regulation du jour is ridiculous.

Which is a point the OECD submission made (again, if only Steinbaum had read beyond the first sentence…):

[T]wo difficulties with expanding the scope of antitrust analysis to include employment concerns warrant discussion. First, a full accounting of employment effects would require consideration of short-term effects, such as likely layoffs by the merged firm, but also long-term effects, which could include employment gains elsewhere in the industry or in the economy arising from efficiencies generated by the merger. Measuring these effects would [be extremely difficult]. Second, unless a clear policy spelling out how the antitrust agency would assess the appropriate weight to give employment effects in relation to the proposed conduct or transaction’s procompetitive and anticompetitive effects could be developed, [such enforcement would be deeply problematic, and essentially arbitrary].

To be sure, the agencies don’t recognize enough that they already face the problem of reconciling multidimensional effects — e.g., short-, medium-, and long-term price effects, innovation effects, product quality effects, etc. But there is no reason to exacerbate the problem by asking them to also consider employment effects. Especially not in Steinbaum’s world in which certain employment effects are problematic even without evidence of market power or even actual anticompetitive harm, just because he says so.

Consider how this might play out:

Suppose that Pepsi, Coca-Cola, Dr. Pepper… and every other soft drink company in the world attempted to merge, creating a monopoly soft drink manufacturer. In what possible employment market would even this merger create a monopsony in which anticompetitive harm could be tied to the merger? In the market for “people who know soft drink secret formulas?” Yet Steinbaum would have the Sherman Act enforced against such a merger not because it might create a product market monopoly, but because the existence of a product market monopoly means the firm must be able to do bad things in other markets, as well. For Steinbaum and all the other scolds who see concentration as the source of all evil, the dearth of evidence to support such a claim is no barrier (on which, see, e.g., this recent, content-less NYT article (that, naturally, quotes Steinbaum) on how “big business may be to blame” for the slowing rate of startups).

The point is, monopoly power in a product market does not necessarily have any relationship to monopsony power in the labor market. Simply asserting that it does — and lambasting the enforcement agencies for not just accepting that assertion — is farcical.

The real question, however, is what has happened to the University of Chicago that it continues to provide a platform for such nonsense?

Regardless of the merits and soundness (or lack thereof) of this week’s European Commission Decision in the Google Shopping case — one cannot assess this until we have the text of the decision — two comments really struck me during the press conference.

First, it was said that Google’s conduct had essentially reduced innovation. If I heard correctly, this is a formidable statement. In 2016, another official EU service published stats that described Alphabet as increasing its R&D by 22% and ranked it as the world’s 4th top R&D investor. Sure it can always be better. And sure this does not excuse everything. But still. The press conference language on incentives to innovate was a bit of an oversell, to say the least.

Second, the Commission views this decision as a “precedent” or as a “framework” that will inform the way dominant Internet platforms should display, intermediate and market their services and those of their competitors. This may fuel additional complaints by other vertical search rivals against (i) Google in relation to other product lines, but also against (ii) other large platform players.

Beyond this, the Commission’s approach raises a gazillion questions of law and economics. Pending the disclosure of the economic evidence in the published decision, let me share some thoughts on a few (arbitrarily) selected legal issues.

First, the Commission has drawn the lesson of the Microsoft remedy quagmire. The Commission refrains from using a trustee to ensure compliance with the decision. This had been a bone of contention in the 2007 Microsoft appeal. Readers will recall that the Commission had required Microsoft to appoint a monitoring trustee, who was supposed to advise on possible infringements in the implementation of the decision. On appeal, the Court eventually held that the Commission was solely responsible for this, and could not delegate those powers. Sure, the Commission could “retai[n] its own external expert to provide advice when it investigates the implementation of the remedies.” But no more than that.

Second, we learn that the Commission is no longer in the business of software design. Recall the failed untying of WMP and Windows — Windows Naked sold only 11,787 copies, likely bought by tech bootleggers willing to acquire the first piece of software ever designed by antitrust officials — or the browser “Choice Screen” compliance saga, which eventually culminated in a €561 million fine. Nothing of this can be found here. The Commission leaves remedial design to the abstract concept of “equal treatment”.[1] This, certainly, is a (relatively) commendable approach, and one that could inspire remedies in other unilateral conduct cases, in particular exploitative conduct ones, where pricing remedies are costly, impractical, and consequentially inefficient.

On the other hand, readers will also not fail to see the corollary implication of “equal treatment”: search neutrality could actually cut both ways, and lead to a lawful degradation in consumer welfare if Google were ever to decide to abandon rich format displays for both its own shopping services and those of rivals.

Third, neither big data nor algorithmic design is directly vilified in the case (“The Commission Decision does not object to the design of Google’s generic search algorithms or to demotions as such, nor to the way that Google displays or organises its search results pages”). In fact, the Commission objects to the selective application of Google’s generic search algorithms to its own products. This is an interesting, and subtle, clarification given all the coverage that this topic has attracted in recent antitrust literature. We are in fact very close to a run-of-the-mill claim of disguised market manipulation, not causally related to data or algorithmic technology.

Fourth, Google said it contemplated a possible appeal of the decision. Now, here’s a challenging question: can an antitrust defendant effectively exercise its right to judicial review of an administrative agency (and more generally its rights of defense), when it operates under the threat of antitrust sanctions in ongoing parallel cases investigated by the same agency (i.e., the antitrust inquiries related to Android and Ads)? This question cuts further than the Google Shopping case. Say firm A contemplates a merger with firm B in market X, while it is at the same time subject to antitrust investigations in market Z. And assume that X and Z are neither substitutes nor complements so there is little competitive relationship between both products. Can the Commission leverage ongoing antitrust investigations in market Z to extract merger concessions in market X? Perhaps more to the point, can the firm interact with the Commission as if the investigations are completely distinct, or does it have to play a more nuanced game and consider the ramifications of its interactions with the Commission in both markets?

Fifth, as to the odds of a possible appeal, I don’t believe that arguments on the economic evidence or legal theory of liability will ever be successful before the General Court of the EU. The law and doctrine in unilateral conduct cases are disturbingly — and almost irrationally — severe. As I have noted elsewhere, the bottom line in the EU case-law on unilateral conduct is to consider the genuine requirement of “harm to competition” as a rhetorical question, not an empirical one. In EU unilateral conduct law, exclusion of every and any firm is a per se concern, regardless of evidence of efficiency, entry or rivalry.

In turn, I tend to opine that Google has a stronger game from a procedural standpoint, having been left with (i) the expectation of a settlement (it played ball three times by making proposals); (ii) a corollary expectation of the absence of a fine (settlement discussions are not appropriate for cases that could end with fines); and (iii) a full seven long years of an investigatory cloud. We know from the past that EU judges like procedural issues, but are comparatively less keen to debate the substance of the law in unilateral conduct cases. This case could thus be a test case in terms of setting boundaries on how freely the Commission can U-turn a case (the Commissioner said “take the case forward in a different way”).

Today, the Senate Committee on Health, Education, Labor, and Pensions (HELP) enters the drug pricing debate with a hearing on “The Cost of Prescription Drugs: How the Drug Delivery System Affects What Patients Pay.”  By questioning the role of the drug delivery system in pricing, the hearing goes beyond the narrower focus of recent hearings that have explored how drug companies set prices.  Instead, today’s hearing will explore how pharmacy benefit managers, insurers, providers, and others influence the amounts that patients pay.

In 2016, net U.S. drug spending increased by 4.8% to $323 billion (after adjusting for rebates and off-invoice discounts).  This rate of growth slowed to less than half the rates of 2014 and 2015, when net drug spending grew at rates of 10% and 8.9% respectively.  Yet despite the slowing in drug spending, the public outcry over the cost of prescription drugs continues.

In today’s hearing, there will be testimony both on the various causes of drug spending increases and on various proposals that could reduce the cost of drugs.  Several of the proposals will focus on ways to increase competition in the pharmaceutical industry, and in turn, reduce drug prices.  I have previously explained several ways that the government could reduce prices through enhanced competition, including reducing the backlog of generic drugs awaiting FDA approval and expediting the approval and acceptance of biosimilars.  Other proposals today will likely call for regulatory reforms to enable innovative contractual arrangements that allow for outcome- or indication-based pricing and other novel reimbursement designs.

However, some proposals will undoubtedly return to the familiar call for more government negotiation of drug prices, especially drugs covered under Medicare Part D.  As I’ve discussed in a previous post, in order for government negotiation to significantly lower drug prices, the government must be able to put pressure on drug makers to secure price concessions. This could be achieved if the government could set prices administratively, penalize manufacturers that don’t offer price reductions, or establish a formulary.  Setting prices or penalizing drug makers that don’t reduce prices would produce the same disastrous effects as price controls: drug shortages in certain markets, increased prices for non-Medicare patients, and reduced incentives for innovation. A government formulary for Medicare Part D coverage would provide leverage to obtain discounts from manufacturers, but it would mean that many patients could no longer access some of their optimal drugs.

As lawmakers seriously consider changes that would produce these negative consequences, industry would do well to voluntarily constrain prices.  Indeed, in the last year, many drug makers have pledged to limit price increases to keep drug spending under control.  Allergan was first, with its “social contract” introduced last September that promised to keep price increases below 10 percent. Since then, Novo Nordisk, AbbVie, and Takeda have also voluntarily committed to single-digit price increases.

So far, the evidence shows the drug makers are sticking to their promises. Allergan has raised the price of U.S. branded products by an average of 6.7% in 2017, and no drug’s list price has increased by more than single digits.  In contrast, Pfizer, which has made no pricing commitment, has raised the price of many of its drugs by 20%.

If more drug makers brought about meaningful change by committing to voluntary pricing restraints, the industry could prevent the market-distorting consequences of government intervention while helping patients afford the drugs they need.   Moreover, avoiding intrusive government mandates and price controls would preserve drug innovation that has brought life-saving and life-enhancing drugs to millions of Americans.


Nicolas Petit is Professor of Law at the University of Liege (Belgium) and Research Professor at the University of South Australia (UniSA)

This symposium offers a good opportunity to look again into the complex relation between concentration and innovation in antitrust policy. Whilst the details of the EC decision in Dow/Dupont remain unknown, the press release suggests that the issue of “incentives to innovate” was central to the review. Contrary to what had leaked in the antitrust press, the decision has apparently backed off from the introduction of a new “model”, and instead followed a more cautious approach. After a quick reminder of the conventional “appropriability v cannibalization” framework that drives merger analysis in innovation markets (1), I make two sets of hopefully innovative remarks on appropriability and IP rights (2) and on cannibalization in the ag-biotech sector (3).

Appropriability versus cannibalization

Antitrust economics 101 teaches that mergers affect innovation incentives in two polar ways. A merger may increase innovation incentives. This occurs when the increment in power over price or output achieved through merger enhances the appropriability of the social returns to R&D. The appropriability effect of mergers is often tied to Joseph Schumpeter, who observed that the use of “protecting devices” for past investments like patent protection or trade secrecy constituted a “normal elemen[t] of rational management”. The appropriability effect can in principle be observed both at the firm level (firm-specific incentives) and at the industry level (industry-wide incentives), because actual or potential competitors can also use the M&A market to appropriate the payoffs of R&D investments.

But a merger may decrease innovation incentives. This happens when the increased industry position achieved through merger discourages the introduction of new products, processes or services. This is because an invention will cannibalize the merged entity’s profits in larger proportion than would be the case in a more competitive market structure. This idea is often tied to Kenneth Arrow, who famously observed that a “preinvention monopoly power acts as a strong disincentive to further innovation”.

Schumpeter’s appropriability hypothesis and Arrow’s cannibalization theory continue to drive much of the discussion on concentration and innovation in antitrust economics. True, many efforts have been made to overcome, reconcile or bypass both views of the world. Recent studies by Carl Shapiro and Jon Baker are worth mentioning. But Schumpeter and Arrow remain sticky references in any discussion of the issue. Perhaps more than anything, the persistence of their ideas suggests that both hit on something fundamental when they made their seminal contributions, laying down two systems of belief about the workings of innovation-driven markets.

Now, beyond the theory, the appropriability v cannibalization gravitational models provide from the outset an appealing framework for the examination of mergers in R&D-driven industries in general. From an operational perspective, the antitrust agency will attempt to understand whether the transaction increases appropriability – which leans in favour of clearance – or cannibalization – which leans in favour of remediation. At the same time, however, the downside of the appropriability v cannibalization framework (and of any framework more generally) may be to oversimplify our understanding of complex phenomena. This, in turn, prompts two important observations on each branch of the framework.

Appropriability and IP rights

Any antitrust agency committed to promoting competition and innovation should consider mergers in light of the degree of appropriability afforded by existing protecting devices (essentially contracts and entitlements). This is where Intellectual Property (“IP”) rights become relevant to the discussion. In an industry with strong IP rights, the merging parties (and their rivals) may be able to appropriate the social returns to R&D without further corporate concentration. Put differently, the stronger the IP rights, the lower the incremental contribution of a merger transaction to innovation, and the higher the case for remediation.

This latter proposition, however, rests on a heavy assumption: that IP rights confer perfect appropriability. The point is, however, far from obvious. Most of us know that – and our antitrust agencies’ misgivings with other sectors confirm it – IP rights are probabilistic in nature. There is (i) no certainty that R&D investments will lead to commercially successful applications; (ii) no guarantee that IP rights will withstand invalidity proceedings in court; (iii) little protection against competition from other product applications that do not practice the IP but provide substitute functionality; and (iv) no inevitability that the environmental, toxicological and regulatory authorization rights that (often) accompany IP rights will not be cancelled when legal requirements change. Arrow himself called for caution, noting that “Patent laws would have to be unimaginably complex and subtle to permit [such] appropriation on a large scale”. A thorough inquiry into the specific industry-strength of IP rights that goes beyond patent data and statistics thus constitutes a necessary step in merger review.

But it is not a sufficient one. The proposition that strong IP rights provide appropriability is essentially valid if the observed pre-merger market situation is one where several IP owners compete on differentiated products and as a result wield a degree of market power. In contrast, the proposition is essentially invalid if the observed pre-merger market situation leans more towards the competitive equilibrium and IP owners compete at prices closer to costs. In both variants, the agency should thus look carefully at the level and evolution of prices and costs, including R&D ones, in the pre-merger industry. Moreover, in the second variant, the agency ought to consider as a favourable appropriability factor any increase of the merging entity’s power over price, but also any improvement of its power over cost. By this, I have in mind efficiency benefits, which can arise as the result of economies of scale (in manufacturing but also in R&D), but also when the transaction combines complementary technological and marketing assets. In Dow/Dupont, no efficiency argument has apparently been made by the parties, so it is difficult to understand if and how such issues have played a role in the Commission’s assessment.

Cannibalization, technological change, and drastic innovation

Arrow’s cannibalization theory – namely that a pre-invention monopoly acts as a strong disincentive to further innovation – fails to capture that successful inventions create new technology frontiers, and with them entirely novel needs that even a monopolist has an incentive to serve. This can be understood with an example taken from the ag-biotech field. It is undisputed that progress in crop protection science has led to an expanding range of resistant insects, weeds, and pathogens. This, in turn, is one of the key drivers (if not the main driver) of ag-tech research. In a 2017 paper published in Pest Management Science, Sparks and Lorsbach observe that:

resistance to agrochemicals is an ongoing driver for the development of new chemical control options, along with an increased emphasis on resistance management and how these new tools can fit into resistance management programs. Because resistance is such a key driver for the development of new agrochemicals, a highly prized attribute for a new agrochemical is a new MoA [mode of action] that is ideally a new molecular target either in an existing target site (e.g., an unexploited binding site in the voltage-gated sodium channel), or new/under-utilized target site such as calcium channels.

This, and other factors, leads them to conclude that:

even with fewer companies overall involved in agrochemical discovery, innovation continues, as demonstrated by the continued introduction of new classes of agrochemicals with new MoAs.

Sparks, Hahn, and Garizi make a similar point. They stress in particular that the discovery of natural products (NPs), which are the “output of nature’s chemical laboratory,” is today a main driver of crop protection research. According to them:

NPs provide very significant value in identifying new MoAs, with 60% of all agrochemical MoAs being, or could have been, defined by a NP. This information again points to the importance of NPs in agrochemical discovery, since new MoAs remain a top priority for new agrochemicals.

More generally, the point is not that Arrow’s cannibalization theory is wrong. Arrow’s work convincingly explains monopolists’ low incentives to invest in substitute invention. Instead, the point is that Arrow’s cannibalization theory is narrower than often assumed in the antitrust policy literature. Admittedly, Arrow’s cannibalization theory is relevant in industries primarily driven by a process of cumulative innovation. But it is much less helpful to understand the incentives of a monopolist in industries subject to technological change. As a result of this, the first question that should guide an antitrust agency investigation is empirical in nature: is the industry under consideration one driven by cumulative innovation, or one where technology disruption, shocks, and serendipity incentivize drastic innovation?

Note that exogenous factors beyond technological frontiers also promote drastic innovation. This point ought not to be overlooked. A sizeable amount of the specialist scientific literature stresses the powerful innovation incentives created by changing dietary habits, new diseases (e.g. the Zika virus), global population growth, and environmental challenges like climate change and weather extremes. In 2015, Jeschke noted:

In spite of the significant consolidation of the agrochemical companies, modern agricultural chemistry is vital and will have the opportunity to shape the future of agriculture by continuing to deliver further innovative integrated solutions. 

Words of wisdom and caution for antitrust agencies tasked with the complex mission of reviewing mergers in the ag-biotech industry?

In a weekend interview with the Washington Post, Donald Trump vowed to force drug companies to negotiate directly with the government on prices in Medicare and Medicaid.  It’s unclear what, if anything, Trump intends for Medicaid; drug makers are already required to sell drugs to Medicaid at the lowest price they negotiate with any other buyer.  For Medicare, Trump didn’t offer any more details about the intended negotiations, but he’s referring to his campaign proposals to allow the Department of Health and Human Services (HHS) to negotiate directly with manufacturers the prices of drugs covered under Medicare Part D.

Such proposals have been around for quite a while.  As soon as the Medicare Modernization Act (MMA) of 2003 was enacted, creating the Medicare Part D prescription drug benefit, many lawmakers began advocating for government negotiation of drug prices. Both Hillary Clinton and Bernie Sanders favored this approach during their campaigns, and the Obama Administration’s proposed budget for fiscal years 2016 and 2017 included a provision that would have allowed the HHS to negotiate prices for a subset of drugs: biologics and certain high-cost prescription drugs.

However, federal law would have to change if there is to be any government negotiation of drug prices under Medicare Part D. Congress explicitly included a “noninterference” clause in the MMA that stipulates that HHS “may not interfere with the negotiations between drug manufacturers and pharmacies and PDP sponsors, and may not require a particular formulary or institute a price structure for the reimbursement of covered part D drugs.”

Most people don’t understand what it means for the government to “negotiate” drug prices and the implications of the various options.  Some proposals would simply eliminate the MMA’s noninterference clause and allow HHS to negotiate prices for a broad set of drugs on behalf of Medicare beneficiaries.  However, the Congressional Budget Office has already concluded that such a plan would have “a negligible effect on federal spending” because it is unlikely that HHS could achieve deeper discounts than the current private Part D plans (there are 746 such plans in 2017).  The private plans are currently able to negotiate significant discounts from drug manufacturers by offering preferred formulary status for their drugs and channeling enrollees to the formulary drugs with lower cost-sharing incentives. In most drug classes, manufacturers compete intensely for formulary status and offer considerable discounts to be included.

The private Part D plans are required to provide only two drugs in each of several drug classes, giving the plans significant bargaining power over manufacturers by threatening to exclude their drugs.  However, in six protected classes (immunosuppressant, anti-cancer, anti-retroviral, antidepressant, antipsychotic and anticonvulsant drugs), private Part D plans must include “all or substantially all” drugs, thereby eliminating their bargaining power and ability to achieve significant discounts.  Although the purpose of the limitation is to prevent plans from cherry-picking customers by denying coverage of certain high cost drugs, giving the private Part D plans more ability to exclude drugs in the protected classes should increase competition among manufacturers for formulary status and, in turn, lower prices.  And it’s important to note that these price reductions would not involve any government negotiation or intervention in Medicare Part D.  However, as discussed below, excluding more drugs in the protected classes would reduce the value of the Part D plans to many patients by limiting access to preferred drugs.

For government negotiation to make any real difference on Medicare drug prices, HHS must have the ability to not only negotiate prices, but also to put some pressure on drug makers to secure price concessions.  This could be achieved by allowing HHS to also establish a formulary, set prices administratively, or take other regulatory actions against manufacturers that don’t offer price reductions.  Setting prices administratively or penalizing manufacturers that don’t offer satisfactory reductions would be tantamount to a price control.  I’ve previously explained that price controls—whether direct or indirect—are a bad idea for prescription drugs for several reasons. Evidence shows that price controls lead to higher initial launch prices for drugs, increased drug prices for consumers with private insurance coverage,  drug shortages in certain markets, and reduced incentives for innovation.

Giving HHS the authority to establish a formulary for Medicare Part D coverage would provide leverage to obtain discounts from manufacturers, but it would produce other negative consequences.  Currently, private Medicare Part D plans cover an average of 85% of the 200 most popular drugs, with some plans covering as much as 93%.  In contrast, the drug benefit offered by the Department of Veterans Affairs (VA), one government program that is able to set its own formulary to achieve leverage over drug companies, covers only 59% of the 200 most popular drugs.  The VA’s ability to exclude drugs from the formulary has generated significant price reductions. Indeed, estimates suggest that if the Medicare Part D formulary was restricted to the VA offerings and obtained similar price reductions, it would save Medicare Part D $510 per beneficiary.  However, the loss of access to so many popular drugs would reduce the value of the Part D plans by $405 per enrollee, greatly narrowing the net gains.
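Taken at face value, those estimates imply a net gain of only about $510 - $405 = $105 per enrollee once the lost access to popular drugs is counted against the negotiated savings.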

History has shown that consumers don’t like their access to drugs reduced.  In 2014, Medicare proposed to take antidepressants, antipsychotic and immunosuppressant drugs off the protected list, thereby allowing the private Part D plans to reduce offerings of these drugs on the formulary and, in turn, reduce prices.  However, patients and their advocates were outraged at the possibility of losing access to their preferred drugs, and the proposal was quickly withdrawn.

Thus, allowing the government to negotiate prices under Medicare Part D could carry important negative consequences.  Policy-makers must fully understand what it means for government to negotiate directly with drug makers, and what the potential consequences are for price reductions, access to popular drugs, drug innovation, and drug prices for other consumers.