
Next week the FCC is slated to vote on the second iteration of Chairman Wheeler’s proposed broadband privacy rules. Of course, as has become all too common, none of us outside the Commission has actually seen the proposal. But earlier this month Chairman Wheeler released a Fact Sheet that suggests some of the ways it would update the rules he initially proposed.

According to the Fact Sheet, the new proposed rules are

designed to evolve with changing technologies and encourage innovation, and are in harmony with other key privacy frameworks and principles — including those outlined by the Federal Trade Commission and the Administration’s Consumer Privacy Bill of Rights.

Unfortunately, the Chairman’s proposal appears to fall short of the mark on both counts.

As I discuss in detail in a letter filed with the Commission yesterday, despite the Chairman’s rhetoric, the rules described in the Fact Sheet fail to align with the FTC’s approach to privacy regulation embodied in its 2012 Privacy Report in at least two key ways:

  • First, the Fact Sheet significantly expands the scope of information that would be considered “sensitive” beyond that contemplated by the FTC. That, in turn, would impose onerous and unnecessary consumer consent obligations on commonplace uses of data, undermining consumer welfare, depriving consumers of information and access to new products and services, and restricting competition.
  • Second, unlike the FTC’s framework, the proposal described by the Fact Sheet ignores the crucial role of “context” in determining the appropriate level of consumer choice before affected companies may use consumer data. Instead, the Fact Sheet takes a rigid, acontextual approach that would stifle innovation and harm consumers.

The Chairman’s proposal moves far beyond the FTC’s definition of “sensitive” information requiring “opt-in” consent

The FTC’s privacy guidance is, in its design at least, appropriately flexible, aimed at balancing the immense benefits of information flows with sensible consumer protections. Thus it eschews an “inflexible list of specific practices” that would automatically trigger onerous consent obligations and “risk[] undermining companies’ incentives to innovate and develop new products and services….”

Under the FTC’s regime, depending on the context in which it is used (on which see the next section, below), the sensitivity of data delineates the difference between data uses that require “express affirmative” (opt-in) consent and those that do not (requiring only “other protections” short of opt-in consent — e.g., opt-out).

Because the distinction is so important — because opt-in consent is much more likely to stanch data flows — the FTC endeavors to provide guidance as to what data should be considered sensitive, and to cabin the scope of activities requiring opt-in consent. Thus, the FTC explains that “information about children, financial and health information, Social Security numbers, and precise geolocation data [should be treated as] sensitive.” But beyond those instances, the FTC doesn’t consider any other type of data as inherently sensitive.

By contrast, and without explanation, Chairman Wheeler’s Fact Sheet significantly expands what constitutes “sensitive” information requiring “opt-in” consent by adding “web browsing history,” “app usage history,” and “the content of communications” to the list of categories of data deemed sensitive in all cases.

By treating some of the most common and important categories of data as always “sensitive,” and by making the sensitivity of data the sole determinant for opt-in consent, the Chairman’s proposal would make it almost impossible for ISPs to make routine (to say nothing of innovative), appropriate, and productive uses of data comparable to those undertaken by virtually every major Internet company.  This goes well beyond anything contemplated by the FTC — with no evidence of any corresponding benefit to consumers and with obvious harm to competition, innovation, and the overall economy online.

And because the Chairman’s proposal would impose these inappropriate and costly restrictions only on ISPs, it would create a barrier to competition by ISPs in other platform markets, without offering a defensible consumer protection rationale to justify either the disparate treatment or the restriction on competition.

As Fred Cate and Michael Staten have explained,

“Opt-in” offers no greater privacy protection than allowing consumers to “opt-out”…, yet it imposes significantly higher costs on consumers, businesses, and the economy.

Not surprisingly, these costs fall disproportionately on the relatively poor and the less technology-literate. In the former case, opt-in requirements may deter companies from offering services at all, even to people who would make a very different trade-off between privacy and monetary price. In the latter case, because an initial decision to opt-in must be taken in relative ignorance, users without much experience to guide their decisions will face effectively higher decision-making costs than more knowledgeable users.

The Chairman’s proposal ignores the central role of context in the FTC’s privacy framework

In part for these reasons, central to the FTC’s more flexible framework is the establishment of a sort of “safe harbor” for data uses where the benefits clearly exceed the costs and consumer consent may be inferred:

Companies do not need to provide choice before collecting and using consumer data for practices that are consistent with the context of the transaction or the company’s relationship with the consumer….

Thus for many straightforward uses of data, the “context of the transaction,” not the asserted “sensitivity” of the underlying data, is the threshold question in evaluating the need for consumer choice in the FTC’s framework.

Chairman Wheeler’s Fact Sheet, by contrast, ignores this central role of context in its analysis. Instead, it focuses solely on data sensitivity, claiming that doing so is “in line with customer expectations.”

But this is inconsistent with the FTC’s approach.

In fact, the FTC’s framework explicitly rejects a pure “consumer expectations” standard:

Rather than relying solely upon the inherently subjective test of consumer expectations, the… standard focuses on more objective factors related to the consumer’s relationship with a business.

And while everyone agrees that sensitivity is a key part of pegging privacy regulation to actual consumer and corporate relationships, the FTC also recognizes that the importance of the sensitivity of the underlying data varies with the context in which it is used. Or, in the words of the White House’s 2012 Consumer Data Privacy in a Networked World Report (introducing its Consumer Privacy Bill of Rights), “[c]ontext should shape the balance and relative emphasis of particular principles” guiding the regulation of privacy.

By contrast, Chairman Wheeler’s “sensitivity-determines-consumer-expectations” framing is a transparent attempt to claim fealty to the FTC’s (and the Administration’s) privacy standards while actually implementing a privacy regime that is flatly inconsistent with them.

The FTC’s approach isn’t perfect, but that’s no excuse to double down on its failings

The FTC’s privacy guidance, and even more so its privacy enforcement practices under Section 5, are far from perfect. The FTC should be commended for its acknowledgement that consumers’ privacy preferences and companies’ uses of data will change over time, and that there are trade-offs inherent in imposing any constraints on the flow of information. But even the FTC fails to actually assess the magnitude of the costs and benefits of, and the deep complexities involved in, the trade-off, and puts an unjustified thumb on the scale in favor of limiting data use.  

But that’s no excuse for Chairman Wheeler to ignore what the FTC gets right, and to double down on its failings. Based on the Fact Sheet (and the initial NPRM), it’s a virtual certainty that the Chairman’s proposal doesn’t heed the FTC’s refreshing call for humility and flexibility regarding the application of privacy rules to ISPs (and other Internet platforms):

These are complex and rapidly evolving areas, and more work should be done to learn about the practices of all large platform providers, their technical capabilities with respect to consumer data, and their current and expected uses of such data.

The rhetoric of the Chairman’s Fact Sheet is correct: the FCC should in fact conform its approach to privacy to the framework established by the FTC. Unfortunately, the reality of the Fact Sheet simply doesn’t comport with its rhetoric.

As the FCC’s vote on the Chairman’s proposal rapidly nears, and in light of its significant defects, we can only hope that the rest of the Commission refrains from reflexively adopting the proposed regime, and works to ensure that these problematic deviations from the FTC’s framework are addressed before moving forward.

Today ICLE released a white paper entitled A critical assessment of the latest charge of Google’s anticompetitive bias from Yelp and Tim Wu.

The paper is a comprehensive response to a study by Michael Luca, Timothy Wu, Sebastian Couvidat, Daniel Frank, & William Seltzer, entitled Is Google degrading search? Consumer harm from Universal Search.

The Wu et al. paper will be one of the main topics of discussion at today’s Capitol Forum and George Washington Institute of Public Policy event on Dominant Platforms Under the Microscope: Policy Approaches in the US and EU, at which I will be speaking — along with a host of luminaries including, inter alia, Josh Wright, Jonathan Kanter, Allen Grunes, Catherine Tucker, and Michael Luca — one of the authors of the Universal Search study.

Follow the link above to register — the event starts at noon today at the National Press Club.

Meanwhile, here’s a brief description of our paper:

Late last year, Tim Wu of Columbia Law School (and now the White House Office of Management and Budget), Michael Luca of Harvard Business School (and a consultant for Yelp), and a group of Yelp data scientists released a study claiming that Google has been purposefully degrading search results from its more-specialized competitors in the area of local search. The authors’ claim is that Google is leveraging its dominant position in general search to thwart competition from specialized search engines by favoring its own, less-popular, less-relevant results over those of its competitors:

To improve the popularity of its specialized search features, Google has used the power of its dominant general search engine. The primary means for doing so is what is called the “universal search” or the “OneBox.”

This is not a new claim, and researchers have been attempting (and failing) to prove Google’s “bias” for some time. Likewise, these critics have drawn consistent policy conclusions from their claims, asserting that antitrust violations lie at the heart of the perceived bias. But the studies are systematically marred by questionable methodology and bad economics.

This latest study by Tim Wu, along with a cadre of researchers employed by Yelp (one of Google’s competitors and one of its chief antitrust provocateurs), fares no better, employing slightly different but equally questionable methodology, bad economics, and a smattering of new, but weak, social science. (For a thorough criticism of the inherent weaknesses of Wu et al.’s basic social science methodology, see Miguel de la Mano, Stephen Lewis, and Andrew Leyden, Focus on the Evidence: A Brief Rebuttal of Wu, Luca, et al. (2016), available here).

The basic thesis of the study is that Google purposefully degrades its local searches (e.g., for restaurants, hotels, services, etc.) to the detriment of its specialized search competitors, local businesses, consumers, and even Google’s bottom line — and that this is an actionable antitrust violation.

But in fact the study shows nothing of the kind. Instead, the study is marred by methodological problems that, in the first instance, make it impossible to draw any reliable conclusions. Nor does the study show that Google’s conduct creates any antitrust-relevant problems. Rather, the construction of the study and the analysis of its results reflect a superficial and inherently biased conception of consumer welfare that completely undermines the study’s purported legal and economic conclusions.

Read the whole thing here.

Mylan Pharmaceuticals recently reinvigorated the public outcry over pharmaceutical price increases when news surfaced that the company had raised the price of EpiPens by more than 500% over the past decade and, purportedly, had plans to increase the price even more. The Mylan controversy comes on the heels of several notorious pricing scandals last year. Recall Valeant Pharmaceuticals, which acquired the cardiac drugs Isuprel and Nitropress and then quickly raised their prices by 525% and 212%, respectively. And of course, who can forget Martin Shkreli of Turing Pharmaceuticals, who increased the price of the toxoplasmosis treatment Daraprim by 5,000% and then claimed he should have raised the price even higher.

However, one company, pharmaceutical giant Allergan, seems to be taking a different approach to pricing. Last week, Allergan CEO Brent Saunders condemned the scandalous price increases that have cast suspicion on drug companies and placed the entire industry in the political hot seat. In an entry on the company’s blog, Saunders issued Allergan’s “social contract with patients,” which made several drug pricing commitments to its customers.

Some of the most important commitments Allergan made to its customers include:

  • A promise not to increase prices more than once a year, and to limit price increases to single-digit percentages.
  • A pledge to improve patient access to Allergan medications by enhancing patient assistance programs in 2017.
  • A vow to cooperate with policymakers and payers (including government drug plans, private insurers, and pharmacy benefit managers) to facilitate better access to Allergan products by offering pricing discounts and paying rebates to lower drug costs.
  • An assurance that Allergan will no longer engage in the common industry tactic of dramatically increasing prices for branded drugs nearing patent expiry without cost increases that justify them.
  • A commitment to provide annual updates on how pricing affects Allergan’s business.
  • A pledge to price Allergan products in a way that is commensurate with, or lower than, the value they create.

Saunders also makes several non-pricing pledges to maintain a continuous supply of its drugs, diligently monitor the safety of its products, and appropriately educate physicians about its medicines. He also makes the point that the recent pricing scandals have shifted attention away from the vibrant medical innovation ecosystem that develops new life-saving and life-enhancing drugs. Saunders contends that the focus on pricing by regulators and the public has incited suspicions about this innovation ecosystem: “This ecosystem can quickly fall apart if it is not continually nourished with the confidence that there will be a longer term opportunity for appropriate return on investment in the long R&D journey.”

Policymakers and the public would be wise to focus on the importance of brand drug innovation. Brand drug companies are largely responsible for pharmaceutical innovation. Since 2000, brand companies have spent over half a trillion dollars on R&D, and they currently account for over 90 percent of the spending on the clinical trials necessary to bring new drugs to market. As a result of this spending, over 550 new drugs have been approved by the FDA since 2000, and another 7,000 are currently in development globally. And this innovation is directly tied to health advances. Empirical estimates of the benefits of pharmaceutical innovation indicate that each new drug brought to market saves 11,200 life-years each year. Moreover, new drugs save money by reducing doctor visits, hospitalizations, and other medical procedures; ultimately, for every $1 spent on new drugs, total medical spending decreases by more than $7.

But, as Saunders suggests, this innovation depends on drugmakers earning a sufficient return on their investment in R&D. The costs to bring a new drug to market with FDA approval are now estimated at over $2 billion, and only 1 in 10 drugs that begin clinical trials is ever approved by the FDA. Brand drug companies must price a drug not only to recoup that drug’s own costs but also to cover the costs of all the product failures in their pipelines. However, they have a very limited window in which to recoup these costs before generic competition destroys brand profits: within three months of the first generic entry, generics have already captured over 70 percent of the brand drug’s market. Drug companies must be able to price drugs at a level where they can earn profits sufficient to offset their R&D costs and the risk of failures. Failure to cover these costs will slow investment in R&D; drug companies will not spend millions and billions of dollars developing drugs if they cannot recoup the costs of that development.

Yet several recent proposals threaten to control prices in a way that could prevent drug companies from earning a sufficient return on their investment in R&D. Ultimately, we must remember that a social contract involves commitment from all members of a group; it should involve commitments from drug companies to price responsibly, and commitments from the public and policy makers to protect innovation. Hopefully, more drug companies will follow Allergan’s lead and renounce the exorbitant price increases we’ve seen in recent times. But in return, we should all remember that innovation and, in turn, health improvements, depend on drug companies’ profitability.

Since the European Commission (EC) announced its first inquiry into Google’s business practices in 2010, the company has been the subject of lengthy investigations by courts and competition agencies around the globe. Regulatory authorities in the United States, France, the United Kingdom, Canada, Brazil, and South Korea have all investigated — and rejected — similar antitrust claims.

And yet the EC marches on, bolstered by Google’s myriad competitors, who continue to agitate for further investigations and enforcement actions, even as we — companies and consumers alike — enjoy the benefits of an increasingly dynamic online marketplace.

Indeed, while the EC has spent more than half a decade casting about for some plausible antitrust claim, the online economy has thundered ahead. Since 2010, Facebook has tripled its active users and multiplied its revenue ninefold; the number of apps available in the Amazon app store has grown from fewer than 4,000 to over 400,000 today; and there are almost 1.5 billion more Internet users globally than there were in 2010. And consumers are increasingly using new and different ways to search for information: Amazon’s Alexa, Apple’s Siri, Microsoft’s Cortana, and Facebook’s Messenger are a few of the many new innovations challenging traditional search engines.

Advertisers have adapted to this evolution, moving increasingly online, and from search to display ads as mobile adoption has skyrocketed. Social networks like Twitter and Snapchat have come into their own, competing for the same (and ever-increasing) advertising dollars. For marketers, advertising on social networks is now just as important as advertising in search. No wonder e-commerce sales have more than doubled, to almost $2 trillion worldwide; for the first time, consumers purchased more online than in stores this past year.

To paraphrase Louis C.K.: Everything is amazing — and no one at the European Commission is happy.

The EC’s market definition is fatally flawed

Like its previous claims, the Commission’s most recent charges are rooted in the assertion that Google abuses its alleged dominance in “general search” advertising to unfairly benefit itself and to monopolize other markets. But European regulators continue to miss the critical paradigm shift among online advertisers and consumers that has upended this stale view of competition on the Internet. The reality is that Google’s competition may not, and need not, look exactly like Google itself, but it is competition nonetheless. And it’s happening in spades.

The key to understanding why the European Commission’s case is fundamentally flawed lies in an examination of how it defines the relevant market. Through a series of economically and factually unjustified assumptions, the Commission defines search as a distinct market in which Google faces limited competition and enjoys an 80% market share. In other words, for the EC, “general search” apparently means only nominal search providers like Google and Bing; it doesn’t mean companies like Amazon, Facebook and Twitter — Google’s biggest competitors.  

But the reality is that “general search” is just one technology among many for serving information and ads to consumers online. Defining the relevant market or limiting the definition of competition in terms of the particular mechanism that Google happens to use to match consumers and advertisers doesn’t reflect the substitutability of other mechanisms that do the same thing — merely because these mechanisms aren’t called “search.”

Properly defined, the market in which Google competes online is not search, but something more like online “matchmaking” between advertisers, retailers and consumers. And this market is enormously competitive.

Consumers today are increasingly using platforms like Amazon and Facebook as substitutes for the searches they might have run on Google or Bing. “Closed” platforms like the iTunes store and innumerable apps handle copious search traffic but don’t figure in the EC’s market calculations. And so-called “dark social” interactions like email, text messages, and IMs drive huge amounts of some of the most valuable traffic on the Internet. This, in turn, has led to a competitive scramble to roll out completely new technologies like chatbots to meet consumers’ informational (and merchants’ advertising) needs.

Properly construed, Google’s market position is precarious

Like Facebook and Twitter (and practically every other Internet platform), Google earns its revenue primarily from advertising. Instead of charging for fancy hardware or offering services to users for a fee, Google offers search, the Android operating system, and a near-endless array of other valuable services for free to users. The company’s very existence relies on attracting Internet users and consumers to its properties in order to effectively connect them with advertisers.

But being an online matchmaker is a difficult and competitive enterprise. Among other things, the ability to generate revenue turns crucially on the quality of the match: All else equal, an advertiser interested in selling widgets will pay more for an ad viewed by a user who can be reliably identified as being interested in buying widgets.

Google’s primary mechanism for attracting users to match with advertisers — general search — is substantially about information, not commerce, and the distinction between product and informational searches is crucially important to understanding Google’s market and the surprisingly limited and tenuous market power it possesses.

General informational queries aren’t nearly as valuable to advertisers: Significantly, only about 30 percent of Google’s searches even trigger any advertising at all. Meanwhile, as of 2012, one-third of product searches started on Amazon while only 13% started on a general search engine.

As economist Hal Singer aptly noted in 2012,

[the data] suggest that Google lacks market power in a critical segment of search — namely, product searches. Even though searches for items such as power tools or designer jeans account for only 10 to 20 percent of all searches, they are clearly some of the most important queries for search engines from a business perspective, as they are far easier to monetize than informational queries like “Kate Middleton.”

While Google Search clearly offers substantial value to advertisers, its ability to continue to do so is precarious when confronted with the diverse array of competitors that, like Facebook, offer a level of granularity in audience targeting that general search can’t match, or that, like Amazon, systematically offer up the most valuable searchers.

In order to compete in this market — one properly defined to include actual competitors — Google has had to constantly innovate to maintain its position. Unlike a complacent monopolist, it has evolved to meet changing consumer demand, shifting technology and inventive competitors. Thus, Google’s search algorithm has changed substantially over the years to make more effective use of the information available to ensure relevance; search results have evolved to give consumers answers to queries rather than just links, and to provide more-direct access to products and services; and, as users have shifted more and more of their time and attention to mobile devices, search has incorporated more-localized results.

Competitors want a free lunch

Critics complain, nevertheless, that these developments have made it harder, in one way or another, for rivals to compete. And the EC has provided a willing ear. According to Commissioner Vestager last week:

Google has come up with many innovative products that have made a difference to our lives. But that doesn’t give Google the right to deny other companies the chance to compete and innovate. Today, we have further strengthened our case that Google has unduly favoured its own comparison shopping service in its general search result pages…. (Emphasis added).

Implicit in this statement is the remarkable assertion that by favoring its own comparison shopping services, Google “den[ies] other companies the chance to compete and innovate.” Even assuming Google does “favor” its own results, this is an astounding claim.

First, it is not a violation of competition law simply to treat competitors’ offerings differently than one’s own, even for a dominant firm. Instead, conduct must actually exclude competitors from the market, without offering countervailing advantages to consumers. But Google’s conduct is not exclusionary, and there are many benefits to consumers.

As it has from the start of its investigations of Google, the EC begins with a flawed assumption: that Google’s competitors both require, and may be entitled to, unfettered access to Google’s property in order to compete. But this is patently absurd. Google is not an essential facility: Billions of users reach millions of companies every day through direct browser navigation, apps, email links, review sites and blogs, and countless other means — all without once touching Google.

Google Search results do not exclude competitors, whether comparison shopping sites or others. For example, 72% of TripAdvisor’s U.S. traffic comes from search, and almost all of that from organic results; other specialized search sites see similar traffic volumes.

More important, however, in addition to continuing to reach rival sites through Google Search, billions of consumers access rival services directly through their mobile apps. In fact, for Yelp,

Approximately 21 million unique devices accessed Yelp via the mobile app on a monthly average basis in the first quarter of 2016, an increase of 32% compared to the same period in 2015. App users viewed approximately 70% of page views in the first quarter and were more than 10 times as engaged as website users, as measured by number of pages viewed. (Emphasis added).

And a staggering 40 percent of mobile browsing is now happening inside the Facebook app, competing with the browsers and search engines pre-loaded on smartphones.

Millions of consumers also directly navigate to Google’s rivals by simply typing those rivals’ addresses into the browser’s address bar. And as noted above, consumers are increasingly using Google rivals’ new disruptive information engines like Alexa and Siri for their search needs. Even the traditional search engine space is competitive — in fact, according to Wired, as of July 2016:

Microsoft has now captured more than one-third of Internet searches. Microsoft’s transformation from a company that sells boxed software to one that sells services in the cloud is well underway. (Emphasis added).

With such numbers, it’s difficult to see how rivals are being foreclosed from reaching consumers in any meaningful way.

Meanwhile, the benefits to consumers are obvious: Google is directly answering questions for consumers rather than giving them a set of possible links to click through and further search. In some cases its results present entirely new and valuable forms of information (e.g., search trends and structured data); in others they serve to hone searches by suggesting further queries, or to help users determine which organic results (including those of its competitors) may be most useful. And, of course, consumers aren’t forced to endure these innovations if they don’t find them useful, as they can quickly switch to other providers.  

Nostalgia makes for bad regulatory policy

Google is not the unstoppable monopolist of the EU competition regulators’ imagining. Rather, it is a continual innovator, forced to adapt to shifting consumer demand, changing technology, and competitive industry dynamics. And, instead of trying to hamstring Google, if they are to survive, Google’s competitors (and complainants) must innovate as well.

Dominance in technology markets — especially online — has always been ephemeral. Once upon a time, MySpace, AOL, and Yahoo were the dominant Internet platforms. Kodak, once practically synonymous with “instant camera,” let the digital revolution pass it by. The invincible Sony Walkman was upended by mp3s and the iPod. Staid, keyboard-operated Blackberries and Nokias simply couldn’t compete with app-driven, graphical platforms from Apple and Samsung. Even today, startups like Snapchat, Slack, and Spotify gain massive scale and upend entire industries with innovative new technology that can leave less-nimble incumbents in the dustbin of tech history.

Put differently, companies that innovate are able to thrive, while those that remain dependent on yesterday’s technology and outdated business models usually fail — and deservedly so. It should never be up to regulators to pick winners and losers in a highly dynamic and competitive market, particularly if doing so constrains the market’s very dynamism. As Alfonso Lamadrid has pointed out:

It is companies and not competition enforcers which will strive or fail in the adoption of their business models, and it is therefore companies and not competition enforcers who are to decide on what business models to use. Some will prove successful and others will not; some companies will thrive and some will disappear, but with experimentation with business models, success and failure are and have always been part of the game.

In other words, we should not forget that competition law is, or should be, business-model agnostic, and that regulators are – like anyone else – far from omniscient.

Like every other technology company before them, Google and its competitors must be willing and able to adapt in order to keep up with evolving markets — just as for Lewis Carroll’s Red Queen, “it takes all the running you can do, to keep in the same place.” Google confronts a near-constantly evolving marketplace and fierce competition from unanticipated quarters; companies that build their businesses around Google face a near-constantly evolving Google. In the face of such relentless market dynamism, neither consumers nor firms are well served by regulatory policy rooted in nostalgia.  

Please Join Us For A Conference On Intellectual Property Law


Keynote Speaker: Dean Kamen

October 6-7, 2016

Antonin Scalia Law School
George Mason University
Arlington, Virginia


**9 Hours CLE**

Brand drug manufacturers are no strangers to antitrust accusations when it comes to their complicated relationship with generic competitors — most obviously with respect to reverse payment settlements. But the massive and massively complex regulatory scheme governing drugs has provided other opportunities for regulatory legerdemain with potentially anticompetitive effect, as well.

In particular, some FTC Commissioners have raised concerns that brand drug companies have been taking advantage of an FDA drug safety program — the Risk Evaluation and Mitigation Strategies program, or “REMS” — to delay or prevent generic entry.

Drugs subject to a REMS restricted distribution program are difficult to obtain through market channels and not otherwise readily available, even for would-be generic manufacturers that need samples in order to perform the tests required to receive FDA approval to market their products. REMS allows (requires, in fact) brand manufacturers to restrict the distribution of certain drugs that present safety or abuse risks, creating an opportunity for branded drug manufacturers to take advantage of imprecise regulatory requirements by inappropriately limiting access by generic manufacturers.

The FTC has not (yet) brought an enforcement action, but it has opened several investigations, and filed an amicus brief in a private-party litigation. Generic drug companies have filed several antitrust claims against branded drug companies and raised concerns with the FDA.

The problem, however, is that even if these companies are using REMS to delay generics, such a practice makes for a terrible antitrust case. Not only does the existence of a regulatory scheme arguably set Trinko squarely in the way of a successful antitrust case, but the sort of refusal to deal claims at issue here (as in Trinko) are rightly difficult to win because, as the DOJ’s Section 2 Report notes, “there likely are few circumstances where forced sharing would help consumers in the long run.”

But just because there isn’t a viable antitrust case doesn’t mean there isn’t still a competition problem. In this case, however, it’s a problem of regulatory failure. Companies rationally take advantage of poorly written federal laws and regulations in order to tilt the market to their own advantage. It’s no less problematic for the market, but its solution is much more straightforward, if politically more difficult.

Thus it’s heartening to see that Senator Mike Lee (R-UT), along with three of his colleagues (Patrick Leahy (D-VT), Chuck Grassley (R-IA), and Amy Klobuchar (D-MN)), has proposed a novel but efficient way to correct these bureaucracy-generated distortions in the pharmaceutical market without resorting to the “blunt instrument” of antitrust law. As the bill notes:

While the antitrust laws may address actions by license holders who impede the prompt negotiation and development on commercially reasonable terms of a single, shared system of elements to assure safe use, a more tailored legal pathway would help ensure that license holders negotiate such agreements in good faith and in a timely manner, facilitating competition in the marketplace for drugs and biological products.

The legislative solution put forward by the Creating and Restoring Equal Access to Equivalent Samples (CREATES) Act of 2016 targets the right culprit: the poor regulatory drafting that permits possibly anticompetitive conduct to take place. Moreover, the bill refrains from creating a per se rule, instead implementing several features that should still enable brand manufacturers to legitimately restrict access to drug samples when appropriate.

In essence, Senator Lee’s bill introduces a third party (in this case, the Secretary of Health and Human Services) who is capable of determining whether an eligible generic manufacturer is able to comply with REMS restrictions — thus bypassing any bias on the part of the brand manufacturer. Where the Secretary determines that a generic firm meets the REMS requirements, the bill also creates a narrow cause of action for this narrow class of plaintiffs, allowing suits against certain brand manufacturers who — despite the prohibition on using REMS to delay generics — nevertheless misuse the process to delay competitive entry.

Background on REMS

The REMS program was introduced as part of the Food and Drug Administration Amendments Act of 2007 (FDAAA). Following the withdrawal of Vioxx, an arthritis pain reliever, from the market because of a post-approval linkage of the drug to heart attacks, the FDA was under considerable fire, and there was a serious risk that fewer and fewer net beneficial drugs would be approved. The REMS program was introduced by Congress as a mechanism to ensure that society could reap the benefits from particularly risky drugs and biologics — rather than the FDA preventing them from entering the market at all. It accomplishes this by ensuring (among other things) that brands and generics adopt appropriate safety protocols for distribution and use of drugs — particularly when a drug has the potential to cause serious side effects, or has an unusually high abuse profile.

The FDA-determined REMS protocols can range from the simple (e.g., requiring a medication guide or a package insert about potential risks) to the more burdensome (including restrictions on a drug’s sale and distribution, or what the FDA calls “Elements to Assure Safe Use” (“ETASU”)). Most relevant here, the REMS process seems to allow brands considerable leeway to determine whether generic manufacturers are compliant or able to comply with ETASUs. Given this discretion, it is no surprise that brand manufacturers may be tempted to block competition by citing “safety concerns.”

Although the FDA specifically forbids the use of REMS to block lower-cost, generic alternatives from entering the market (of course), almost immediately following the law’s enactment, certain less-scrupulous branded pharmaceutical companies began using REMS for just that purpose (also, of course).

REMS abuse

To enter into pharmaceutical markets that no longer have any underlying IP protections, manufacturers must submit to the FDA an Abbreviated New Drug Application (ANDA) for a generic, or an Abbreviated Biologic License Application (ABLA) for a biosimilar, of the brand drug. The purpose is to prove to the FDA that the competing product is as safe and effective as the branded reference product. In order to perform the testing sufficient to prove efficacy and safety, generic and biosimilar drug manufacturers must acquire a sample (many samples, in fact) of the reference product they are trying to replicate.

For the narrow class of dangerous or highly abused drugs, generic manufacturers are forced to comply with any REMS restrictions placed upon the brand manufacturer — even when the terms require the brand manufacturer to tightly control the distribution of its product.

And therein lies the problem. Because the brand manufacturer controls access to its products, it can refuse to provide the needed samples, using REMS as an excuse. It may, of course, be true in certain cases that a brand manufacturer is justified in refusing to distribute samples of its product; some would-be generic manufacturers certainly may not meet the requisite standards for safety and security.

But in practice it turns out that most of the (known) examples of brands refusing to provide samples happen across the board — they preclude essentially all generic competition, not just the few firms that might have insufficient safeguards. It’s extremely difficult to justify such refusals on the basis of a generic manufacturer’s suitability when all would-be generic competitors are denied access, including well-established, high-quality manufacturers.

But, for a few brand manufacturers, at least, that seems to be how the REMS program is implemented. Thus, for example, Jon Haas, director of patient access at Turing Pharmaceuticals, referred to the practice of denying generics samples this way:

Most likely I would block that purchase… We spent a lot of money for this drug. We would like to do our best to avoid generic competition. It’s inevitable. They seem to figure out a way [to make generics], no matter what. But I’m certainly not going to make it easier for them. We’re spending millions and millions in research to find a better Daraprim, if you will.

As currently drafted, the REMS program gives branded manufacturers the ability to limit competition by stringing along negotiations for product samples for months, if not years. Although access to a few samples for testing is seemingly such a small, trivial thing, the ability to block this access allows a brand manufacturer to limit competition (at least from bioequivalent and generic drugs; obviously competition between competing branded drugs remains).

And even if a generic competitor manages to get ahold of samples, the law creates an additional wrinkle by imposing a requirement that brand and generic manufacturers enter into a single shared REMS plan for bioequivalent and generic drugs. But negotiating the particulars of the single, shared program can drag on for years. Consequently, even when a generic manufacturer has received the necessary samples, performed the requisite testing, and been approved by the FDA to sell a competing drug, it still may effectively be barred from entering the marketplace because of REMS.

The number of drugs covered by REMS is small: fewer than 100 in a universe of several thousand FDA-approved drugs. And the number of these alleged to be subject to abuse is much smaller still. Nonetheless, abuse of this regulation by certain brand manufacturers has likely limited competition and increased prices.

Antitrust is not the answer

Whether the complex, underlying regulatory scheme that allocates the relative rights of brands and generics — and that balances safety against access — gets the balance correct or not is an open question, to be sure. But given the regulatory framework we have and the perceived need for some sort of safety controls around access to samples and for shared REMS plans, the law should at least work to do what it intends, without creating an opportunity for harmful manipulation. Yet it appears that the ambiguity of the current law has allowed some brand manufacturers to exploit these safety protections to limit competition.

As noted above, some are quite keen to make this an antitrust issue. But, as also noted, antitrust is a poor fit for handling such abuses.

First, antitrust law has an uneasy relationship with other regulatory schemes. Not least because of Trinko, it is a tough case to make that brand manufacturers are violating antitrust laws when they rely upon legal obligations under a safety program that is essentially designed to limit generic entry on safety grounds. The issue is all the more properly removed from the realm of antitrust enforcement given that the problem is actually one of regulatory failure, not market failure.

Second, antitrust law doesn’t impose a duty to deal with rivals except in very limited circumstances. In Trinko, for example, the Court rejected the invitation to extend a duty to deal to situations where an existing, voluntary economic relationship wasn’t terminated. By definition, this is unlikely to be the case here, where the alleged refusal to deal is what prevents the generic from entering the market in the first place. The logic behind Trinko (and a host of other cases that have limited competitors’ obligations to assist their rivals) was to restrict duty-to-deal cases to those rare circumstances where a refusal reliably leads to long-term competitive harm — not where it amounts to a perfectly legitimate effort to compete without giving rivals a leg-up.

But antitrust is such a powerful tool and such a flexible “catch-all” regulation, that there are always efforts to thwart reasonable limits on its use. As several of us at TOTM have written about at length in the past, former FTC Commissioner Rosch and former FTC Chairman Leibowitz were vocal proponents of using Section 5 of the FTC Act to circumvent sensible judicial limits on making out and winning antitrust claims, arguing that the limits were meant only for private plaintiffs — not (implicitly infallible) government enforcers. Although no one at the FTC has yet (publicly) suggested bringing a REMS case as a standalone Section 5 case, such a case would be consistent with the sorts of theories that animated past standalone Section 5 cases.

Again, this approach serves as an end-run around the reasonable judicial constraints that evolved as a result of judges actually examining the facts of individual cases over time, and is a misguided way of dealing with what is, after all, fundamentally a regulatory design problem.


Senator Lee’s bill, on the other hand, aims to solve the problem with a more straightforward approach by improving the existing regulatory mechanism and by adding a limited judicial remedy to incentivize compliance under the amended regulatory scheme. In summary:

  • The bill creates a cause of action for a refusal to deal only where plaintiff can prove, by a preponderance of the evidence, that certain well-defined conditions are met.
  • For samples, if a drug is not covered by a REMS, or if the generic manufacturer is specifically authorized, then the generic can sue if it doesn’t receive sufficient quantities of samples on commercially reasonable terms. This is not a per se offense subject to outsized antitrust damages. Instead, the remedy is a limited injunction ensuring the sale of samples on commercially reasonable terms, reasonable attorneys’ fees, and a monetary fine limited to revenue earned from sale of the drug during the refusal period.
  • The bill also gives a brand manufacturer an affirmative defense if it can prove by a preponderance of the evidence that, regardless of its own refusal to supply them, samples were nevertheless available elsewhere on commercially reasonable terms, or where the brand manufacturer is unable to supply the samples because it does not actually produce or market the drug.
  • In order to deal with the REMS process problems, the bill creates similar rights with similar limitations when the license holders and generics cannot come to an agreement about a shared REMS on commercially reasonable terms within 120 days of first contact by an eligible developer.
  • The bill also explicitly limits brand manufacturers’ liability for claims “arising out of the failure of an [eligible generic manufacturer] to follow adequate safeguards,” thus removing one of the (perfectly legitimate) objections to the bill pressed by brand manufacturers.

The primary remedy is limited, injunctive relief to end the delay. And brands are protected from frivolous litigation by an affirmative defense under which they need only show that the product is available for purchase on reasonable terms elsewhere. Damages are similarly limited and are awarded only if a court finds that the brand manufacturer lacked a legitimate business justification for its conduct (which, under the drug safety regime, means essentially a reasonable belief that its own REMS plan would be violated by dealing with the generic entrant). And monetary damages do not include punitive damages.

Finally, the proposed bill completely avoids the question of whether antitrust laws are applicable, leaving that possibility open to determination by courts — as is appropriate. Moreover, by establishing even more clearly the comprehensive regulatory regime governing potential generic entrants’ access to dangerous drugs, the bill would, given the holding in Trinko, probably make application of antitrust laws here considerably less likely.

Ultimately Senator Lee’s bill is a well-thought-out and targeted fix to an imperfect regulation that seems to be facilitating anticompetitive conduct by a few bad actors. It does so without trampling on the courts’ well-established antitrust jurisprudence, and without imposing excessive cost or risk on the majority of brand manufacturers that behave perfectly appropriately under the law.

In an effort to control drug spending, several states are considering initiatives that will impose new price controls on prescription drugs. Ballot measures under consideration in California and Ohio will require drug companies to sell drugs under various state programs at a mandated discount. And legislators in Massachusetts and Pennsylvania have drafted bills that would create new government commissions to regulate the price of drugs. These state initiatives have followed proposals by presidential nominees to enact new price controls to address the high costs of pharmaceuticals.

As I explain in a new study, further price controls are a bad idea for several reasons.

First, as I discussed in a previous post, several government programs, such as Medicaid, the 340B Program, the Department of Defense and Veterans Affairs drug programs, and spending in the coverage gap of Medicare Part D, already impose price controls. Under these programs, required rebates are typically calculated as set percentages off of a drug company’s average drug price. But this approach gives drug companies an incentive to raise prices; a required percentage rebate off of a higher average price can serve to offset the mandated price control.

Second, over 40 percent of drugs sold in the U.S. are sold under government programs that mandate price controls. With such a large share of their drugs sold at significant discounts, drug companies have the incentive to charge even higher prices to other non-covered patients to offset the discounts. Indeed, numerous studies and government analyses have concluded that required discounts under Medicaid and Medicare have resulted in increased prices for other consumers as manufacturers seek to offset revenue lost under price controls.

Third, evidence suggests that price controls contribute to significant drug shortages: at a below-market price, the demand for drugs exceeds the amount of drugs that manufacturers are willing or able to sell.

Fourth, price controls hinder innovation in the pharmaceutical industry. Brand drug companies incur an average of $2.6 billion in costs to bring each new drug to market with FDA approval. They must offset these significant costs with revenues earned during the patent period; within 3 months after patent expiry, generic competitors will have already captured over 70 percent of the brand drugs’ market share and significantly eroded their profits. But price controls imposed on drugs under patent increase the risk that drug companies will not earn the profits they need to offset their development costs (only 20% of marketed brand drugs ever earn enough sales to cover their development cost). The result will be less R&D spending and less innovation. Indeed, a substantial body of empirical literature establishes that pharmaceutical firms’ profitability is linked to their research and development efforts and innovation.

Instead of imposing price controls, the government should increase drug competition in order to reduce drug spending without these negative consequences. Increased drug competition will expand product offerings, giving consumers more choice in the drugs they take. It will also lower prices and spur innovation as suppliers compete to attain or protect valuable market share from rivals.

First, the FDA should reduce the backlog of generic drugs awaiting approval. The single most important factor in controlling drug spending in recent decades has been the dramatic increase in generic drug usage; generic drugs have saved consumers $1.68 trillion over the past decade. But the degree to which generics reduce drug prices depends on the number of generic competitors in the market; the more competitors, the more price competition and downward pressure on prices. Unfortunately, a backlog of generic drug approvals at the FDA has restricted generic competition in many important market segments. There are currently over 3,500 generic applications pending approval; fast-tracking these FDA approvals will provide consumers with many new lower-priced drug options.

Second, regulators should expedite the approval and acceptance of biosimilars—the generic counterparts to high-priced biologic drugs. Biologic drugs are different from traditional medications because they are based on living organisms and, as a result, are far more complex and expensive to develop. By 2013, spending on biologic drugs comprised a quarter of all drug spending in the U.S., and their share of drug spending is expected to increase significantly over the next decade. Unfortunately, the average cost of a biologic drug is 22 times greater than that of a traditional drug, making biologics prohibitively expensive for many consumers.

Fortunately, Congress has recognized the need for cheaper, “generic” substitutes for biologic drugs—or biosimilars. As part of the Affordable Care Act, Congress created a biosimilars approval pathway that would enable these cheaper biologic drugs to obtain FDA approval and reach patients more quickly. Nevertheless, the FDA has approved only one biosimilar for use in the U.S. despite several pending biosimilar applications. The agency has also yet to provide any meaningful guidance as to what standards it will employ in determining whether a biosimilar is interchangeable with a biologic. Burdensome requirements for interchangeability increase the difficulty and cost of biosimilar approval and limit the ease of biosimilar substitution at pharmacies.

Expediting the approval of biosimilars will increase competition in the market for biologic drugs, reducing prices and allowing more patients access to these life-saving and life-enhancing treatments. Estimates suggest that a biosimilar approval pathway at the FDA will save U.S. consumers between $44 billion and $250 billion over the next decade.

The recent surge in drug spending must be addressed to ensure that patients can continue to afford life-saving and life-enhancing medications. However, proposals calling for new price controls are the wrong approach. While superficially appealing, price controls may have unintended consequences—less innovation, drug shortages, and higher prices for some consumers—that could harm consumers rather than helping them. In contrast, promoting competition will lower pharmaceutical prices and drug spending without these deleterious effects.




As ICLE argued in its amicus brief, the Second Circuit’s ruling in United States v. Apple Inc. is in direct conflict with the Supreme Court’s 2007 Leegin decision, and creates a circuit split with the Third Circuit based on that court’s Toledo Mack ruling. Moreover, the negative consequences of the court’s ruling will be particularly acute for modern, high-technology sectors of the economy, where entrepreneurs planning to deploy new business models will now face exactly the sort of artificial deterrents that the Court condemned in Trinko:

Mistaken inferences and the resulting false condemnations are especially costly, because they chill the very conduct the antitrust laws are designed to protect.

Absent review by the Supreme Court to correct the Second Circuit’s error, the result will be less-vigorous competition and a reduction in consumer welfare. The Court should grant certiorari.

The Second Circuit committed a number of important errors in its ruling.

First, as the Supreme Court held in Leegin, condemnation under the per se rule is appropriate

only for conduct that would always or almost always tend to restrict competition… [and] only after courts have had considerable experience with the type of restraint at issue.

Neither is true in this case. The use of MFNs in Apple’s contracts with the publishers and its adoption of the so-called “agency model” for e-book pricing have never been reviewed by the courts in a setting like this one, let alone found to “always or almost always tend to restrict competition.” There is no support in the case law or economic literature for the proposition that agency models or MFNs used to facilitate entry by new competitors in platform markets like this one are anticompetitive.

Second, the court of appeals emphasized that in some cases e-book prices increased after Apple’s entry, and it viewed that fact as strong support for application of the per se rule. But the Court in Leegin made clear that the per se rule is inappropriate where, as here, “prices can be increased in the course of promoting procompetitive effects.”  

What the Second Circuit missed is that competition occurs on many planes other than price; higher prices do not necessarily suggest decreased competition or anticompetitive effects. As Josh Wright points out:

[T]he multi-dimensional nature of competition implies that antitrust analysis seeking to maximize consumer or total welfare must inevitably calculate welfare tradeoffs when innovation and price effects run in opposite directions.

Higher prices may accompany welfare-enhancing “competition on the merits,” resulting in greater investment in product quality, reputation, innovation, or distribution mechanisms.

While the court acknowledged that “[n]o court can presume to know the proper price of an ebook,” its analysis nevertheless rested on the presumption that Amazon’s prices before Apple’s entry were competitive. The record, however, offered no support for that presumption, and thus no support for the inference that post-entry price increases were anticompetitive.

In fact, as Alan Meese has pointed out, a restraint might increase prices precisely because it overcomes a market failure:

[P]roof that a restraint alters price or output when compared to the status quo ante is at least equally consistent with an alternative explanation, namely, that the agreement under scrutiny corrects a market failure and does not involve the exercise or creation of market power. Because such failures can result in prices that are below the optimum, or output that is above it, contracts that correct or attenuate market failure will often increase prices or reduce output when compared to the status quo ante. As a result, proof that such a restraint alters price or other terms of trade is at least equally consistent with a procompetitive explanation, and thus cannot give rise to a prima facie case under settled antitrust doctrine.

Before Apple’s entry, Amazon controlled 90% of the e-books market, and the publishers had for years been unable to muster sufficient bargaining power to renegotiate the terms of their contracts with Amazon. At the same time, Amazon’s pricing strategies as a nascent platform developer in a burgeoning market (that it was, in practical effect, trying to create) likely did not always produce prices that would be optimal under evolving market conditions as the market matured. The fact that prices may have increased following the alleged anticompetitive conduct cannot support an inference that the conduct was anticompetitive.

Third, the Second Circuit also made a mistake in dismissing Apple’s defenses. The court asserted that

this defense — that higher prices enable more competitors to enter a market — is no justification for a horizontal price‐fixing conspiracy.

But the court is incorrect. As Bill Kolasky points out in his post, it is well-accepted that otherwise-illegal agreements that are ancillary to a procompetitive transaction should be evaluated under the rule of reason.

It was not that Apple couldn’t enter unless Amazon’s prices (and its own) were increased. Rather, the contention made by Apple was that it could not enter unless it was able to attract a critical mass of publishers to its platform – a task which required some sharing of information among the publishers – and unless it was able to ensure that Amazon would not artificially lower its prices to such an extent that it would prevent Apple from attracting a critical mass of readers to its platform. The MFN and the agency model were thus ancillary restraints that facilitated the transactions between Apple and the publishers and between Apple and iPad purchasers. In this regard they are appropriately judged under the rule of reason and, under the rule of reason, offer a valid procompetitive justification for the restraints.

And it was the fact of Apple’s entry, not the use of vertical restraints in its contracts, that enabled the publishers to wield the bargaining power sufficient to move Amazon to the agency model. The court itself noted that the introduction of the iPad and iBookstore “gave publishers more leverage to negotiate for alternative sales models or different pricing.” And as Ben Klein noted at trial,

Apple’s entry probably gave the publishers an increased ability to threaten [Amazon sufficiently that it accepted the agency model]…. The MFN [made] a trivial change in the publishers’ incentives…. The big change that occurs is the change on the other side of the bargaining situation after Apple comes in where Amazon now cannot just tell them no.

Fourth, the purpose of applying the per se rule is to root out activities that always or almost always harm competition. Although it’s possible that a horizontal agreement that facilitates entry and increases competition could be subject to the per se rule, in this case its application was inappropriate. The novelty of Apple’s arrangement with the publishers, coupled with the weakness of proof of any sort of actual price fixing, fails to meet even a minimal threshold that would require application of the per se rule.

Not all horizontal arrangements are per se illegal. If an arrangement is relatively novel, facilitates entry, and is patently different from naked price fixing, it should be reviewed under the rule of reason. See BMI. All of those conditions are met here.

The conduct of the publishers – distinct from their agreements with Apple – to find some manner of changing their contracts with Amazon is not itself price fixing, either. The prices themselves would be set only subsequent to whatever new contracts were adopted. At worst, the conduct of the publishers in working toward new contracts with Amazon can be characterized as a facilitating practice.

But even then, the precedent of the Court counsels against applying the per se rule to facilitating practices such as the mere dissemination of price information or, as in this case, information regarding the parties’ preferred, bilateral, contractual relationships. As the Second Circuit itself once held, following the Supreme Court,  

[the] exchange of information is not illegal per se, but can be found unlawful under a rule of reason analysis.

In other words, even the behavior of the publishers should be analyzed under a rule of reason – and Apple’s conduct in facilitating that behavior cannot be imbued with complicity in a price-fixing scheme that may not have existed at all.

Fifth, in order for conduct to “eliminate price competition,” there must be price competition to begin with. But as the district court itself noted, the publishers do not compete on price. This point is oft-overlooked in discussions of the case. It is perhaps possible to say that the contract terms at issue and the publishers’ pressure on Amazon affected price competition between Apple and Amazon – but even then it cannot be said to have reduced competition, because, absent Apple’s entry, there was no competition at all between Apple and Amazon.

It’s true that, if all Apple’s entry did was to transfer identical e-book sales from Amazon to Apple, at higher prices and therefore lower output, it might be difficult to argue that Apple’s entry was procompetitive. But the myopic focus on e-book titles without consideration of product differentiation is mistaken, as well.

The relevant competition here is between Apple and Amazon at the platform level. As explained above, it is misleading to look solely at prices in evaluating the market’s competitiveness. Provided that switching costs are low enough and information about the platforms is available to consumers, consumer welfare may have been enhanced by competition between the platforms on a range of non-price dimensions, including, for example: the Apple iBookstore’s distinctive design, Apple’s proprietary file format, features on Apple’s iPad that were unavailable on Kindle Readers, Apple’s use of a range of marketing incentives unavailable to Amazon, and Apple’s algorithmic matching between its data and consumers’ e-book purchases.

While it’s difficult to disentangle Apple’s entry from other determinants of consumers’ demand for e-books, and even harder to establish with certainty the “but-for” world, it is nonetheless telling that the e-book market has expanded significantly since Apple’s entry, and that purchases of both iPads and Kindles have increased, as well.

There is, in other words, no clear evidence that consumers viewed the two products as perfect substitutes, and thus there is no evidence that Apple’s entry merely caused a non-welfare-enhancing substitution from Amazon to Apple. At minimum, there is no basis for treating the contract terms that facilitated Apple’s entry under a per se standard.


The point, in sum, is that there is in fact substantial evidence that Apple's entry was procompetitive, that there was no price-fixing scheme of which Apple was a part, and absolutely no evidence that the vertical restraints at issue in the case were the sort that should presumptively give rise to liability. Not only was application of the per se rule inappropriate, but, to answer Richard Epstein, there is strong evidence that Apple should win under a rule of reason analysis, as well.

On balance the Second Circuit was right to apply the antitrust laws to Apple.

Right now the Supreme Court has before it a petition for certiorari, brought by Apple, Inc., which asks the Court to reverse the decision of the Second Circuit. That decision found per se illegality under the Sherman Act in Apple's efforts to promote cooperation among a group of six major publishers, who desperately sought to break Amazon's dominant position in the ebook market. At that time, Amazon employed a wholesale model for ebooks under which it bought them for a fixed price but could sell them for whatever price it wanted, including below-cost sales of popular books treated as loss leaders. These sales particularly frustrated publishers because of the extra pressure they placed on the sale of hardcover and paperback books. That problem disappeared under the agency model that Apple pioneered. Now the publishers would set the prices for the sale of their own volumes, and then pay Apple a fixed commission for its services in selling the ebooks.

This agency model gives the publishers pricing freedom, but it would fall apart at the seams if Amazon could continue to sell ebooks under the wholesale model at prices below those the publishers set for ebook sales through Apple. To deal with this complication, Apple insisted that all publishers selling to it through the agency model require Amazon to purchase ebooks on the same terms. Apple also insisted on a most-favored-nation clause so that it would not find itself undercut either by Amazon or by a new entrant that also used the agency model.

There is little question that Apple would be in fine shape if it had proposed this model to each of the publishers separately, for then its action would be a form of ordinary competition of the sort permitted to every new entrant. Competition often takes place in terms of price, where the terms of the contracts are standard between competitors. That common state of affairs makes it easier for customers to compare prices with each other, and—sigh—for competitors to collude with each other. But without some evidence of collusion, the price parallelism should be regarded as per se legal, as it is routinely today. The decision to adopt a new form of pricing makes cross-product comparisons more difficult, but, by the same token, it offers a wider range of choice to customers. Again there is nothing in the antitrust laws that does, or should, prevent nonprice competition, including a radical shift in business model.

As it happened, once Apple imposed its model, the older wholesale model gave way, because it could not survive anywhere once the agency model was introduced. In the short run, this tectonic market shift resulted in an increase in the price of ebooks and a corresponding decline in output, which is just what one would expect when prices are raised. It is therefore difficult to defend the case on the ground that it produces, in either the long or the short run, lower prices that benefit consumers. But it is equally difficult to find in the abstract that higher prices are themselves the hallmark of an antitrust violation.

At root the main considerations should be structural. What makes the ebooks case so hard is that it arises at the cross-currents of two different antitrust approaches. The general view is that horizontal arrangements are per se illegal, which means that it is necessary to show some very specific justifications to defeat a charge under Section 1 of the Sherman Act. No such arguments — like the need to share information in order to operate in a network industry — present themselves here. Yet by the same token, the general view on vertical arrangements is that they offer efficiencies by reducing the bottlenecks that could be created if players at different levels of the distribution system seek to hold out for a larger share of the gain, thereby creating a serious double marginalization problem. In these cases, the modern view is that vertical arrangements are in general governed by rule of reason considerations. The question now is what happens where there is an inevitable confluence of the vertical and horizontal arrangements.

In preparing for this short column, I read the petition for certiorari by Apple, and the two separate briefs prepared in support of Apple by a set of law professors and economists, respectively. Both urge that this case be evaluated under a rule of reason, not the per se rule that applies to horizontal price-fixing. Both briefs are excellently done. But I confess that my current view is that they miss the central difficulty in this case. Any argument for a rule of reason has to be able to identify in advance the gains and losses that justify some kind of balancing act. That standard can be met in merger cases, where under the standard Williamson model one is asked to compare the social gains from lower costs with the social losses from reduced competition. These are not decisions that can be made well within the judicial context, so a separate administrative procedure is set up under the premerger notification program established under the 1976 Hart-Scott-Rodino Act. The administrative setting makes it possible to collect the needed information and to decide whether to allow the merger to go through, and if so, subject to what conditions on matters such as partial divestiture to avoid excessive concentration in relevant submarkets. The task is always messy, but the rule of thumb that five-to-four mergers are generally fine and three-to-two mergers are not shows that it is possible to home in on an answer in most cases, but not all.

But what is troublesome in Apple is that, though the briefs are very persuasive in arguing that mixed vertical and horizontal arrangements might fit better into a rule of reason framework, they do not indicate what metric the parties should use to determine, once the case is remanded, how the rule of reason plays out. That is to say, there is no clear theory of what should be traded off against what. To put the point another way, none of these briefs argues that the transaction in question should be regarded as per se legal, so my fear is this: all the relevant information is already made available in the case, so that, on remand, the only task left to be done is to decide whether Apple should be protected because its own conduct disrupts a near-monopoly position that is held by Amazon. But that argument is at least a little dicey given that no one could argue that Amazon has obtained its dominant position by any unlawful means, which undercuts (but does not destroy) the argument that cutting Amazon down to size is necessarily a good thing. It might not be if the willingness to allow a collusive collateral attack orchestrated by Apple would reduce ex ante the gains from innovation that Amazon surely created when it pioneered its own wholesale ebook model. Facilitation is often regarded as criminal and tortious conduct in other areas. So at the moment, and subject to revision, my view is that the Second Circuit got it right. The vertical assist to the horizontal arrangement increased the odds of the horizontal deal that was illegal, and probably shares in that taint.

In making this judgment I think of the decision in Fashion Originators' Guild of America, Inc. v. FTC (FOGA), which addressed the question whether the defendants could resist a cease-and-desist order by the FTC, which had attacked as per se illegal a decision of manufacturers whose comparative advantage was to act as sellers of original and distinctive designs that at the time received neither patent nor copyright protection. The defendants entered into a limited form of collusion whereby they agreed not to sell to any retailer who carried a knock-off of their creations. They did not extend their cooperative activity into any other area. In essence, they sought only to protect what they regarded as their intellectual property. Justice Black held that the case did not fall outside the per se Section 1 prohibition, even though it could easily have been argued that these decisions were undertaken to protect the labor that these individuals had placed in their creations. In addition, the opinion concluded with this passage:

even if copying were an acknowledged tort under the law of every state, that situation would not justify petitioners in combining together to regulate and restrain interstate commerce in violation of federal law. And for these same reasons, the principles declared in International News Service v. Associated Press, 248 U.S. 215 (1918), cannot serve to legalize petitioners' unlawful combination.

I think that the first sentence here is wrong if self-help is cheaper and more reliable in dealing with the threat. But Justice Black flatly rejected the INS decision, which in my view represents a highly sophisticated effort to develop a tort of unfair competition between direct competitors. It reaches the correct result by defining the protected right narrowly—publication for one news cycle only. That move guards against misappropriation when it matters most, but by design prevents the creation of any long-term monopoly on anything like the copyright model. The limited and proportionate response in FOGA, however, did not cut any ice.

In addition, the defendants in FOGA had a respectable case on the merits that some protection of these design elements should be provided under either the patent or copyright laws, precisely because the appropriation is so difficult to guard against by any other means. Probably, the statutory length of such protection should not be as long as that offered by standard patents and copyrights, but that matter could be settled by statute. Accordingly, if antitrust law turns a blind eye to these justifications, is the nonspecific concern raised, but not spelled out, in Apple any stronger?

Finally, what should be the bottom line? It is worth noting that in FOGA the government was seeking only an injunction against the conduct, without asking for any damages. In Apple, the co-plaintiff states are seeking damage awards. Perhaps the simplest solution is to allow the injunction and to deny the damages, in part because of the clear complexity of the underlying legal issues. In this case, King Solomon might be wise to split the baby.

Politicians have recently called for price controls to address the high costs of pharmaceuticals. Price controls are government-mandated limits on prices, or government-required discounts on prices. On the campaign trail, Hillary Clinton has called for price controls for lower-income Medicare patients while Donald Trump has recently joined Clinton, Bernie Sanders, and President Obama in calling for more government intervention in the Medicare Part D program. Before embarking upon additional price controls for the drug industry, policymakers and presidential candidates would do well to understand the impacts and problems arising from existing controls.

Unbeknownst to many, a vast array of price controls is already in place in the pharmaceutical market. Over 40 percent of outpatient drug spending flows through public programs that use price controls. In order to sell drugs to consumers covered by these programs, manufacturers must agree to offer certain rebates or discounts on drug prices. The calculations are generally based on the Average Manufacturer Price (AMP: the average price wholesalers pay manufacturers for drugs sold to retail pharmacies) or the Best Price (the lowest price at which the manufacturer offers the drug to any purchaser, including all rebates and discounts). The most significant public programs using some form of price control are described below.

  1. Medicaid

The Medicaid program provides health insurance for low-income and medically needy individuals. The legally-required rebate depends on the specific category of drug; for example, brand manufacturers are required to sell drugs for the lesser of 23.1% off AMP or the best price offered to any purchaser.
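The statutory floor just described is simple arithmetic. A minimal sketch, using hypothetical figures (the function name and the numbers are illustrative, not drawn from any statute or dataset):

```python
def medicaid_net_price(amp: float, best_price: float) -> float:
    """Illustrative only: the net price a brand manufacturer may charge
    Medicaid is the lesser of AMP less the 23.1% statutory rebate, or
    the best price the manufacturer offers any purchaser."""
    return min(amp * (1 - 0.231), best_price)

# Hypothetical drug: AMP of $100, best commercial price of $72.
# The best price ($72.00) is below AMP less 23.1% ($76.90), so it binds.
print(round(medicaid_net_price(100.0, 72.0), 2))  # 72.0
```

Note how the two prongs interact: the 23.1 percent rebate only sets a ceiling; any deeper discount the manufacturer grants a private purchaser automatically flows through to Medicaid.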

The Affordable Care Act significantly expanded Medicaid eligibility so that in 2014, the program covered approximately 64.9 million individuals, or 20 percent of the U.S. population. State Medicaid data indicates that manufacturers paid an enormous sum — in excess of $16.7 billion — in Medicaid rebates in 2012.

  2. 340B Program

The “340B Program”, created by Congress in 1992, requires drug manufacturers to provide outpatient drugs at significantly reduced prices to 340B-eligible entities—entities that serve a high proportion of low-income or uninsured patients. As in Medicaid, the 340B discount must be at least 23.1 percent off AMP. However, the statutory formula calculates different discounts for different products and is estimated to produce discounts averaging 45 percent off average prices. Surprisingly, the formula can even yield a negative 340B selling price for a drug, in which case manufacturers are instructed to set the drug price at a penny.
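The penny-price oddity follows directly from the ceiling-price arithmetic. A simplified sketch, assuming (per the program's published methodology) that the ceiling is AMP minus the Medicaid unit rebate amount, floored at one cent; the function name is mine:

```python
def ceiling_price_340b(amp: float, unit_rebate: float) -> float:
    """Sketch of the 340B ceiling price: AMP minus the Medicaid unit
    rebate amount.  When the formula goes to zero or below, the
    'penny pricing' convention sets the price at $0.01."""
    price = amp - unit_rebate
    return price if price > 0 else 0.01

# A drug whose accumulated rebate exceeds its AMP sells for a penny.
print(ceiling_price_340b(10.0, 12.0))  # 0.01
```

Because the unit rebate amount can grow (for instance, through inflation penalties) while AMP does not, the formula can overtake the price entirely, which is how the penny price arises in practice.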

The Affordable Care Act broadened the definition of qualified buyers to include many additional types of hospitals. As a result, both the number of 340B-eligible hospitals and the money spent on 340B drugs tripled between 2005 and 2014. By 2014, there were over 14,000 hospitals and affiliated sites in the 340B program, representing about one-third of all U.S. hospitals.

The 340B program has a glaring flaw that punishes the pharmaceutical industry without any offsetting benefits for low-income patients. The 340B statute does NOT require that providers only dispense 340B drugs to needy patients. In what amounts to merely shifting profits from pharmaceutical companies to other health care providers, providers may also sell drugs purchased at the steep 340B discount to non-qualified patients and pocket the difference between the 340B discounted price and the reimbursement of the non-qualified patients’ private insurance companies. About half of the 340B entities generate significant revenues from private insurer reimbursements that exceed 340B prices.

  3. Departments of Defense and Veterans Affairs Drug Programs

In order to sell drugs through the Medicaid program, drug manufacturers must also provide drugs to four government agencies—the VA, Department of Defense, Public Health Service and Coast Guard—at statutorily-imposed discounts. The required discounted price is the lesser of 24% off AMP or the lowest price manufacturers charge their most-favored nonfederal customers under comparable terms. Because of additional contracts that generate pricing concessions from specific vendors, studies indicate that VA and DOD pricing for brand pharmaceuticals was approximately 41-42% of the average wholesale price.

  4. Medicare Part D

An optional Medicare prescription drug benefit (Medicare Part D) was enacted in 2003 and took effect in 2006, offering coverage to many of the nation’s retirees and disabled persons. Unlike Medicaid and the 340B program, there is no statutory rebate level for prescription drugs covered under the program. Instead, private Medicare Part D plans, acting on behalf of the Medicare program, negotiate prices with pharmaceutical manufacturers and may obtain price concessions in the form of rebates. Manufacturers are willing to offer significant rebates and discounts in order to provide drugs to the millions of covered participants. The rebates often amount to a 20-30 percent discount on brand medicines. CMS reported that manufacturers paid in excess of $10.3 billion in Part D rebates in 2012.

The Medicare Part D program does include direct price controls on drugs sold in the coverage gap. The coverage gap (or “donut hole”) is a spending range in which enrollees are responsible for a larger share of their total drug costs. For 2016, the coverage gap begins when the individual and the plan have spent $3,310 on covered drugs and ends when $7,515 has been spent. Medicare Part D requires brand drug manufacturers to offer 50 percent discounts on drugs sold during the coverage gap. These required discounts will cost drug manufacturers approximately $41 billion between 2012 and 2021.
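Using the 2016 thresholds quoted above, the gap logic can be sketched as follows (a simplified illustration that ignores deductibles, cost-sharing tiers, and the distinction between total and out-of-pocket spending):

```python
GAP_START = 3310.0  # 2016 initial coverage limit (from the text)
GAP_END = 7515.0    # 2016 catastrophic-coverage threshold (from the text)

def in_coverage_gap(total_spend: float) -> bool:
    """True while combined individual-plus-plan drug spending falls
    in the 'donut hole' between the two 2016 thresholds."""
    return GAP_START < total_spend <= GAP_END

def brand_discount(brand_price: float, total_spend: float) -> float:
    """The 50% manufacturer discount applies only inside the gap."""
    return 0.5 * brand_price if in_coverage_gap(total_spend) else 0.0

print(brand_discount(100.0, 5000.0))  # 50.0 -- inside the gap
print(brand_discount(100.0, 2000.0))  # 0.0  -- still in initial coverage
```

The point of the sketch is simply that the mandated discount switches on and off with a spending threshold, which is why its cost to manufacturers scales with how many enrollees pass through the gap each year.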

While existing price controls do produce lower prices for some consumers, they may also raise prices for others, and in the long term may drive up prices for all. Many of the required rebates under Medicaid, the 340B program, and the VA and DOD programs are based on drugs’ AMP. Calculating rebates from average drug prices gives manufacturers an incentive to charge higher prices to wholesalers and pharmacies in order to offset the discounts. Moreover, with at least 40 percent of drugs sold under price controls, and some programs even requiring drugs to be sold for a penny, manufacturers are forced to sell many drugs at significant discounts. This creates incentives to charge higher prices to other, non-covered patients to offset the discounts. Further price controls will only amplify these incentives and create inefficient market imbalances.