Archives For antitrust

On August 6, the Global Antitrust Institute (the GAI, a division of the Antonin Scalia Law School at George Mason University) submitted a filing (GAI filing or filing) in response to the Japan Fair Trade Commission’s (JFTC’s) consultation on reforms to the Japanese system of administrative surcharges assessed for competition law violations (see here for a link to the GAI’s filing).  The GAI’s outstanding filing was authored by GAI Director Koren Wong-Ervin and Professors Douglas Ginsburg, Joshua Wright, and Bruce Kobayashi of the Scalia Law School.

The GAI filing’s three sets of major recommendations, set forth in italics, are as follows:

(1)   Due Process

 While the filing recognizes that the process may vary depending on the jurisdiction, the filing strongly urges the JFTC to adopt the core features of a fair and transparent process, including:   

(a)        Legal representation for parties under investigation, allowing the participation of local and foreign counsel of the parties’ choosing;

(b)        Notifying the parties of the legal and factual bases of an investigation and sharing the evidence on which the agency relies, including any exculpatory evidence and excluding only confidential business information;

(c)        Direct and meaningful engagement between the parties and the agency’s investigative staff and decision-makers;

(d)        Allowing the parties to present their defense to the ultimate decision-makers; and

(e)        Ensuring checks and balances on agency decision-making, including meaningful access to independent courts.

(2)   Calculation of Surcharges

The filing agrees with the JFTC that Japan’s current inflexible system of surcharges is unlikely to accurately reflect the degree of economic harm caused by anticompetitive practices.  As a general matter, the filing recommends that, under Japan’s new surcharge system, surcharges be based on economic analysis of the harm caused by violations of Japan’s Antimonopoly Act, rather than on sales volume as a proxy for that harm.

In that light, the filing recommends more specifically that the JFTC limit punitive surcharges to matters in which:

(a)          the antitrust violation is clear (i.e., if considered at the time the conduct is undertaken, and based on existing laws, rules, and regulations, a reasonable party should expect the conduct at issue would likely be illegal) and is without any plausible efficiency justification;

(b)          it is feasible to articulate and calculate the harm caused by the violation;

(c)           the measure of harm calculated is the basis for any fines or penalties imposed; and

(d)          there are no alternative remedies that would adequately deter future violations of the law. 

In the alternative, and at the very least, the filing urges the JFTC to expand the circumstances under which it will not seek punitive surcharges to include two types of conduct that are widely recognized as having efficiency justifications:

  • unilateral conduct, such as refusals to deal and discriminatory dealing; and
  • vertical restraints, such as exclusive dealing, tying and bundling, and resale price maintenance.

(3)   Settlement Process

The filing recommends that the JFTC consider incorporating safeguards that prevent settlement provisions unrelated to the violation and limit the use of extended monitoring programs.  The filing notes that consent decrees and commitments extracted to settle a case too often end up imposing abusive remedies that undermine the welfare-enhancing goals of competition policy.  An agency’s ability to obtain in terrorem concessions reflects a party’s weighing of the costs and benefits of litigating versus the costs and benefits of acquiescing in the terms sought by the agency.  When firms settle merely to avoid the high relative costs of litigation and regulatory procedures, an agency may be able to extract more restrictive terms on firm behavior by entering into an agreement than by litigating its accusations in a court.  In addition, while settlements may be a more efficient use of scarce agency resources, the savings may come at the cost of potentially stunting the development of the common law arising through adjudication.

In sum, the latest filing maintains the GAI’s practice of employing law and economics analysis to recommend reforms in the imposition of competition law remedies (see here, here, and here for summaries of prior GAI filings in the same vein).  The GAI’s dispassionate analysis highlights principles of universal application – principles that may someday point the way toward greater economically sensible convergence among national antitrust remedial systems.

Background

In addition to reforming substantive antitrust doctrine, the Supreme Court in recent decades succeeded in curbing the unwarranted costs of antitrust litigation by erecting new procedural barriers to highly questionable antitrust suits.  It did this principally through three key “gatekeeper” decisions, Monsanto (1984), Matsushita (1986), and Twombly (2007).

Prior to those holdings, bare allegations in a complaint typically were sufficient to avoid dismissal.  Furthermore, summary judgment was very hard to obtain, given the Supreme Court’s pronouncement in Poller v. CBS (1962) that “summary procedures should be used sparingly in complex antitrust litigation.”  Thus, plaintiffs had a strong incentive to file dubious (if not meritless) antitrust suits, in the hope of coercing unwarranted settlements from defendants faced with the prospect of burdensome, extended antitrust litigation – litigation that could impose serious business reputational costs over time, in addition to direct and indirect litigation costs.

This all changed starting in 1984.  Monsanto required that a plaintiff show a “conscious commitment to a common scheme designed to achieve an unlawful objective” to support a Sherman Act Section 1 (Section 1) antitrust conspiracy allegation.  Building on Monsanto, Matsushita held that “conduct as consistent with permissible competition as with illegal conspiracy does not, standing alone, support an inference of antitrust conspiracy.”  In Twombly, the Supreme Court made it easier to succeed on a motion to dismiss a Section 1 complaint, holding that mere evidence of parallel conduct does not establish a conspiracy.  Rather, under Twombly, a plaintiff seeking relief under Section 1 must allege, at a minimum, the general contours of when an agreement was made and must support those allegations with a context that tends to make such an agreement plausible.  (The Twombly Court’s approval of motions to dismiss as a tool to rein in excessive antitrust litigation costs was implicit in its admonition not to “forget that proceeding to antitrust discovery can be expensive.”)

In sum, as Professor Herbert Hovenkamp has put it, “[t]he effects of Twombly and Matsushita has [sic] been a far-reaching shift in the way antitrust cases proceed, and today a likely majority are dismissed on the pleadings or summary judgment before going to trial.”

Visa v. Osborn

So far, so good.  Trial lawyers never rest, however, and old lessons sometimes need to be relearned, as demonstrated by the D.C. Circuit’s strange opinion in Visa v. Osborn (2015).

Visa v. Osborn is a putative class action filed against Visa, MasterCard, and three banks, built on a bare-bones complaint alleging that similar automatic teller machine pricing rules imposed by Visa and MasterCard were part of a price-fixing conspiracy among the banks and the credit card companies.  As I explained in my recent Competition Policy International article discussing this case, plaintiffs neither alleged any facts indicating any communications among defendants, nor suggested anything to undermine the very real possibility that the credit card firms separately adopted the rules as being in their independent self-interest.  In short, nothing in the complaint indicates that allegations of an anticompetitive agreement are plausible, and, as such, Twombly dictates that the complaint must be dismissed.  Amazingly, however, a D.C. Circuit panel held that the mere allegation “that the member banks used the bankcard associations to adopt and enforce” the purportedly anticompetitive access fee rule was “enough to satisfy the plausibility standard” required to survive a motion to dismiss.

Fortunately, the D.C. Circuit’s Osborn holding (which, in addition to being ill-reasoned, is inconsistent with Third, Fourth, and Ninth Circuit precedents) attracted the eye of the Supreme Court, which granted certiorari on June 28.  Specifically, the Supreme Court agreed to resolve the question “[w]hether allegations that members of a business association agreed to adhere to the association’s rules and possess governance rights in the association, without more, are sufficient to plead the element of conspiracy in violation of Section 1 of the Sherman Act, . . . or are insufficient, as the Third, Fourth, and Ninth Circuits have held.”

Conclusion

As I concluded in my Competition Policy International article:

Business associations bestow economic benefits on society through association rules that enable efficient cooperative activities.  Subjecting association members to potential antitrust liability merely for signing on to such rules and participating in association governance would substantially chill participation in associations and undermine the development of new and efficient forms of collaboration among businesses.  Such a development would reduce economic dynamism and harm both producers and consumers.  By decisively overruling the D.C. Circuit’s flawed decision in Osborn, the Supreme Court would preclude a harmful form of antitrust risk and establish an environment in which fruitful business association decision-making is granted greater freedom, to the benefit of the business community, consumers, and the overall economy.  

In addition, and more generally, the Court may wish to remind litigants that the antitrust litigation gatekeeper function laid out in Monsanto, Matsushita, and Twombly remains as strong and as vital as ever.  In so doing, the Court would reaffirm that motions to dismiss and summary judgment motions remain critically important tools needed to curb socially costly abusive antitrust litigation.

Since the European Commission (EC) announced its first inquiry into Google’s business practices in 2010, the company has been the subject of lengthy investigations by courts and competition agencies around the globe. Regulatory authorities in the United States, France, the United Kingdom, Canada, Brazil, and South Korea have all opened, and ultimately rejected, similar antitrust claims.

And yet the EC marches on, bolstered by Google’s myriad competitors, who continue to agitate for further investigations and enforcement actions, even as we — companies and consumers alike — enjoy the benefits of an increasingly dynamic online marketplace.

Indeed, while the EC has spent more than half a decade casting about for some plausible antitrust claim, the online economy has thundered ahead. Since 2010, Facebook has tripled its active users and multiplied its revenue ninefold; the number of apps available in the Amazon app store has grown from fewer than 4,000 to more than 400,000 today; and there are almost 1.5 billion more Internet users globally than there were in 2010. And consumers are increasingly using new and different ways to search for information: Amazon’s Alexa, Apple’s Siri, Microsoft’s Cortana, and Facebook’s Messenger are a few of the many new innovations challenging traditional search engines.

Advertisers have adapted to this evolution, moving increasingly online, and from search to display ads as mobile adoption has skyrocketed. Social networks like Twitter and Snapchat have come into their own, competing for the same (and ever-increasing) advertising dollars. For marketers, advertising on social networks is now just as important as advertising in search. No wonder e-commerce sales have more than doubled, to almost $2 trillion worldwide; for the first time, consumers purchased more online than in stores this past year.

To paraphrase Louis C.K.: Everything is amazing — and no one at the European Commission is happy.

The EC’s market definition is fatally flawed

Like its previous claims, the Commission’s most recent charges are rooted in the assertion that Google abuses its alleged dominance in “general search” advertising to unfairly benefit itself and to monopolize other markets. But European regulators continue to miss the critical paradigm shift among online advertisers and consumers that has upended this stale view of competition on the Internet. The reality is that Google’s competition may not, and need not, look exactly like Google itself, but it is competition nonetheless. And it’s happening in spades.

The key to understanding why the European Commission’s case is fundamentally flawed lies in an examination of how it defines the relevant market. Through a series of economically and factually unjustified assumptions, the Commission defines search as a distinct market in which Google faces limited competition and enjoys an 80% market share. In other words, for the EC, “general search” apparently means only nominal search providers like Google and Bing; it doesn’t mean companies like Amazon, Facebook and Twitter — Google’s biggest competitors.  

But the reality is that “general search” is just one technology among many for serving information and ads to consumers online. Defining the relevant market or limiting the definition of competition in terms of the particular mechanism that Google happens to use to match consumers and advertisers doesn’t reflect the substitutability of other mechanisms that do the same thing — merely because these mechanisms aren’t called “search.”

Properly defined, the market in which Google competes online is not search, but something more like online “matchmaking” between advertisers, retailers and consumers. And this market is enormously competitive.

Consumers today are increasingly using platforms like Amazon and Facebook as substitutes for the searches they might have run on Google or Bing. “Closed” platforms like the iTunes store and innumerable apps handle copious search traffic but also don’t figure in the EC’s market calculations. And so-called “dark social” interactions like email, text messages, and IMs, drive huge amounts of some of the most valuable traffic on the Internet. This, in turn, has led to a competitive scramble to roll out completely new technologies like chatbots to meet consumers’ informational (and merchants’ advertising) needs.

Properly construed, Google’s market position is precarious

Like Facebook and Twitter (and practically every other Internet platform), Google earns the bulk of its revenue from advertising. Instead of charging for fancy hardware or offering services to users for a fee, Google offers search, the Android operating system, and a near-endless array of other valuable services to users for free. The company’s very existence relies on attracting Internet users and consumers to its properties in order to effectively connect them with advertisers.

But being an online matchmaker is a difficult and competitive enterprise. Among other things, the ability to generate revenue turns crucially on the quality of the match: All else equal, an advertiser interested in selling widgets will pay more for an ad viewed by a user who can be reliably identified as being interested in buying widgets.

Google’s primary mechanism for attracting users to match with advertisers — general search — is substantially about information, not commerce, and the distinction between product and informational searches is crucially important to understanding Google’s market and the surprisingly limited and tenuous market power it possesses.

General informational queries aren’t nearly as valuable to advertisers: Significantly, only about 30 percent of Google’s searches even trigger any advertising at all. Meanwhile, as of 2012, one-third of product searches started on Amazon while only 13% started on a general search engine.

As economist Hal Singer aptly noted in 2012,

[the data] suggest that Google lacks market power in a critical segment of search — namely, product searches. Even though searches for items such as power tools or designer jeans account for only 10 to 20 percent of all searches, they are clearly some of the most important queries for search engines from a business perspective, as they are far easier to monetize than informational queries like “Kate Middleton.”

While Google Search clearly offers substantial value to advertisers, its ability to continue to do so is precarious when confronted with the diverse array of competitors that, like Facebook, offer a level of granularity in audience targeting that general search can’t match, or that, like Amazon, systematically offer up the most valuable searchers.

In order to compete in this market — one properly defined to include actual competitors — Google has had to constantly innovate to maintain its position. Unlike a complacent monopolist, it has evolved to meet changing consumer demand, shifting technology and inventive competitors. Thus, Google’s search algorithm has changed substantially over the years to make more effective use of the information available to ensure relevance; search results have evolved to give consumers answers to queries rather than just links, and to provide more-direct access to products and services; and, as users have shifted more and more of their time and attention to mobile devices, search has incorporated more-localized results.

Competitors want a free lunch

Critics complain, nevertheless, that these developments have made it harder, in one way or another, for rivals to compete. And the EC has provided a willing ear. According to Commissioner Vestager last week:

Google has come up with many innovative products that have made a difference to our lives. But that doesn’t give Google the right to deny other companies the chance to compete and innovate. Today, we have further strengthened our case that Google has unduly favoured its own comparison shopping service in its general search result pages…. (Emphasis added).

Implicit in this statement is the remarkable assertion that by favoring its own comparison shopping services, Google “den[ies] other companies the chance to compete and innovate.” Even assuming Google does “favor” its own results, this is an astounding claim.

First, it is not a violation of competition law simply to treat competitors’ offerings differently than one’s own, even for a dominant firm. Instead, conduct must actually exclude competitors from the market, without offering countervailing advantages to consumers. But Google’s conduct is not exclusionary, and there are many benefits to consumers.

As it has from the start of its investigations of Google, the EC begins with a flawed assumption: that Google’s competitors both require, and may be entitled to, unfettered access to Google’s property in order to compete. But this is patently absurd. Google is not an essential facility: Billions of users reach millions of companies every day through direct browser navigation, apps, email links, review sites and blogs, and countless other means — all without once touching Google.com.

Google Search results do not exclude competitors, whether comparison shopping sites or others. For example, 72% of TripAdvisor’s U.S. traffic comes from search, and almost all of that from organic results; other specialized search sites see similar traffic volumes.

More important, however, in addition to continuing to reach rival sites through Google Search, billions of consumers access rival services directly through their mobile apps. In fact, for Yelp,

Approximately 21 million unique devices accessed Yelp via the mobile app on a monthly average basis in the first quarter of 2016, an increase of 32% compared to the same period in 2015. App users viewed approximately 70% of page views in the first quarter and were more than 10 times as engaged as website users, as measured by number of pages viewed. (Emphasis added).

And a staggering 40 percent of mobile browsing is now happening inside the Facebook app, competing with the browsers and search engines pre-loaded on smartphones.

Millions of consumers also directly navigate to Google’s rivals via their browser by simply typing, for example, “Yelp.com” in their address bar. And as noted above, consumers are increasingly using Google rivals’ new disruptive information engines like Alexa and Siri for their search needs. Even the traditional search engine space is competitive — in fact, according to Wired, as of July 2016:

Microsoft has now captured more than one-third of Internet searches. Microsoft’s transformation from a company that sells boxed software to one that sells services in the cloud is well underway. (Emphasis added).

With such numbers, it’s difficult to see how rivals are being foreclosed from reaching consumers in any meaningful way.

Meanwhile, the benefits to consumers are obvious: Google is directly answering questions for consumers rather than giving them a set of possible links to click through and further search. In some cases its results present entirely new and valuable forms of information (e.g., search trends and structured data); in others they serve to hone searches by suggesting further queries, or to help users determine which organic results (including those of its competitors) may be most useful. And, of course, consumers aren’t forced to endure these innovations if they don’t find them useful, as they can quickly switch to other providers.  

Nostalgia makes for bad regulatory policy

Google is not the unstoppable monopolist of the EU competition regulators’ imagining. Rather, it is a continual innovator, forced to adapt to shifting consumer demand, changing technology, and competitive industry dynamics. And, instead of trying to hamstring Google, if they are to survive, Google’s competitors (and complainants) must innovate as well.

Dominance in technology markets — especially online — has always been ephemeral. Once upon a time, MySpace, AOL, and Yahoo were the dominant Internet platforms. Kodak, once practically synonymous with photography itself, let the digital revolution pass it by. The invincible Sony Walkman was upended by mp3s and the iPod. Staid, keyboard-operated Blackberries and Nokias simply couldn’t compete with app-driven, graphical platforms from Apple and Samsung. Even today, startups like Snapchat, Slack, and Spotify gain massive scale and upend entire industries with innovative new technology that can leave less-nimble incumbents in the dustbin of tech history.

Put differently, companies that innovate are able to thrive, while those that remain dependent on yesterday’s technology and outdated business models usually fail — and deservedly so. It should never be up to regulators to pick winners and losers in a highly dynamic and competitive market, particularly if doing so constrains the market’s very dynamism. As Alfonso Lamadrid has pointed out:

It is companies and not competition enforcers which will strive or fail in the adoption of their business models, and it is therefore companies and not competition enforcers who are to decide on what business models to use. Some will prove successful and others will not; some companies will thrive and some will disappear, but with experimentation with business models, success and failure are and have always been part of the game.

In other words, we should not forget that competition law is, or should be, business-model agnostic, and that regulators are – like anyone else – far from omniscient.

Like every other technology company before them, Google and its competitors must be willing and able to adapt in order to keep up with evolving markets — just as for Lewis Carroll’s Red Queen, “it takes all the running you can do, to keep in the same place.” Google confronts a near-constantly evolving marketplace and fierce competition from unanticipated quarters; companies that build their businesses around Google face a near-constantly evolving Google. In the face of such relentless market dynamism, neither consumers nor firms are well served by regulatory policy rooted in nostalgia.  

The Global Antitrust Institute (GAI) at George Mason University Law School (officially the “Antonin Scalia Law School at George Mason University” as of July 1st) is doing an outstanding job at providing sound law and economics-centered advice to foreign governments regarding their proposed antitrust laws and guidelines.

The GAI’s latest inspired filing, released on July 9 (July 9 Comment), concerns guidelines on the disgorgement of illegal gains and punitive fines for antitrust violations proposed by China’s National Development and Reform Commission (NDRC) – a powerful agency that has broad planning and administrative authority over the Chinese economy.  With respect to antitrust, the NDRC is charged with investigating price-related anticompetitive behavior and abuses of dominance.  (China has two other antitrust agencies, the State Administration for Industry and Commerce (SAIC) that investigates non-price-related monopolistic behavior, and the Ministry of Commerce (MOFCOM) that reviews mergers.)  The July 9 Comment stresses that the NDRC’s proposed Guidelines call for Chinese antitrust enforcers to impose punitive financial sanctions on conduct that is not necessarily anticompetitive and may be efficiency-enhancing – an approach that is contrary to sound economics.  In so doing, the July 9 Comment summarizes the economics of penalties, recommends that the NDRC employ economic analysis in considering sanctions, and provides specific suggested changes to the NDRC’s draft.  The July 9 Comment provides a helpful summary of its analysis:

We respectfully recommend that the Draft Guidelines be revised to limit the application of disgorgement (or the confiscating of illegal gain) and punitive fines to matters in which: (1) the antitrust violation is clear (i.e., if measured at the time the conduct is undertaken, and based on existing laws, rules, and regulations, a reasonable party should expect that the conduct at issue would likely be found to be illegal) and without any plausible efficiency justifications; (2) it is feasible to articulate and calculate the harm caused by the violation; (3) the measure of harm calculated is the basis for any fines or penalties imposed; and (4) there are no alternative remedies that would adequately deter future violations of the law.  In the alternative, and at the very least, we strongly urge the NDRC to expand the circumstances under which the Anti-Monopoly Enforcement Agencies (AMEAs) will not seek punitive sanctions such as disgorgement or fines to include two conduct categories that are widely recognized as having efficiency justifications: unilateral conduct such as refusals to deal and discriminatory dealing and vertical restraints such as exclusive dealing, tying and bundling, and resale price maintenance.

We also urge the NDRC to clarify how the total penalty, including disgorgement and fines, relate to the specific harm at issue and the theoretical optimal penalty.  As explained below, the economic analysis determines the total optimal penalties, which includes any disgorgement and fines.  When fines are calculated consistent with the optimal penalty framework, disgorgement should be a component of the total fine as opposed to an additional penalty on top of an optimal fine.  If disgorgement is an additional penalty, then any fines should be reduced relative to the optimal penalty.

Lastly, we respectfully recommend that the AMEAs rely on economic analysis to determine the harm caused by any violation.  When using proxies for the harm caused by the violation, such as using the illegal gains from the violations as the basis for fines or disgorgement, such calculations should be limited to those costs and revenues that are directly attributable to a clear violation.  This should be done in order to ensure that the resulting fines or disgorgement track the harms caused by the violation.  To that end, we recommend that the Draft Guidelines explicitly state that the AMEAs will use economic analysis to determine the but-for world, and will rely wherever possible on relevant market data.  When the calculation of illegal gain is unclear due to a lack of relevant information, we strongly recommend that the AMEAs refrain from seeking disgorgement.
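The penalty-stacking point in the passage quoted above can be illustrated with a small numerical sketch.  In the standard optimal-penalty framework (associated with Becker and Landes), the total sanction equals the harm caused scaled up by the inverse of the probability of detection; all figures below are hypothetical, chosen only to show how disgorgement fits inside, rather than on top of, the optimal total penalty:

```python
# Hypothetical figures illustrating the optimal-penalty framework (not from the filing).
harm = 10_000_000        # estimated harm caused by the violation
detection_prob = 0.25    # probability the violation is detected and punished

# Standard optimal total penalty: harm scaled by the inverse detection probability,
# so that the violator's *expected* penalty equals the harm.
optimal_total_penalty = harm / detection_prob          # 40,000,000

illegal_gain = 6_000_000  # gain subject to disgorgement (hypothetical)

# Disgorgement as a *component* of the total sanction: the fine makes up the rest.
fine = optimal_total_penalty - illegal_gain            # 34,000,000

# Disgorgement stacked *on top of* an already-optimal fine over-deters.
stacked_total = optimal_total_penalty + illegal_gain   # 46,000,000
```

On these assumed numbers, stacking disgorgement on top of an optimal fine pushes the total sanction to 46 million – over-deterring relative to the 40 million optimum, which is precisely why the Comment urges that fines be reduced whenever disgorgement is separately imposed.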

The lack of careful economic analysis of the implications of disgorgement (which is really a financial penalty, viewed through an economic lens) is not confined to Chinese antitrust enforcers.  In recent years, the U.S. Federal Trade Commission (FTC) has shown an interest in more broadly employing disgorgement as an antitrust remedy, without fully weighing considerations of error costs and the deterrence of efficient business practices (see, for example, here and here).  Relatedly, the U.S. Department of Justice’s Antitrust Division has determined that disgorgement may be invoked as a remedy for a Sherman Antitrust Act violation, a position confirmed by a lower court (see, for example, here).  The general principles informing the thoughtful analysis delineated in the July 9 Comment could profitably be consulted by FTC and DOJ policy officials should they choose to reexamine their approach to disgorgement and other financial penalties.

More broadly, in emphasizing the importance of optimal sanctions and the economic analysis of business conduct, the July 9 Comment is in line with a cost-benefit framework for antitrust enforcement policy, rooted in decision theory – an approach that all antitrust agencies (including United States enforcers) should seek to adopt (see also here for an evaluation of the implicit decision-theoretic approach to antitrust employed by the U.S. Supreme Court under Chief Justice John Roberts).  Let us hope that DOJ, the FTC, and other government antitrust authorities around the world take to heart the benefits of decision-theoretic antitrust policy in evaluating (and, as appropriate, reforming) their enforcement norms.  Doing so would promote beneficial international convergence toward better enforcement policy and redound to the economic benefit of both producers and consumers.

As regulatory review of the merger between Aetna and Humana hits the homestretch, merger critics have become increasingly vocal in their opposition to the deal. This is particularly true of a subset of healthcare providers concerned about losing bargaining power over insurers.

Fortunately for consumers, the merger appears to be well on its way to approval. California’s insurance commission recently became the 16th of the 20 state commissions that will eventually review the merger to approve it. The U.S. Department of Justice is currently reviewing the merger and may issue its determination as early as July.

Only Missouri has issued a preliminary opinion that the merger might lead to competitive harm. But Missouri is almost certain to remain an outlier, and its analysis simply doesn’t hold up to scrutiny.

The Missouri opinion echoed the Missouri Hospital Association’s (MHA) concerns about the effect of the merger on Medicare Advantage (MA) plans. It’s important to remember, however, that hospital associations like the MHA are not consumer advocacy groups. They are trade organizations whose primary function is to protect the interests of their member hospitals.

In fact, the American Hospital Association (AHA) has mounted continuous opposition to the deal. This is itself a good indication that the merger will benefit consumers, in part by reducing hospital reimbursement costs under MA plans.

More generally, critics have argued that history proves that health insurance mergers lead to higher premiums, without any countervailing benefits. Merger opponents place great stock in a study by economist Leemore Dafny and co-authors that purports to show that insurance mergers have historically led to seven percent higher premiums.

But that study, which looked at a pre-Affordable Care Act (ACA) deal and assessed its effects only on premiums for traditional employer-provided plans, has little relevance today.

The Dafny study first performed a straightforward statistical analysis of overall changes in concentration (that is, the number of insurers in a given market) and price, and concluded that “there is no significant association between concentration levels and premium growth.” Critics never mention this finding.

The study’s secondary, more speculative, analysis took the observed effects of a single merger — the 1999 merger between Prudential and Aetna — and extrapolated them to all changes in concentration and price over an eight-year period. It concluded that, on average, seven percent of the cumulative increase in premium prices between 1998 and 2006 was the result of a reduction in the number of insurers.

But what critics fail to mention is that when the authors looked at the actual consequences of the 1999 Prudential/Aetna merger, they found effects lasting only two years — and an average price increase of only one half of one percent. And these negligible effects were restricted to premiums paid under plans purchased by large employers, a critical limitation of the study’s relevance to today’s proposed mergers.

Moreover, as the study notes in passing, over the same eight-year period, average premium prices increased in total by 54 percent. Yet the study offers no insights into what was driving the vast bulk of premium price increases — or whether those factors are still present today.  

Few sectors of the economy have changed more radically in the past few decades than healthcare has. While extrapolated effects drawn from 17-year-old data may grab headlines, they really don’t tell us much of anything about the likely effects of a particular merger today.

Indeed, the ACA and current trends in healthcare policy have dramatically altered the way health insurance markets work. Among other things, the advent of new technologies and the move to “value-based” care are redefining the relationship between insurers and healthcare providers. Nowhere is this more evident than in the Medicare and Medicare Advantage market at the heart of the Aetna/Humana merger.

In an effort to stop the merger on antitrust grounds, critics claim that Medicare and MA are distinct products, in distinct markets. But it is simply incorrect to claim that Medicare Advantage and traditional Medicare aren’t “genuine alternatives.”

In fact, as the Office of Insurance Regulation in Florida — a bellwether state for healthcare policy — concluded in approving the merger: “Medicare Advantage, the private market product, competes directly with Traditional Medicare.”

Consumers who search for plans at Medicare.gov are presented with a direct comparison between traditional Medicare and available MA plans. And the evidence suggests that they regularly switch between the two. Today, almost a third of eligible Medicare recipients choose MA plans, and the majority of current MA enrollees switched to MA from traditional Medicare.

True, Medicare and MA plans are not identical. But for antitrust purposes, substitutes need not be perfect to exert pricing discipline on each other. Take HMOs and PPOs, for example. No one disputes that they are substitutes, and that prices for one constrain prices for the other. But as anyone who has considered switching between an HMO and a PPO knows, price is not the only variable that influences consumers’ decisions.

The same is true for MA and traditional Medicare. For many consumers, Medicare’s standard benefits, more-expensive supplemental benefits, and wider range of provider options present a viable alternative to MA’s lower-cost expanded benefits and narrower, managed provider network.

The move away from a traditional fee-for-service model changes how insurers do business. It requires larger investments in technology, better tracking of preventive care and health outcomes, and more-holistic supervision of patient care by insurers. Arguably, all of this may be accomplished most efficiently by larger insurers with more resources and a greater ability to work with larger, more integrated providers.

This is exactly why many hospitals, which continue to profit from traditional, fee-for-service systems, are opposed to a merger that promises to expand these value-based plans. Significantly, healthcare providers like Encompass Medical Group, which have done the most to transition their services to the value-based care model, have offered letters of support for the merger.

Regardless of their rhetoric — whether about market definition or historic precedent — the most vocal merger critics are opposed to the deal for a very simple reason: They stand to lose money if the merger is approved. That may be a good reason for some hospitals to wish the merger would go away, but it is a terrible reason to actually stop it.

[This post was first published on June 27, 2016 in The Hill as “Don’t believe the critics, Aetna-Humana merger a good deal for consumers”]

A key issue raised by the United Kingdom’s (UK) withdrawal from the European Union (EU) – popularly referred to as Brexit – is its implications for competition and economic welfare.  The competition issue is rather complex.  Various potentially significant UK competition policy reforms flowing from Brexit that immediately suggest themselves are briefly summarized below.  (These are merely examples – further evaluation may point to additional significant competition policy changes that Brexit is likely to inspire.)

First, UK competition policy will no longer be subject to European Commission (EC) competition law strictures, but will be guided instead solely by UK institutions, led by the UK Competition and Markets Authority (CMA).  The CMA is a free market-oriented, well-run agency that incorporates careful economic analysis into its enforcement investigations and industry studies.  It is widely deemed to be one of the world’s best competition and consumer protection enforcers, and has first-rate leadership.  (Former U.S. Federal Trade Commission Chairman William Kovacic, a very sound antitrust scholar, professor, and head of George Washington University Law School’s Competition Law Center, serves as one of the CMA’s “Non-Executive Directors,” who set the CMA’s policies.)  Post-Brexit, the CMA will no longer have to conform its policies to the approaches adopted by the EC’s Directorate General for Competition (DG Comp) and determinations by European courts.  Despite its recent increased reliance on an “economic effects-based” analytical approach, DG Comp still suffers from excessive formalism and an over-reliance on pure theories of harm, rather than hard empiricism.  Moreover, EU courts still tend to be overly formalistic and deferential to EC administrative determinations.  In short, CMA decision-making in the competition and consumer protection spheres, free from constraining EU influences, should (at least marginally) prove to be more welfare-enhancing within the UK post-Brexit.  (For a more detailed discussion of Brexit’s implications for EU and UK competition law, see here.)  There is a countervailing risk that Brexit might marginally worsen EU competition policy by eliminating UK pro-free market influence on EU policies, but the likelihood and scope of such a marginal effect is not readily measurable.

Second, Brexit will allow the UK to escape participation in the protectionist, wasteful, output-limiting European agricultural cartel known as the “Common Agricultural Policy,” or CAP, which involves inefficient subsidies whose costs are borne by consumers.  This would be a clearly procompetitive and welfare-enhancing result, to the extent that it undermines the CAP.  In the near term, however, its net effects on CAP financing and on the welfare of UK farmers appear to be relatively small.

Third, the UK may be able to avoid the restrictive EU Common Fisheries Policy and exercise greater control over its coastal fisheries.  In so doing, the UK could choose to authorize the creation of a market-based tradable fisheries permit system that would enhance consumer and producer welfare and increase competition.

Fourth, Brexit will free the UK economy from one-size-fits-all supervisory regulatory frameworks in such areas as the environment, broadband policy (“digital Europe”), labor, food and consumer products, among others.  This regulatory freedom, properly handled, could prove a major force for economic flexibility, reductions in regulatory burdens, and enhanced efficiency.

Fifth, Brexit will enable the UK to enter into true free trade pacts with the United States and other nations that avoid the counterproductive bells and whistles of EU industrial policy.  For example, a “zero tariffs” agreement with the United States that featured reciprocal mutual recognition of health, safety, and other regulatory standards would avoid the heavy-handed regulatory harmonization features of the Transatlantic Trade and Investment Partnership (TTIP) agreement being negotiated between the EU and the United States.  (As I explained in a previous Truth on the Market post, “a TTIP focus on ‘harmonizing’ regulations could actually lower economic freedom (and welfare) by ‘regulating upward’ through acceptance of [a] more intrusive approach, and by precluding future competition among alternative regulatory models that could lead to welfare-enhancing regulatory improvements.”)

In sum, while Brexit’s implications for other economic factors, such as macroeconomic stability, remain to be seen, Brexit will likely prove to have an economic welfare-enhancing influence on key aspects of competition policy.

P.S.  Notably, a recent excellent study by Iain Murray and Rory Broomfield of Brexit’s implications for various UK industry sectors (commissioned by the London-based Institute of Economic Affairs) concluded “that in almost every area we have examined the benefit: cost trade-off [of Brexit] is positive. . . .  Overall, the UK will benefit substantially from a reduction in regulation, a better fisheries management system, a market-based immigration system, a free market in agriculture, a globally-focused free trade policy, control over extradition, and a shale gas-based energy policy.”

Thanks to Geoff for the introduction. I look forward to posting a few things over the summer.

I’d like to begin by discussing Geoff’s post on the pending legislative proposals designed to combat strategic abuse of drug safety regulations to prevent generic competition. Specifically, I’d like to address the economic incentive structure that is in effect in this highly regulated market.

Like many others, I first noticed the abuse of drug safety regulations to prevent competition when Turing Pharmaceuticals—then led by its now-infamous CEO Martin Shkreli—acquired the manufacturing rights for the anti-parasitic drug Daraprim, and raised the price of the drug by over 5,000%. The result was a drug that cost $750 per tablet. Daraprim (pyrimethamine) is used to combat malaria and toxoplasma gondii infections in immune-compromised patients, especially those with HIV. The World Health Organization includes Daraprim on its “List of Essential Medicines” as a medicine important to basic health systems. After the huge price hike, the drug was effectively out of reach for many insurance plans and uninsured patients who needed it for the six- to eight-week course of treatment for toxoplasma gondii infections.

It’s not unusual for drugs to sell at huge multiples above their manufacturing cost. Indeed, a primary purpose of patent law is to allow drug companies to earn sufficient profits to engage in the expensive and risky business of developing new drugs. But Daraprim was first sold in 1953 and thus has been off patent for decades. With no intellectual property protection, Daraprim should, theoretically, now be available from generic drug manufacturers for only a little above cost. Indeed, this is what we see in the rest of the world. Daraprim is available all over the world for very cheap prices. The per tablet price is 3 rupees (US$0.04) in India, R$0.07 (US$0.02) in Brazil, US$0.18 in Australia, and US$0.66 in the UK.

So what gives in the U.S.? Or rather, what does not give? What in our system of drug distribution has gotten stuck and is preventing generic competition from swooping in to compete down the high price of off-patent drugs like Daraprim? The answer is not market failure, but rather regulatory failure, as Geoff noted in his post. While generics would love to enter a market where a drug is currently selling for high profits, they cannot do so without getting FDA approval for their generic version of the drug at issue. To get approval, a generic simply has to file an Abbreviated New Drug Application (“ANDA”) that shows that its drug is equivalent to the branded drug with which it wants to compete. There’s no need for the generic to repeat the safety and efficacy tests that the brand manufacturer originally conducted. To test for equivalence, the generic needs samples of the brand drug. Without those samples, the generic cannot meet its burden of showing equivalence. This is where the strategic use of regulation can come into play.

Geoff’s post explains the potential abuse of Risk Evaluation and Mitigation Strategies (“REMS”). REMS are put in place to require certain safety steps (like testing a woman for pregnancy before prescribing a drug that can cause birth defects) or to restrict the distribution channels for dangerous or addictive drugs. As Geoff points out, there is evidence that a few brand name manufacturers have engaged in bad-faith refusals to provide samples, using the excuse of REMS or restricted distribution programs to (1) deny requests for samples, (2) prevent generic manufacturers from buying samples from resellers, and (3) deny approved generics access to the REMS system that they need in order to distribute their drugs. Once the FDA has certified that a generic manufacturer can safely handle the drug at issue, there is no legitimate basis for the owners of brand name drugs to deny samples to the generic maker. Expressed worries about liability from entering joint REMS programs with generics also ring hollow, for the most part, and would be ameliorated by the pending legislation.

It’s important to note that this pricing situation is unique to drugs because of the regulatory framework surrounding drug manufacture and distribution. If a manufacturer of, say, an off-patent vacuum cleaner wants to prevent competitors from copying its vacuum cleaner design, it is unlikely to be successful. Even if the original manufacturer refuses to sell any vacuum cleaners to a competitor, and instructs its retailers not to sell either, this will be very difficult to monitor and enforce. Moreover, because of an unrestricted resale market, a competitor would inevitably be able to obtain samples of the vacuum cleaner it wishes to copy. Only patent law can successfully protect against the copying of a product sold to the general public, and when the patent expires, so too will the ability to prevent copying.

Drugs are different. The only way a consumer can resell prescription drugs is by breaking the law. Pills bought from an illegal secondary market would be useless to generics for purposes of FDA approval anyway, because the chain of custody would not exist to prove that the samples are the real thing. This means generics need to get samples from the authorized manufacturer or distribution company. When a drug is subject to a REMS-required restricted distribution program, it is even more difficult, if not impossible, for a generic maker to get samples of the drugs for which it wants to make generic versions. Restricted distribution programs, which are used for dangerous or addictive drugs, by design very tightly control the chain of distribution so that the drugs go only to patients with proper prescriptions from authorized doctors.

A troubling trend has arisen recently in which drug owners put their branded drugs into restricted distribution programs not because of any FDA REMS requirement, but instead as a method to prevent generics from obtaining samples and making generic versions of the drugs. This is the strategy that Turing used before it raised prices over 5,000% on Daraprim. And Turing isn’t the only company to use this strategy. It is being emulated by others, although perhaps not so conspicuously. For instance, in 2015, Valeant Pharmaceuticals (which a year earlier had attempted a hostile takeover of Allergan Pharmaceuticals with the help of the hedge fund Pershing Square) acquired the rights to two off-patent, life-saving heart drugs, adopted restricted distribution programs, and raised their prices by 212% and 525%, respectively. Others have followed suit.

A key component of the strategy to profit from hiking prices on off-patent drugs while avoiding competition from generics is to select drugs that do not currently have generic competitors. Sometimes this is because a drug has recently come off patent, and sometimes it is because the drug is for a small patient population, and thus generics haven’t bothered to enter the market given that brand name manufacturers generally drop their prices to close to cost after the drug comes off patent. But with the strategic control of samples and refusals to allow generics to enter REMS programs, the (often new) owners of the brand name drugs seek to prevent the generic competition that we count on to make products cheap and plentiful once their patent protection expires.

Most brand name drug makers do not engage in withholding samples from generics and abusing restricted distribution and REMS programs. But the few that do cost patients and insurers dearly for important medicines that should be much cheaper once they go off patent. More troubling still is the recent strategy of taking drugs that have been off patent and cheap for years, and abusing the regulatory regime to raise prices and block competition. This growing trend of abusing restricted distribution and REMS to facilitate rent extraction from drug purchasers needs to be corrected.

Two bills addressing this issue are pending in Congress. Both bills (1) require drug companies to provide samples to generics after the FDA has certified the generic, (2) require drug companies to enter into shared REMS programs with generics, (3) allow generics to set up their own REMS compliant systems, and (4) exempt drug companies from liability for sharing products and REMS-compliant systems with generic companies in accordance with the steps set out in the bills. When it comes to remedies, however, the Senate version is significantly better. The penalties provided in the House bill are both vague and overly broad. The House bill provides for treble damages and costs against the drug company “of the kind described in section 4(a) of the Clayton Act.” Not only is the application of the Clayton Act unclear in the context of the heavily regulated market for drugs (see Trinko), but treble damages may over-deter reasonably restrictive behavior by drug companies when it comes to distributing dangerous drugs.

The remedies in the Senate version are very well crafted to deter rent seeking behavior while not overly deterring reasonable behavior. The remedial scheme is particularly good because it punishes most heavily those companies that attempt to make exorbitant profits on drugs by denying generic entry. The Senate version provides as a remedy for unreasonable delay that the plaintiff shall be awarded attorneys’ fees, costs, and the defending drug company’s profits on the drug at issue during the time of the unreasonable delay. This means that a brand name drug company that sells an old drug for a low price and delays sharing only because of honest concern about the safety standards of a particular generic company will not face terribly high damages if it is found unreasonable. On the other hand, a company that sends the price of an off-patent drug soaring and then attempts to block generic entry will know that it can lose all of its rent-seeking profits, plus the cost of the victorious generic company’s attorneys’ fees. This vastly reduces the incentive for the company owning the brand name drug to raise prices and keep competitors out. It likewise greatly increases the incentive of a generic company to enter the market and — if it is unreasonably blocked — to file a civil action that would transfer the excess profits to the generic. This provides a rather elegant fix to the regulatory gaming in this area that has become an increasing problem. The balancing of interests and incentives in the Senate bill should leave many congresspersons feeling comfortable supporting the bill.

Brand drug manufacturers are no strangers to antitrust accusations when it comes to their complicated relationship with generic competitors — most obviously with respect to reverse payment settlements. But the massive and massively complex regulatory scheme under which drugs are regulated has provided other opportunities for regulatory legerdemain with potentially anticompetitive effect, as well.

In particular, some FTC Commissioners have raised concerns that brand drug companies have been taking advantage of an FDA drug safety program — the Risk Evaluation and Mitigation Strategies program, or “REMS” — to delay or prevent generic entry.

Drugs subject to a REMS restricted distribution program are difficult to obtain through market channels and not otherwise readily available, even for would-be generic manufacturers that need samples in order to perform the tests required to receive FDA approval to market their products. REMS allows (requires, in fact) brand manufacturers to restrict the distribution of certain drugs that present safety or abuse risks, creating an opportunity for branded drug manufacturers to take advantage of imprecise regulatory requirements by inappropriately limiting access by generic manufacturers.

The FTC has not (yet) brought an enforcement action, but it has opened several investigations, and filed an amicus brief in a private-party litigation. Generic drug companies have filed several antitrust claims against branded drug companies and raised concerns with the FDA.

The problem, however, is that even if these companies are using REMS to delay generics, such a practice makes for a terrible antitrust case. Not only does the existence of a regulatory scheme arguably set Trinko squarely in the way of a successful antitrust case, but the sort of refusal to deal claims at issue here (as in Trinko) are rightly difficult to win because, as the DOJ’s Section 2 Report notes, “there likely are few circumstances where forced sharing would help consumers in the long run.”

But just because there isn’t a viable antitrust case doesn’t mean there isn’t still a competition problem. In this case, however, it’s a problem of regulatory failure. Companies rationally take advantage of poorly written federal laws and regulations in order to tilt the market to their own advantage. It’s no less problematic for the market, but its solution is much more straightforward, if politically more difficult.

Thus it’s heartening to see that Senator Mike Lee (R-UT), along with three of his colleagues (Patrick Leahy (D-VT), Chuck Grassley (R-IA), and Amy Klobuchar (D-MN)), has proposed a novel and efficient way to correct these bureaucracy-generated distortions in the pharmaceutical market without resorting to the “blunt instrument” of antitrust law. As the bill notes:

While the antitrust laws may address actions by license holders who impede the prompt negotiation and development on commercially reasonable terms of a single, shared system of elements to assure safe use, a more tailored legal pathway would help ensure that license holders negotiate such agreements in good faith and in a timely manner, facilitating competition in the marketplace for drugs and biological products.

The legislative solution put forward by the Creating and Restoring Equal Access to Equivalent Samples (CREATES) Act of 2016 targets the right culprit: the poor regulatory drafting that permits possibly anticompetitive conduct to take place. Moreover, the bill refrains from creating a per se rule, instead implementing several features that should still enable brand manufacturers to legitimately restrict access to drug samples when appropriate.

In essence, Senator Lee’s bill introduces a third party (in this case, the Secretary of Health and Human Services) who is capable of determining whether an eligible generic manufacturer is able to comply with REMS restrictions — thus bypassing any bias on the part of the brand manufacturer. Where the Secretary determines that a generic firm meets the REMS requirements, the bill also creates a narrow cause of action for this narrow class of plaintiffs, allowing suits against certain brand manufacturers who — despite the prohibition on using REMS to delay generics — nevertheless misuse the process to delay competitive entry.

Background on REMS

The REMS program was introduced as part of the Food and Drug Administration Amendments Act of 2007 (FDAAA). Following the withdrawal of Vioxx, an arthritis pain reliever, from the market because of a post-approval linkage of the drug to heart attacks, the FDA was under considerable fire, and there was a serious risk that fewer and fewer net beneficial drugs would be approved. The REMS program was introduced by Congress as a mechanism to ensure that society could reap the benefits from particularly risky drugs and biologics — rather than the FDA preventing them from entering the market at all. It accomplishes this by ensuring (among other things) that brands and generics adopt appropriate safety protocols for distribution and use of drugs — particularly when a drug has the potential to cause serious side effects, or has an unusually high abuse profile.

The FDA-determined REMS protocols can range from the simple (e.g., requiring a medication guide or a package insert about potential risks) to the more burdensome (including restrictions on a drug’s sale and distribution, or what the FDA calls “Elements to Assure Safe Use” (“ETASU”)). Most relevant here, the REMS process seems to allow brands considerable leeway to determine whether generic manufacturers are compliant or able to comply with ETASUs. Given this discretion, it is no surprise that brand manufacturers may be tempted to block competition by citing “safety concerns.”

Although the FDA specifically forbids the use of REMS to block lower-cost, generic alternatives from entering the market (of course), almost immediately following the law’s enactment, certain less-scrupulous branded pharmaceutical companies began using REMS for just that purpose (also, of course).

REMS abuse

To enter into pharmaceutical markets that no longer have any underlying IP protections, manufacturers must submit to the FDA an Abbreviated New Drug Application (ANDA) for a generic, or an Abbreviated Biologic License Application (ABLA) for a biosimilar, of the brand drug. The purpose is to prove to the FDA that the competing product is as safe and effective as the branded reference product. In order to perform the testing sufficient to prove efficacy and safety, generic and biosimilar drug manufacturers must acquire a sample (many samples, in fact) of the reference product they are trying to replicate.

For the narrow class of dangerous or highly abused drugs, generic manufacturers are forced to comply with any REMS restrictions placed upon the brand manufacturer — even when the terms require the brand manufacturer to tightly control the distribution of its product.

And therein lies the problem. Because the brand manufacturer controls access to its products, it can refuse to provide the needed samples, using REMS as an excuse. In theory, of course, a brand manufacturer may in certain cases be justified in refusing to distribute samples of its product; some would-be generic manufacturers certainly may not meet the requisite standards for safety and security.

But in practice it turns out that most of the (known) examples of brands refusing to provide samples happen across the board — they preclude essentially all generic competition, not just the few firms that might have insufficient safeguards. It’s extremely difficult to justify such refusals on the basis of a generic manufacturer’s suitability when all would-be generic competitors are denied access, including well-established, high-quality manufacturers.

But, for a few brand manufacturers, at least, that seems to be how the REMS program is implemented. Thus, for example, Jon Haas, director of patient access at Turing Pharmaceuticals, referred to the practice of denying generics samples this way:

Most likely I would block that purchase… We spent a lot of money for this drug. We would like to do our best to avoid generic competition. It’s inevitable. They seem to figure out a way [to make generics], no matter what. But I’m certainly not going to make it easier for them. We’re spending millions and millions in research to find a better Daraprim, if you will.

As currently drafted, the REMS program gives branded manufacturers the ability to limit competition by stringing along negotiations for product samples for months, if not years. Although access to a few samples for testing is seemingly such a small, trivial thing, the ability to block this access allows a brand manufacturer to limit competition (at least from bioequivalent and generic drugs; obviously competition among rival branded drugs remains).

And even if a generic competitor manages to get ahold of samples, the law creates an additional wrinkle by imposing a requirement that brand and generic manufacturers enter into a single shared REMS plan for bioequivalent and generic drugs. But negotiating the particulars of the single, shared program can drag on for years. Consequently, even when a generic manufacturer has received the necessary samples, performed the requisite testing, and been approved by the FDA to sell a competing drug, it still may effectively be barred from entering the marketplace because of REMS.

The number of drugs covered by REMS is small: fewer than 100 in a universe of several thousand FDA-approved drugs. And the number of these alleged to be subject to abuse is much smaller still. Nonetheless, abuse of this regulation by certain brand manufacturers has likely limited competition and increased prices.

Antitrust is not the answer

Whether the complex, underlying regulatory scheme that allocates the relative rights of brands and generics — and that balances safety against access — gets the balance correct or not is an open question, to be sure. But given the regulatory framework we have and the perceived need for some sort of safety controls around access to samples and for shared REMS plans, the law should at least work to do what it intends, without creating an opportunity for harmful manipulation. Yet it appears that the ambiguity of the current law has allowed some brand manufacturers to exploit these safety protections to limit competition.

As noted above, some are quite keen to make this an antitrust issue. But, as also noted, antitrust is a poor fit for handling such abuses.

First, antitrust law has an uneasy relationship with other regulatory schemes. Not least because of Trinko, it is a tough case to make that brand manufacturers are violating antitrust laws when they rely upon legal obligations under a safety program that is essentially designed to limit generic entry on safety grounds. The issue is all the more properly removed from the realm of antitrust enforcement given that the problem is actually one of regulatory failure, not market failure.

Second, antitrust law doesn’t impose a duty to deal with rivals except in very limited circumstances. In Trinko, for example, the Court rejected the invitation to extend a duty to deal to situations where an existing, voluntary economic relationship wasn’t terminated. By definition this is unlikely to be the case here, where the alleged refusal to deal is what prevents the generic from entering the market in the first place. The logic behind Trinko (and a host of other cases that have limited competitors’ obligations to assist their rivals) was to restrict duty-to-deal cases to those rare circumstances where a refusal reliably leads to long-term competitive harm — not where it amounts to a perfectly legitimate effort to compete without giving rivals a leg-up.

But antitrust is such a powerful tool and such a flexible “catch-all” regulation, that there are always efforts to thwart reasonable limits on its use. As several of us at TOTM have written about at length in the past, former FTC Commissioner Rosch and former FTC Chairman Leibowitz were vocal proponents of using Section 5 of the FTC Act to circumvent sensible judicial limits on making out and winning antitrust claims, arguing that the limits were meant only for private plaintiffs — not (implicitly infallible) government enforcers. Although no one at the FTC has yet (publicly) suggested bringing a REMS case as a standalone Section 5 case, such a case would be consistent with the sorts of theories that animated past standalone Section 5 cases.

Again, this approach serves as an end-run around the reasonable judicial constraints that evolved as a result of judges actually examining the facts of individual cases over time, and is a misguided way of dealing with what is, after all, fundamentally a regulatory design problem.

The CREATES Act

Senator Lee’s bill, on the other hand, aims to solve the problem with a more straightforward approach by improving the existing regulatory mechanism and by adding a limited judicial remedy to incentivize compliance under the amended regulatory scheme. In summary:

  • The bill creates a cause of action for a refusal to deal only where a plaintiff can prove, by a preponderance of the evidence, that certain well-defined conditions are met.
  • For samples, if a drug is not covered by a REMS, or if the generic manufacturer is specifically authorized, then the generic can sue if it doesn’t receive sufficient quantities of samples on commercially reasonable terms. This is not a per se offense subject to outsized antitrust damages. Instead, the remedy is a limited injunction ensuring the sale of samples on commercially reasonable terms, reasonable attorneys’ fees, and a monetary fine limited to revenue earned from sale of the drug during the refusal period.
  • The bill also gives a brand manufacturer an affirmative defense if it can prove by a preponderance of the evidence that, regardless of its own refusal to supply them, samples were nevertheless available elsewhere on commercially reasonable terms, or where the brand manufacturer is unable to supply the samples because it does not actually produce or market the drug.
  • In order to deal with the REMS process problems, the bill creates similar rights with similar limitations when the license holders and generics cannot come to an agreement about a shared REMS on commercially reasonable terms within 120 days of first contact by an eligible developer.
  • The bill also explicitly limits brand manufacturers’ liability for claims “arising out of the failure of an [eligible generic manufacturer] to follow adequate safeguards,” thus removing one of the (perfectly legitimate) objections to the bill pressed by brand manufacturers.

The primary remedy is limited, injunctive relief to end the delay. And brands are protected from frivolous litigation by an affirmative defense under which they need only show that the product is available for purchase on reasonable terms elsewhere. Damages are similarly limited and are awarded only if a court finds that the brand manufacturer lacked a legitimate business justification for its conduct (which, under the drug safety regime, means essentially a reasonable belief that its own REMS plan would be violated by dealing with the generic entrant). And monetary damages do not include punitive damages.

Finally, the proposed bill completely avoids the question of whether antitrust laws are applicable, leaving that possibility open to determination by courts — as is appropriate. Moreover, by establishing even more clearly the comprehensive regulatory regime governing potential generic entrants’ access to dangerous drugs, the bill would, given the holding in Trinko, probably make application of antitrust laws here considerably less likely.

Ultimately Senator Lee’s bill is a well-thought-out and targeted fix to an imperfect regulation that seems to be facilitating anticompetitive conduct by a few bad actors. It does so without trampling on the courts’ well-established antitrust jurisprudence, and without imposing excessive cost or risk on the majority of brand manufacturers that behave perfectly appropriately under the law.

Earlier this week I testified before the U.S. House Subcommittee on Commerce, Manufacturing, and Trade regarding several proposed FTC reform bills.

You can find my written testimony here. That testimony was drawn from a 100-page report, authored by Berin Szoka and me, entitled “The Federal Trade Commission: Restoring Congressional Oversight of the Second National Legislature — An Analysis of Proposed Legislation.” In the report we assess 9 of the 17 proposed reform bills in great detail, and offer a host of suggested amendments or additional reform proposals that, we believe, would help make the FTC more accountable to the courts. As I discuss in my oral remarks, that judicial oversight was part of the original plan for the Commission, and an essential part of ensuring that its immense discretion is effectively directed toward protecting consumers as technology and society evolve around it.

The report is “Report 2.0” of the FTC: Technology & Reform Project, which was convened by the International Center for Law & Economics and TechFreedom with an inaugural conference in 2013. Report 1.0 lays out some background on the FTC and its institutional dynamics, identifies the areas of possible reform at the agency, and suggests the key questions/issues each of them raises.

The text of my oral remarks follows, or, if you prefer, you can watch them here:

Chairman Burgess, Ranking Member Schakowsky, and Members of the Subcommittee, thank you for the opportunity to appear before you today.

I’m Executive Director of the International Center for Law & Economics, a non-profit, non-partisan research center. I’m a former law professor, I used to work at Microsoft, and I had what a colleague once called the most illustrious FTC career ever — because, at approximately 2 weeks, it was probably the shortest.

I’m not typically one to advocate active engagement by Congress in anything (no offense). But the FTC is different.

Despite Congressional reforms, the FTC remains the closest thing we have to a second national legislature. Its jurisdiction covers nearly every company in America. Section 5, at its heart, runs just 20 words — leaving the Commission enormous discretion to make policy decisions that are essentially legislative.

The courts were supposed to keep the agency on course. But they haven’t. As Former Chairman Muris has written, “the agency has… traditionally been beyond judicial control.”

So it’s up to Congress to monitor the FTC’s processes, and tweak them when the FTC goes off course, which is inevitable.

This isn’t a condemnation of the FTC’s dedicated staff. Rather, this one-way ratchet of ever-expanding discretion is simply the nature of the beast.

Yet too many people lionize the status quo. They see any effort to change the agency from the outside as an affront. It’s as if Congress were struck by a bolt of lightning in 1914 and the Perfect Platonic Agency sprang forth.

But in the real world, an agency with massive scope and discretion needs oversight — and feedback on how its legal doctrines evolve.

So why don’t the courts play that role? Companies essentially always settle with the FTC because of its exceptionally broad investigatory powers, its relatively weak standard for voting out complaints, and the fact that those decisions effectively aren’t reviewable in federal court.

Then there’s the fact that the FTC sits in judgment of its own prosecutions. So even if a company doesn’t settle and actually wins before the ALJ, FTC staff still wins 100% of the time before the full Commission.

Able though FTC staffers are, this can’t be from sheer skill alone.

Whether by design or by neglect, the FTC has become, as Chairman Muris again described it, “a largely unconstrained agency.”

Please understand: I say this out of love. To paraphrase Churchill, the FTC is the “worst form of regulatory agency — except for all the others.”

Eventually Congress had to course-correct the agency — to fix the disconnect and to apply its own pressure to refocus Section 5 doctrine.

So a heavily Democratic Congress pressured the Commission to adopt the Unfairness Policy Statement in 1980. The FTC promised to restrain itself by balancing the perceived benefits of its unfairness actions against the costs, and not acting when injury is insignificant or consumers could have reasonably avoided injury on their own. It is, inherently, an economic calculus.

But while the Commission pays lip service to the test, you’d be hard-pressed to identify how (or whether) it’s implemented it in practice. Meanwhile, the agency has essentially nullified the “materiality” requirement that it volunteered in its 1983 Deception Policy Statement.

Worst of all, Congress failed to anticipate that the FTC would resume exercising its vast discretion through what it now proudly calls its “common law of consent decrees” in data security cases.

Combined with a flurry of recommended best practices in reports that function as quasi-rulemakings, these settlements have enabled the FTC to circumvent both Congressional rulemaking reforms and meaningful oversight by the courts.

The FTC’s data security settlements aren’t an evolving common law. They’re a static statement of “reasonable” practices, repeated about 55 times over the past 14 years. At this point, it’s reasonable to assume that they apply to all circumstances — much like a rule (which is, more or less, the opposite of the common law).

Congressman Pompeo’s SHIELD Act would help curtail this practice, especially if amended to include consent orders and reports. It would also help focus the Commission on the actual elements of the Unfairness Policy Statement — which should be codified through Congressman Mullins’ SURE Act.

Significantly, only one data security case has actually come before an Article III court. The FTC trumpets Wyndham as an out-and-out win. But it wasn’t. In fact, the court agreed with Wyndham on the crucial point that prior consent orders were of little use in trying to understand the requirements of Section 5.

More recently the FTC suffered another rebuke. While it won its product design suit against Amazon, the Court rejected the Commission’s “fencing in” request to permanently hover over the company and micromanage practices that Amazon had already ended.

As the FTC grapples with such cutting-edge legal issues, it’s drifting away from the balance it promised Congress.

But Congress can’t fix these problems simply by telling the FTC to take its bedrock policy statements more seriously. Instead it must regularly reassess the process that’s allowed the FTC to avoid meaningful judicial scrutiny. The FTC requires significant course correction if its model is to move closer to a true “common law.”

While we all wait on pins and needles for the DC Circuit to issue its long-expected ruling on the FCC’s Open Internet Order, another federal appeals court has pushed back on Tom Wheeler’s FCC for its unremitting “just trust us” approach to federal rulemaking.

The case, round three of Prometheus, et al. v. FCC, involves the FCC’s long-standing rules restricting common ownership of local broadcast stations and their extension by Tom Wheeler’s FCC to the use of joint sales agreements (JSAs). (For more background see our previous post here). Once again the FCC lost (it’s now only 1 for 3 in this case…), as the Third Circuit Court of Appeals took the Commission to task for failing to establish that its broadcast ownership rules were still in the public interest, as required by law, before it decided to extend those rules.

While much of the opinion deals with the FCC’s unreasonable delay (of more than 7 years) in completing two Quadrennial Reviews in relation to its diversity rules, the court also vacated the FCC’s rule expanding its duopoly rule (or local television ownership rule) to ban joint sales agreements without first undertaking the reviews.

We (the International Center for Law and Economics, along with affiliated scholars of law, economics, and communications) filed an amicus brief arguing for precisely this result, noting that

the 2014 Order [] dramatically expands its scope by amending the FCC’s local ownership attribution rules to make the rule applicable to JSAs, which had never before been subject to it. The Commission thereby suddenly declares unlawful JSAs in scores of local markets, many of which have been operating for a decade or longer without any harm to competition. Even more remarkably, it does so despite the fact that both the DOJ and the FCC itself had previously reviewed many of these JSAs and concluded that they were not likely to lessen competition. In doing so, the FCC also fails to examine the empirical evidence accumulated over the nearly two decades some of these JSAs have been operating. That evidence shows that many of these JSAs have substantially reduced the costs of operating TV stations and improved the quality of their programming without causing any harm to competition, thereby serving the public interest.

The Third Circuit agreed that the FCC utterly failed to justify its continued foray into banning potentially pro-competitive arrangements, finding that

the Commission violated § 202(h) by expanding the reach of the ownership rules without first justifying their preexisting scope through a Quadrennial Review. In Prometheus I we made clear that § 202(h) requires that “no matter what the Commission decides to do to any particular rule—retain, repeal, or modify (whether to make more or less stringent)—it must do so in the public interest and support its decision with a reasoned analysis.” Prometheus I, 373 F.3d at 395. Attribution of television JSAs modifies the Commission’s ownership rules by making them more stringent. And, unless the Commission determines that the preexisting ownership rules are sound, it cannot logically demonstrate that an expansion is in the public interest. Put differently, we cannot decide whether the Commission’s rationale—the need to avoid circumvention of ownership rules—makes sense without knowing whether those rules are in the public interest. If they are not, then the public interest might not be served by closing loopholes to rules that should no longer exist.

Perhaps this decision will be a harbinger of good things to come. The FCC — and especially Tom Wheeler’s FCC — has a history of failing to justify its rules with anything approaching rigorous analysis. The Open Internet Order is a case in point. We will all be better off if courts begin to hold the Commission’s feet to the fire and throw out its rules when the FCC fails to do the work needed to justify them.