Archives For anticompetitive market distortions

In a recent post at the (appallingly misnamed) ProMarket blog (the blog of the Stigler Center at the University of Chicago Booth School of Business — George Stigler is rolling in his grave…), Marshall Steinbaum keeps alive the hipster-antitrust assertion that lax antitrust enforcement — this time in the labor market — is to blame for… well, most? all? of what’s wrong with “the labor market and the broader macroeconomic conditions” in the country.

In this entry, Steinbaum takes particular aim at the US enforcement agencies, which he claims do not consider monopsony power in merger review (and other antitrust enforcement actions) because their current consumer welfare framework somehow doesn’t recognize monopsony as a possible harm.

This will probably come as news to the agencies themselves, whose Horizontal Merger Guidelines devote an entire (albeit brief) section (section 12) to monopsony, noting that:

Mergers of competing buyers can enhance market power on the buying side of the market, just as mergers of competing sellers can enhance market power on the selling side of the market. Buyer market power is sometimes called “monopsony power.”

* * *

Market power on the buying side of the market is not a significant concern if suppliers have numerous attractive outlets for their goods or services. However, when that is not the case, the Agencies may conclude that the merger of competing buyers is likely to lessen competition in a manner harmful to sellers.

Steinbaum fails to mention the HMGs, but he does point to a US submission to the OECD to make his point. In that document, the agencies state that

The U.S. Federal Trade Commission (“FTC”) and the Antitrust Division of the Department of Justice (“DOJ”) [] do not consider employment or other non-competition factors in their antitrust analysis. The antitrust agencies have learned that, while such considerations “may be appropriate policy objectives and worthy goals overall… integrating their consideration into a competition analysis… can lead to poor outcomes to the detriment of both businesses and consumers.” Instead, the antitrust agencies focus on ensuring robust competition that benefits consumers and leave other policies such as employment to other parts of government that may be specifically charged with or better placed to consider such objectives.

Steinbaum, of course, cites only the first sentence. And he uses it as a jumping-off point to attack the notion that antitrust is an improper tool for labor market regulation. But if he had just read a little bit further in the (very short) document he cites, Steinbaum might have discovered that the US antitrust agencies have, in fact, challenged the exercise of collusive monopsony power in labor markets. As footnote 19 of the OECD submission notes:

Although employment is not a relevant policy goal in antitrust analysis, anticompetitive conduct affecting terms of employment can violate the Sherman Act. See, e.g., DOJ settlement with eBay Inc. that prevents the company from entering into or maintaining agreements with other companies that restrain employee recruiting or hiring; FTC settlement with ski equipment manufacturers settling charges that companies illegally agreed not to compete for one another’s ski endorsers or employees. (Emphasis added).

And, ironically, while asserting that labor market collusion doesn’t matter to the agencies, Steinbaum himself points to “the Justice Department’s 2010 lawsuit against Silicon Valley employers for colluding not to hire one another’s programmers.”

Steinbaum instead opts for a willful misreading of the first sentence of the OECD submission. But what the OECD document refers to, of course, are situations where two firms merge, no market power is created (either in input or output markets), but people are laid off because the merged firm does not need all of, say, the IT and human resources employees previously employed in the pre-merger world.

Does Steinbaum really think this is grounds for challenging the merger on antitrust grounds?

Actually, his post suggests that he does indeed think so, although he doesn’t come right out and say it. What he does say — as he must in order to bring antitrust enforcement to bear on the low- and unskilled labor markets (e.g., burger flippers; retail cashiers; Uber drivers) he purports to care most about — is that:

Employers can have that control [over employees, as opposed to independent contractors] without first establishing themselves as a monopoly—in fact, reclassification [of workers as independent contractors] is increasingly standard operating procedure in many industries, which means that treating it as a violation of Section 2 of the Sherman Act should not require that outright monopolization must first be shown. (Emphasis added).

Honestly, I don’t have any idea what he means. Somehow, because firms hire independent contractors where at one time long ago they might have hired employees… they engage in Sherman Act violations, even if they don’t have market power? Huh?

I get why he needs to try to make this move: As I intimated above, there is probably not a single firm in the world that hires low- or unskilled workers that has anything approaching monopsony power in those labor markets. Even Uber, the example he uses, has nothing like monopsony power, unless perhaps you define the market (completely improperly) as “drivers already working for Uber.” Even then Uber doesn’t have monopsony power: There can be no (or, at best, virtually no) markets in the world where an Uber driver has no other potential employment opportunities but working for Uber.

Moreover, how on earth is hiring independent contractors evidence of anticompetitive behavior? “Reclassification” is not, in fact, “standard operating procedure.” Rather, in many industries firms often (unilaterally) decide to contract out to specialized firms the hiring of low- and unskilled workers over whom they do not need to exercise direct oversight, thus not employing those workers directly. That isn’t “reclassification” of existing workers who have no choice but to accept their employer’s terms; it’s a long-term evolution of the economy toward specialization, enabled in part by technology.

And if we’re really concerned about what “employee” and “independent contractor” mean for workers and employment regulation, we should reconsider those outdated categories. Firms are faced with a binary choice: hire employees or engage independent contractors. Neither really fits many of today’s employment arrangements very well, but that’s the choice firms are given. That they sometimes choose “independent contractor” over “employee” is hardly evidence of anticompetitive conduct meriting antitrust enforcement.

The point is: The notion that any of this is evidence of monopsony power, or that the antitrust enforcement agencies don’t care about monopsony power — because, Bork! — is absurd.

Even more absurd is the notion that the antitrust laws should be used to effect Steinbaum’s preferred market regulations — independent of proof of actual anticompetitive effect. I get that it’s hard to convince Congress to pass the precise laws you want all the time. But simply routing around Congress and using the antitrust statutes as a sort of meta-legislation to enact whatever happens to be Marshall Steinbaum’s preferred regulation du jour is ridiculous.

Which is a point the OECD submission made (again, if only Steinbaum had read beyond the first sentence…):

[T]wo difficulties with expanding the scope of antitrust analysis to include employment concerns warrant discussion. First, a full accounting of employment effects would require consideration of short-term effects, such as likely layoffs by the merged firm, but also long-term effects, which could include employment gains elsewhere in the industry or in the economy arising from efficiencies generated by the merger. Measuring these effects would [be extremely difficult]. Second, unless a clear policy spelling out how the antitrust agency would assess the appropriate weight to give employment effects in relation to the proposed conduct or transaction’s procompetitive and anticompetitive effects could be developed, [such enforcement would be deeply problematic, and essentially arbitrary].

To be sure, the agencies don’t sufficiently acknowledge that they already face the problem of reconciling multidimensional effects — e.g., short-, medium-, and long-term price effects, innovation effects, product quality effects, etc. But there is no reason to exacerbate the problem by asking them to also consider employment effects. Especially not in Steinbaum’s world, in which certain employment effects are problematic even without evidence of market power or actual anticompetitive harm, just because he says so.

Consider how this might play out:

Suppose that Pepsi, Coca-Cola, Dr. Pepper… and every other soft drink company in the world attempted to merge, creating a monopoly soft drink manufacturer. In what possible employment market would even this merger create a monopsony in which anticompetitive harm could be tied to the merger? In the market for “people who know soft drink secret formulas?” Yet Steinbaum would have the Sherman Act enforced against such a merger not because it might create a product market monopoly, but because the existence of a product market monopoly means the firm must be able to do bad things in other markets, as well. For Steinbaum and all the other scolds who see concentration as the source of all evil, the dearth of evidence to support such a claim is no barrier (on which, see, e.g., this recent, content-less NYT article (that, naturally, quotes Steinbaum) on how “big business may be to blame” for the slowing rate of startups).

The point is, monopoly power in a product market does not necessarily have any relationship to monopsony power in the labor market. Simply asserting that it does — and lambasting the enforcement agencies for not just accepting that assertion — is farcical.

The real question, however, is what has happened to the University of Chicago that it continues to provide a platform for such nonsense?

Regardless of the merits and soundness (or lack thereof) of this week’s European Commission Decision in the Google Shopping case — one cannot assess this until we have the text of the decision — two comments really struck me during the press conference.

First, it was said that Google’s conduct had essentially reduced innovation. If I heard correctly, this is a remarkably strong claim. In 2016, another official EU service published stats that described Alphabet as increasing its R&D by 22% and ranked it as the world’s 4th top R&D investor. Sure, it can always be better. And sure, this does not excuse everything. But still. The press conference language on incentives to innovate was a bit of an oversell, to say the least.

Second, the Commission views this decision as a “precedent” or as a “framework” that will inform the way dominant Internet platforms should display, intermediate and market their services and those of their competitors. This may fuel additional complaints by other vertical search rivals against (i) Google in relation to other product lines, but also against (ii) other large platform players.

Beyond this, the Commission’s approach raises a gazillion questions of law and economics. Pending the disclosure of the economic evidence in the published decision, let me share some thoughts on a few (arbitrarily) selected legal issues.

First, the Commission has drawn the lesson of the Microsoft remedy quagmire and refrains from using a trustee to ensure compliance with the decision. This had been a bone of contention in the 2007 Microsoft appeal. Readers will recall that the Commission had required Microsoft to appoint a monitoring trustee, who was supposed to advise on possible infringements in the implementation of the decision. On appeal, the Court eventually held that the Commission was solely responsible for this, and could not delegate those powers. Sure, the Commission could “retai[n] its own external expert to provide advice when it investigates the implementation of the remedies.” But no more than that.

Second, we learn that the Commission is no longer in the business of software design. Recall the failed untying of WMP and Windows — Windows Naked sold only 11,787 copies, likely bought by tech bootleggers willing to acquire the first piece of software ever designed by antitrust officials — or the browser “Choice Screen” compliance saga, which eventually culminated in a €561 million fine. None of that is found here. The Commission leaves remedial design to the abstract concept of “equal treatment”.[1] This, certainly, is a (relatively) commendable approach, and one that could inspire remedies in other unilateral conduct cases, in particular exploitative conduct cases, where pricing remedies are costly, impractical, and ultimately inefficient.

On the other hand, readers will also not fail to see the corollary implication of “equal treatment”: search neutrality could actually cut both ways, and lead to a lawful degradation in consumer welfare if Google were ever to decide to abandon rich format displays for both its own shopping services and those of rivals.

Third, neither big data nor algorithmic design is directly vilified in the case (“The Commission Decision does not object to the design of Google’s generic search algorithms or to demotions as such, nor to the way that Google displays or organises its search results pages”). In fact, the Commission objects to the selective application of Google’s generic search algorithms to its own products. This is an interesting, and subtle, clarification given all the coverage that this topic has attracted in recent antitrust literature. We are in fact very close to a run-of-the-mill claim of disguised market manipulation, not causally related to data or algorithmic technology.

Fourth, Google said it contemplated a possible appeal of the decision. Now, here’s a challenging question: can an antitrust defendant effectively exercise its right to judicial review of an administrative agency (and more generally its rights of defense), when it operates under the threat of antitrust sanctions in ongoing parallel cases investigated by the same agency (i.e., the antitrust inquiries related to Android and Ads)? This question extends beyond the Google Shopping case. Say firm A contemplates a merger with firm B in market X, while it is at the same time subject to antitrust investigations in market Z. And assume that X and Z are neither substitutes nor complements, so there is little competitive relationship between the two products. Can the Commission leverage ongoing antitrust investigations in market Z to extract merger concessions in market X? Perhaps more to the point, can the firm interact with the Commission as if the investigations are completely distinct, or does it have to play a more nuanced game and consider the ramifications of its interactions with the Commission in both markets?

Fifth, as to the odds of a possible appeal, I don’t believe that arguments on the economic evidence or legal theory of liability will ever be successful before the General Court of the EU. The law and doctrine in unilateral conduct cases are disturbingly — and almost irrationally — severe. As I have noted elsewhere, the bottom line in the EU case-law on unilateral conduct is to treat the genuine requirement of “harm to competition” as a rhetorical question, not an empirical one. In EU unilateral conduct law, exclusion of any and every firm is a per se concern, regardless of evidence of efficiency, entry or rivalry.

In turn, I tend to think that Google has a stronger game from a procedural standpoint, having been left with (i) the expectation of a settlement (it played ball three times by making proposals); (ii) a corollary expectation of the absence of a fine (settlement discussions are not appropriate for cases that could end with fines); and (iii) seven long years under an investigatory cloud. We know from the past that EU judges like procedural issues, but are comparatively less keen to debate the substance of the law in unilateral conduct cases. This case could thus be a test case in terms of setting boundaries on how freely the Commission can make a U-turn in a case (the Commissioner said “take the case forward in a different way”).

Today, the Senate Committee on Health, Education, Labor, and Pensions (HELP) enters the drug pricing debate with a hearing on “The Cost of Prescription Drugs: How the Drug Delivery System Affects What Patients Pay.”  By questioning the role of the drug delivery system in pricing, the hearing goes beyond the narrower focus of recent hearings that have explored how drug companies set prices.  Instead, today’s hearing will explore how pharmacy benefit managers, insurers, providers, and others influence the amounts that patients pay.

In 2016, net U.S. drug spending increased by 4.8% to $323 billion (after adjusting for rebates and off-invoice discounts).  That was less than half the 2014 growth rate and roughly half the 2015 rate, when net drug spending grew by 10% and 8.9%, respectively.  Yet despite the slowdown in drug spending growth, the public outcry over the cost of prescription drugs continues.

In today’s hearing, there will be testimony both on the various causes of drug spending increases and on various proposals that could reduce the cost of drugs.  Several of the proposals will focus on ways to increase competition in the pharmaceutical industry, and in turn, reduce drug prices.  I have previously explained several ways that the government could reduce prices through enhanced competition, including reducing the backlog of generic drugs awaiting FDA approval and expediting the approval and acceptance of biosimilars.  Other proposals today will likely call for regulatory reforms to enable innovative contractual arrangements that allow for outcome- or indication-based pricing and other novel reimbursement designs.

However, some proposals will undoubtedly return to the familiar call for more government negotiation of drug prices, especially drugs covered under Medicare Part D.  As I’ve discussed in a previous post, in order for government negotiation to significantly lower drug prices, the government must be able to put pressure on drug makers to secure price concessions. This could be achieved if the government could set prices administratively, penalize manufacturers that don’t offer price reductions, or establish a formulary.  Setting prices or penalizing drug makers that don’t reduce prices would produce the same disastrous effects as price controls: drug shortages in certain markets, increased prices for non-Medicare patients, and reduced incentives for innovation. A government formulary for Medicare Part D coverage would provide leverage to obtain discounts from manufacturers, but it would mean that many patients could no longer access some of their optimal drugs.

As lawmakers seriously consider changes that would produce these negative consequences, industry would do well to voluntarily constrain prices.  Indeed, in the last year, many drug makers have pledged to limit price increases to keep drug spending under control.  Allergan was first, with its “social contract” introduced last September that promised to keep price increases below 10 percent. Since then, Novo Nordisk, AbbVie, and Takeda have also voluntarily committed to single-digit price increases.

So far, the evidence shows the drug makers are sticking to their promises. Allergan has raised the price of U.S. branded products by an average of 6.7% in 2017, and no drug’s list price has increased by more than single digits.  In contrast, Pfizer, which has made no pricing commitment, has raised the price of many of its drugs by 20%.

If more drug makers brought about meaningful change by committing to voluntary pricing restraints, the industry could prevent the market-distorting consequences of government intervention while helping patients afford the drugs they need.   Moreover, avoiding intrusive government mandates and price controls would preserve drug innovation that has brought life-saving and life-enhancing drugs to millions of Americans.


Nicolas Petit is Professor of Law at the University of Liege (Belgium) and Research Professor at the University of South Australia (UniSA)

This symposium offers a good opportunity to look again into the complex relation between concentration and innovation in antitrust policy. Whilst the details of the EC decision in Dow/Dupont remain unknown, the press release suggests that the issue of “incentives to innovate” was central to the review. Contrary to what had leaked in the antitrust press, the decision has apparently backed off from the introduction of a new “model”, and instead followed a more cautious approach. After a quick reminder of the conventional “appropriability v cannibalization” framework that drives merger analysis in innovation markets (1), I make two sets of hopefully innovative remarks, on appropriability and IP rights (2) and on cannibalization in the ag-biotech sector (3).

Appropriability versus cannibalization

Antitrust economics 101 teaches that mergers affect innovation incentives in two polar ways. A merger may increase innovation incentives. This occurs when the increment in power over price or output achieved through merger enhances the appropriability of the social returns to R&D. The appropriability effect of mergers is often tied to Joseph Schumpeter, who observed that the use of “protecting devices” for past investments like patent protection or trade secrecy constituted a “normal elemen[t] of rational management”. The appropriability effect can in principle be observed at both the firm level (specific incentives) and the industry level (general incentives), because actual or potential competitors can also use the M&A market to appropriate the payoffs of R&D investments.

But a merger may decrease innovation incentives. This happens when the increased industry position achieved through merger discourages the introduction of new products, processes or services. This is because an invention will cannibalize the merged entity’s profits in larger proportions than would be the case in a more competitive market structure. This idea is often tied to Kenneth Arrow, who famously observed that a “preinvention monopoly power acts as a strong disincentive to further innovation”.
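
To make the intuition concrete, here is a stylized textbook rendering of Arrow’s replacement effect (my own illustration; the notation is not drawn from the decision or from Arrow’s paper). Let $\pi_{\text{pre}}$ be an incumbent monopolist’s pre-invention flow profit and $\pi_{\text{post}}$ the flow profit a drastic invention would earn its owner. The incremental payoffs from innovating are then:

\[
\Delta_{\text{entrant}} = \pi_{\text{post}} - 0 = \pi_{\text{post}},
\qquad
\Delta_{\text{monopolist}} = \pi_{\text{post}} - \pi_{\text{pre}} < \Delta_{\text{entrant}}.
\]

Because the invention partly replaces profits the monopolist already earns, its incremental payoff is smaller than an entrant’s; that wedge, $\pi_{\text{pre}}$, is the cannibalization effect.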

Schumpeter’s appropriability hypothesis and Arrow’s cannibalization theory continue to drive much of the discussion on concentration and innovation in antitrust economics. True, many efforts have been made to overcome, reconcile or bypass both views of the world. Recent studies by Carl Shapiro or Jon Baker are worth mentioning. But Schumpeter and Arrow remain sticky references in any discussion of the issue. Perhaps more than anything, the persistence of their ideas suggests that each hit on something fundamental in his seminal contribution, laying down two systems of belief about the workings of innovation-driven markets.

Now, beyond the theory, the gravitational models of appropriability and cannibalization provide from the outset an appealing framework for the examination of mergers in R&D-driven industries in general. From an operational perspective, the antitrust agency will attempt to understand whether the transaction increases appropriability – which leans in favour of clearance – or cannibalization – which leans in favour of remediation. At the same time, however, the downside of the appropriability v cannibalization framework (and of any framework more generally) may be to oversimplify our understanding of complex phenomena. This, in turn, prompts two important observations, one on each branch of the framework.

Appropriability and IP rights

Any antitrust agency committed to promoting competition and innovation should consider mergers in light of the degree of appropriability afforded by existing protecting devices (essentially contracts and entitlements). This is where Intellectual Property (“IP”) rights become relevant to the discussion. In an industry with strong IP rights, the merging parties (and their rivals) may be able to appropriate the social returns to R&D without further corporate concentration. Put differently, the stronger the IP rights, the lower the incremental contribution of a merger transaction to innovation, and the higher the case for remediation.

This latter proposition, however, rests on a heavy assumption: that IP rights confer perfect appropriability. The point is, however, far from obvious. Most of us know that – and our antitrust agencies’ misgivings with other sectors confirm it – IP rights are probabilistic in nature. There is (i) no certainty that R&D investments will lead to commercially successful applications; (ii) no guarantee that IP rights will survive invalidity proceedings in court; (iii) little protection against competition from other products that do not practice the IP but provide substitute functionality; and (iv) no assurance that the environmental, toxicological and regulatory authorizations that (often) accompany IP rights will not be cancelled when legal requirements change. Arrow himself called for caution, noting that “Patent laws would have to be unimaginably complex and subtle to permit [such] appropriation on a large scale”. A thorough inquiry into the specific industry-strength of IP rights that goes beyond patent data and statistics thus constitutes a necessary step in merger review.

But it is not a sufficient one. The proposition that strong IP rights provide appropriability is essentially valid if the observed pre-merger market situation is one where several IP owners compete on differentiated products and as a result wield a degree of market power. In contrast, the proposition is essentially invalid if the observed pre-merger market situation leans more towards the competitive equilibrium and IP owners compete at prices closer to costs. In both variants, the agency should thus look carefully at the level and evolution of prices and costs, including R&D costs, in the pre-merger industry. Moreover, in the second variant, the agency ought to consider as a favourable appropriability factor not only any increase in the merging entity’s power over price, but also any improvement in its power over cost. By this, I have in mind efficiency benefits, which can arise as the result of economies of scale (in manufacturing but also in R&D), but also when the transaction combines complementary technological and marketing assets. In Dow/Dupont, no efficiency argument has apparently been made by the parties, so it is difficult to know whether and how such issues played a role in the Commission’s assessment.

Cannibalization, technological change, and drastic innovation

Arrow’s cannibalization theory – namely that a pre-invention monopoly acts as a strong disincentive to further innovation – fails to capture that successful inventions create new technology frontiers, and with them entirely novel needs that even a monopolist has an incentive to serve. This can be understood with an example taken from the ag-biotech field. It is undisputed that progress in crop protection science has led to an expanding range of resistant insects, weeds, and pathogens. This, in turn, is one of the key drivers (if not the main driver) of ag-tech research. In a 2017 paper published in Pest Management Science, Sparks and Lorsbach observe that:

resistance to agrochemicals is an ongoing driver for the development of new chemical control options, along with an increased emphasis on resistance management and how these new tools can fit into resistance management programs. Because resistance is such a key driver for the development of new agrochemicals, a highly prized attribute for a new agrochemical is a new MoA [mode of action] that is ideally a new molecular target either in an existing target site (e.g., an unexploited binding site in the voltage-gated sodium channel), or new/under-utilized target site such as calcium channels.

This, and other factors, leads them to conclude that:

even with fewer companies overall involved in agrochemical discovery, innovation continues, as demonstrated by the continued introduction of new classes of agrochemicals with new MoAs.

Sparks, Hahn, and Garizi make a similar point. They stress in particular that the discovery of natural products (NPs), which are the “output of nature’s chemical laboratory”, is today a main driver of crop protection research. According to them:

NPs provide very significant value in identifying new MoAs, with 60% of all agrochemical MoAs being, or could have been, defined by a NP. This information again points to the importance of NPs in agrochemical discovery, since new MoAs remain a top priority for new agrochemicals.

More generally, the point is not that Arrow’s cannibalization theory is wrong. Arrow’s work convincingly explains monopolists’ low incentives to invest in substitute invention. Instead, the point is that Arrow’s cannibalization theory is narrower than often assumed in the antitrust policy literature. Admittedly, Arrow’s cannibalization theory is relevant in industries primarily driven by a process of cumulative innovation. But it is much less helpful to understand the incentives of a monopolist in industries subject to technological change. As a result of this, the first question that should guide an antitrust agency investigation is empirical in nature: is the industry under consideration one driven by cumulative innovation, or one where technology disruption, shocks, and serendipity incentivize drastic innovation?

Note that exogenous factors beyond technological frontiers also promote drastic innovation. This point ought not to be overlooked. A sizeable amount of the specialist scientific literature stresses the powerful innovation incentives created by changing dietary habits, new diseases (e.g. the Zika virus), global population growth, and environmental challenges like climate change and weather extremes. In 2015, Jeschke noted:

In spite of the significant consolidation of the agrochemical companies, modern agricultural chemistry is vital and will have the opportunity to shape the future of agriculture by continuing to deliver further innovative integrated solutions. 

Words of wise caution for antitrust agencies tasked with the complex mission of reviewing mergers in the ag-biotech industry?

In a weekend interview with the Washington Post, Donald Trump vowed to force drug companies to negotiate directly with the government on prices in Medicare and Medicaid.  It’s unclear what, if anything, Trump intends for Medicaid; drug makers are already required to sell drugs to Medicaid at the lowest price they negotiate with any other buyer.  For Medicare, Trump didn’t offer any more details about the intended negotiations, but he’s referring to his campaign proposals to allow the Department of Health and Human Services (HHS) to negotiate directly with manufacturers the prices of drugs covered under Medicare Part D.

Such proposals have been around for quite a while.  As soon as the Medicare Modernization Act (MMA) of 2003 was enacted, creating the Medicare Part D prescription drug benefit, many lawmakers began advocating for government negotiation of drug prices. Both Hillary Clinton and Bernie Sanders favored this approach during their campaigns, and the Obama Administration’s proposed budgets for fiscal years 2016 and 2017 included a provision that would have allowed HHS to negotiate prices for a subset of drugs: biologics and certain high-cost prescription drugs.

However, federal law would have to change if there is to be any government negotiation of drug prices under Medicare Part D. Congress explicitly included a “noninterference” clause in the MMA that stipulates that HHS “may not interfere with the negotiations between drug manufacturers and pharmacies and PDP sponsors, and may not require a particular formulary or institute a price structure for the reimbursement of covered part D drugs.”

Most people don’t understand what it means for the government to “negotiate” drug prices and the implications of the various options.  Some proposals would simply eliminate the MMA’s noninterference clause and allow HHS to negotiate prices for a broad set of drugs on behalf of Medicare beneficiaries.  However, the Congressional Budget Office has already concluded that such a plan would have “a negligible effect on federal spending” because it is unlikely that HHS could achieve deeper discounts than the current private Part D plans (there are 746 such plans in 2017).  The private plans are currently able to negotiate significant discounts from drug manufacturers by offering preferred formulary status for their drugs and channeling enrollees to the formulary drugs with lower cost-sharing incentives. In most drug classes, manufacturers compete intensely for formulary status and offer considerable discounts to be included.

The private Part D plans are required to cover only two drugs in each of several drug classes, giving the plans significant bargaining power over manufacturers by threatening to exclude their drugs.  However, in six protected classes (immunosuppressant, anti-cancer, anti-retroviral, antidepressant, antipsychotic and anticonvulsant drugs), private Part D plans must include “all or substantially all” drugs, thereby eliminating their bargaining power and ability to achieve significant discounts.  Although the purpose of the limitation is to prevent plans from cherry-picking customers by denying coverage of certain high-cost drugs, giving the private Part D plans more ability to exclude drugs in the protected classes should increase competition among manufacturers for formulary status and, in turn, lower prices.  And it’s important to note that these price reductions would not involve any government negotiation or intervention in Medicare Part D.  However, as discussed below, excluding more drugs in the protected classes would reduce the value of the Part D plans to many patients by limiting access to preferred drugs.

For government negotiation to make any real difference on Medicare drug prices, HHS must have the ability to not only negotiate prices, but also to put some pressure on drug makers to secure price concessions.  This could be achieved by allowing HHS to also establish a formulary, set prices administratively, or take other regulatory actions against manufacturers that don’t offer price reductions.  Setting prices administratively or penalizing manufacturers that don’t offer satisfactory reductions would be tantamount to a price control.  I’ve previously explained that price controls—whether direct or indirect—are a bad idea for prescription drugs for several reasons. Evidence shows that price controls lead to higher initial launch prices for drugs, increased drug prices for consumers with private insurance coverage,  drug shortages in certain markets, and reduced incentives for innovation.

Giving HHS the authority to establish a formulary for Medicare Part D coverage would provide leverage to obtain discounts from manufacturers, but it would produce other negative consequences.  Currently, private Medicare Part D plans cover an average of 85% of the 200 most popular drugs, with some plans covering as much as 93%.  In contrast, the drug benefit offered by the Department of Veterans Affairs (VA), one government program that is able to set its own formulary to achieve leverage over drug companies, covers only 59% of the 200 most popular drugs.  The VA’s ability to exclude drugs from the formulary has generated significant price reductions. Indeed, estimates suggest that if the Medicare Part D formulary were restricted to the VA offerings and obtained similar price reductions, it would save Medicare Part D $510 per beneficiary.  However, the loss of access to so many popular drugs would reduce the value of the Part D plans by $405 per enrollee, greatly narrowing the net gains (to roughly $105 per beneficiary).

History has shown that consumers don’t like their access to drugs reduced.  In 2014, Medicare proposed to take antidepressant, antipsychotic and immunosuppressant drugs off the protected list, thereby allowing the private Part D plans to reduce offerings of these drugs on the formulary and, in turn, reduce prices.  However, patients and their advocates were outraged at the possibility of losing access to their preferred drugs, and the proposal was quickly withdrawn.

Thus, allowing the government to negotiate prices under Medicare Part D could carry important negative consequences.  Policy-makers must fully understand what it means for government to negotiate directly with drug makers, and what the potential consequences are for price reductions, access to popular drugs, drug innovation, and drug prices for other consumers.

On November 9, pharmaceutical stocks soared as Donald Trump’s election victory eased concerns about government intervention in drug pricing. Shares of Pfizer rose 8.5%, Allergan PLC was up 8%, and biotech Celgene jumped 10.4%. Drug distributors also gained, with McKesson up 6.4% and Express Scripts climbing 3.4%. Throughout the campaign, Clinton had vowed to take on the pharmaceutical industry and proposed various reforms to rein in drug prices, from levying fines on drug companies that imposed unjustified price increases to capping patients’ annual expenditures on drugs. Pharmaceutical stocks had generally underperformed this year as the market, like much of America, awaited a Clinton victory.

In contrast, Trump generally had less to say on the subject of drug pricing, hence the market’s favorable response to his unexpected victory. Yet, as the end of the first post-election month draws near, we are still uncertain whether Trump is friend or foe to the pharmaceutical industry. Trump’s only proposal that directly impacts the industry would allow the government to negotiate the prices of Medicare Part D drugs with drug makers. Although this proposal would likely have little impact on prices because existing Part D plans already negotiate prices with drug makers, there is a risk that this “negotiation” could ultimately lead to price controls imposed on the industry. And as I have previously discussed, price controls—whether direct or indirect—are a bad idea for prescription drugs: they lead to higher initial launch prices for drugs, increased drug prices for consumers with private insurance coverage, drug shortages in certain markets, and reduced incentives for innovation.

Several of Trump’s other health proposals have mixed implications for the industry. For example, a repeal or overhaul of the Affordable Care Act could eliminate the current tax on drug makers and loosen requirements for Medicaid drug rebates and Medicare Part D discounts. On the other hand, if repealing the ACA reduces the number of people insured, spending on pharmaceuticals would fall. Similarly, if Trump renegotiates international trade deals, pharmaceutical firms could benefit from stronger markets or longer patent exclusivity rights, or they could suffer if foreign countries abandon trade agreements altogether or retaliate with disadvantageous terms.

Yet, with drug spending up 8.5 percent last year and recent pricing scandals sparked by 500-plus percent increases in the prices of individual drugs (e.g., Martin Shkreli, Valeant Pharmaceuticals, Mylan), the current debate over drug pricing is unlikely to fade. Even a Republican-led Congress and White House are likely to heed the public outcry and do something about drug prices.

Drug makers would be wise to stave off any government-imposed price restrictions by voluntarily limiting price increases on important drugs. Major pharmaceutical company Allergan has recently done just this by issuing a “social contract with patients” that made several drug pricing commitments to its customers. Among other assurances, Allergan has promised to limit price increases to single-digit percentage increases and to no longer engage in the common industry tactic of dramatically increasing prices for branded drugs nearing patent expiry. Last year, throughout the pharmaceutical industry, the prices of the most commonly used brand drugs increased by over 16 percent and, in the last two years before patent expiry, drug makers increased the list prices of drugs by an average of 35 percent. Thus, Allergan’s commitment will produce significant savings over the life of a product, creating hundreds of millions of dollars in savings to health plans, patients, and the health care system.

If Allergan can make this commitment for its entire drug inventory—more than 80 drugs—why haven’t other companies done the same? Similar commitments by other drug makers might be enough to prevent lawmakers from turning to market-distorting reforms, such as price controls, that could end up doing more harm than good for consumers, the pharmaceutical industry, and long-term innovation.

As Truth on the Market readers prepare to enjoy their Thanksgiving dinners, let me offer some (hopefully palatable) “food for thought” on a competition policy for the new Trump Administration.  In referring to competition policy, I refer not just to lawsuits directed against private anticompetitive conduct, but more broadly to efforts aimed at curbing government regulatory barriers that undermine the competitive process.

Public regulatory barriers are a huge problem.  Their costs have been highlighted by prestigious international research bodies such as the OECD and World Bank, and considered by the International Competition Network’s Advocacy Working Group.  Government-imposed restrictions on competition benefit powerful incumbents and stymie entry by innovative new competitors.  (One manifestation of this that is particularly harmful for American workers and denies job opportunities to millions of lower-income Americans is occupational licensing, whose increasing burdens are delineated in a substantial body of research – see, for example, a 2015 Obama Administration White House Report and a 2016 Heritage Foundation Commentary that explore the topic.)  Federal Trade Commission (FTC) and Justice Department (DOJ) antitrust officials should consider emphasizing “state action” lawsuits aimed at displacing entry barriers and other unwarranted competitive burdens imposed by self-interested state regulatory boards.  When the legal prerequisites for such enforcement actions are not met, the FTC and the DOJ should ramp up their “competition advocacy” efforts, with the aim of convincing state regulators to avoid adopting new restraints on competition – and, where feasible, eliminating or curbing existing restraints.

The FTC and DOJ also should be authorized by the White House to pursue advocacy initiatives whose goal is to dismantle or lessen the burden of excessive federal regulations (such advocacy played a role in furthering federal regulatory reform during the Ford and Carter Administrations).  To bolster those initiatives, the Trump Administration should consider establishing a high-level federal task force on procompetitive regulatory reform, in the spirit of previous reform initiatives.  The task force would report to the president and include senior-level representatives from all federal agencies with regulatory responsibilities.  The task force could examine all major regulatory and statutory schemes overseen by Executive Branch and independent agencies, and develop a list of specific reforms designed to reduce federal regulatory impediments to robust competition.  Those reforms could be implemented through specific regulatory changes or legislative proposals, as the case might require.  The task force would have ample material to work with – for example, anticompetitive cartel-like output restrictions, such as those allowed under federal agricultural orders, are especially pernicious.  In addition to specific cartel-like programs, scores of regulatory regimes administered by individual federal agencies impose huge costs and merit particular attention, as documented in the Heritage Foundation’s annual “Red Tape Rising” reports on the growing burden of federal regulation (see, for example, the 2016 edition of Red Tape Rising).

With respect to traditional antitrust enforcement, the Trump Administration should emphasize sound, empirically-based economic analysis in merger and non-merger enforcement.  It should also adopt a “decision-theoretic” approach to enforcement, to the greatest extent feasible.  Specifically, in developing their enforcement priorities, in considering case selection criteria, and in assessing possible new (or amended) antitrust guidelines, DOJ and FTC antitrust enforcers should recall that antitrust is, like all administrative systems, inevitably subject to error costs.  Accordingly, Trump Administration enforcers should be mindful of the outstanding insights provided by Judge (and Professor) Frank Easterbrook on the harm from false positives in enforcement (which are more easily corrected by market forces than false negatives), and by Justice (and Professor) Stephen Breyer on the value of bright-line rules and safe harbors, supported by sound economic analysis.  As to specifics, the DOJ and FTC should issue clear statements of policy on the great respect that should be accorded the exercise of intellectual property rights, to correct Obama antitrust enforcers’ poor record on intellectual property protection (see, for example, here).  The DOJ and the FTC should also accord greater respect to the efficiencies associated with unilateral conduct by firms possessing market power, and should consider reissuing an updated and revised version of the 2008 DOJ Report on Single Firm Conduct.

With regard to international competition policy, procedural issues should be accorded high priority.  Full and fair consideration by enforcers of all relevant evidence (especially economic evidence) and the views of all concerned parties ensures that sound analysis is brought to bear in enforcement proceedings and, thus, that errors in antitrust enforcement are minimized.  Regrettably, a lack of due process in foreign antitrust enforcement has become a matter of growing concern to the United States, as foreign competition agencies proliferate and increasingly bring actions against American companies.  Thus, the Trump Administration should make due process problems in antitrust a major enforcement priority.  White House-level support (ensuring the backing of other key Executive Branch departments engaged in foreign economic policy) for this priority may be essential, in order to strengthen the U.S. Government’s hand in negotiations and consultations with foreign governments on process-related concerns.

Finally, other international competition policy matters also merit close scrutiny by the new Administration.  These include such issues as the inappropriate imposition of extraterritorial remedies on American companies by foreign competition agencies; the harmful impact of anticompetitive foreign regulations on American businesses; and inappropriate attacks on the legitimate exercise of intellectual property by American firms (in particular, American patent holders).  As in the case of process-related concerns, White House attention and broad U.S. Government involvement in dealing with these problems may be essential.

That’s all for now, folks.  May you all enjoy your turkey and have a blessed Thanksgiving with friends and family.

Next week the FCC is slated to vote on the second iteration of Chairman Wheeler’s proposed broadband privacy rules. Of course, as has become all too common, none of us outside the Commission has actually seen the proposal. But earlier this month Chairman Wheeler released a Fact Sheet that suggests some of the ways it would update the rules he initially proposed.

According to the Fact Sheet, the new proposed rules are

designed to evolve with changing technologies and encourage innovation, and are in harmony with other key privacy frameworks and principles — including those outlined by the Federal Trade Commission and the Administration’s Consumer Privacy Bill of Rights.

Unfortunately, the Chairman’s proposal appears to fall short of the mark on both counts.

As I discuss in detail in a letter filed with the Commission yesterday, despite the Chairman’s rhetoric, the rules described in the Fact Sheet fail to align with the FTC’s approach to privacy regulation embodied in its 2012 Privacy Report in at least two key ways:

  • First, the Fact Sheet significantly expands the scope of information that would be considered “sensitive” beyond that contemplated by the FTC. That, in turn, would impose onerous and unnecessary consumer consent obligations on commonplace uses of data, undermining consumer welfare, depriving consumers of information and access to new products and services, and restricting competition.
  • Second, unlike the FTC’s framework, the proposal described by the Fact Sheet ignores the crucial role of “context” in determining the appropriate level of consumer choice before affected companies may use consumer data. Instead, the Fact Sheet takes a rigid, acontextual approach that would stifle innovation and harm consumers.

The Chairman’s proposal moves far beyond the FTC’s definition of “sensitive” information requiring “opt-in” consent

The FTC’s privacy guidance is, in its design at least, appropriately flexible, aimed at balancing the immense benefits of information flows with sensible consumer protections. Thus it eschews an “inflexible list of specific practices” that would automatically trigger onerous consent obligations and “risk[] undermining companies’ incentives to innovate and develop new products and services….”

Under the FTC’s regime, depending on the context in which it is used (on which see the next section, below), the sensitivity of data delineates the difference between data uses that require “express affirmative” (opt-in) consent and those that do not (requiring only “other protections” short of opt-in consent — e.g., opt-out).

Because the distinction is so important — because opt-in consent is much more likely to staunch data flows — the FTC endeavors to provide guidance as to what data should be considered sensitive, and to cabin the scope of activities requiring opt-in consent. Thus, the FTC explains that “information about children, financial and health information, Social Security numbers, and precise geolocation data [should be treated as] sensitive.” But beyond those instances, the FTC doesn’t consider any other type of data as inherently sensitive.

By contrast, and without explanation, Chairman Wheeler’s Fact Sheet significantly expands what constitutes “sensitive” information requiring “opt-in” consent by adding “web browsing history,” “app usage history,” and “the content of communications” to the list of categories of data deemed sensitive in all cases.

By treating some of the most common and important categories of data as always “sensitive,” and by making the sensitivity of data the sole determinant for opt-in consent, the Chairman’s proposal would make it almost impossible for ISPs to make routine (to say nothing of innovative), appropriate, and productive uses of data comparable to those undertaken by virtually every major Internet company.  This goes well beyond anything contemplated by the FTC — with no evidence of any corresponding benefit to consumers and with obvious harm to competition, innovation, and the overall economy online.

And because the Chairman’s proposal would impose these inappropriate and costly restrictions only on ISPs, it would create a barrier to competition by ISPs in other platform markets, without offering a defensible consumer protection rationale to justify either the disparate treatment or the restriction on competition.

As Fred Cate and Michael Staten have explained,

“Opt-in” offers no greater privacy protection than allowing consumers to “opt-out”…, yet it imposes significantly higher costs on consumers, businesses, and the economy.

Not surprisingly, these costs fall disproportionately on the relatively poor and the less technology-literate. In the former case, opt-in requirements may deter companies from offering services at all, even to people who would make a very different trade-off between privacy and monetary price. In the latter case, because an initial decision to opt in must be made in relative ignorance, users without much experience to guide their decisions will face effectively higher decision-making costs than more knowledgeable users.

The Chairman’s proposal ignores the central role of context in the FTC’s privacy framework

In part for these reasons, central to the FTC’s more flexible framework is the establishment of a sort of “safe harbor” for data uses where the benefits clearly exceed the costs and consumer consent may be inferred:

Companies do not need to provide choice before collecting and using consumer data for practices that are consistent with the context of the transaction or the company’s relationship with the consumer….

Thus for many straightforward uses of data, the “context of the transaction,” not the asserted “sensitivity” of the underlying data, is the threshold question in evaluating the need for consumer choice in the FTC’s framework.

Chairman Wheeler’s Fact Sheet, by contrast, ignores this central role of context in its analysis. Instead, it focuses solely on data sensitivity, claiming that doing so is “in line with customer expectations.”

But this is inconsistent with the FTC’s approach.

In fact, the FTC’s framework explicitly rejects a pure “consumer expectations” standard:

Rather than relying solely upon the inherently subjective test of consumer expectations, the… standard focuses on more objective factors related to the consumer’s relationship with a business.

And while everyone agrees that sensitivity is a key part of pegging privacy regulation to actual consumer and corporate relationships, the FTC also recognizes that the importance of the sensitivity of the underlying data varies with the context in which it is used. Or, in the words of the White House’s 2012 Consumer Data Privacy in a Networked World Report (introducing its Consumer Privacy Bill of Rights), “[c]ontext should shape the balance and relative emphasis of particular principles” guiding the regulation of privacy.

By contrast, Chairman Wheeler’s “sensitivity-determines-consumer-expectations” framing is a transparent attempt to claim fealty to the FTC’s (and the Administration’s) privacy standards while actually implementing a privacy regime that is flatly inconsistent with them.

The FTC’s approach isn’t perfect, but that’s no excuse to double down on its failings

The FTC’s privacy guidance, and even more so its privacy enforcement practices under Section 5, are far from perfect. The FTC should be commended for its acknowledgement that consumers’ privacy preferences and companies’ uses of data will change over time, and that there are trade-offs inherent in imposing any constraints on the flow of information. But even the FTC fails to actually assess the magnitude of the costs and benefits of, and the deep complexities involved in, the trade-off, and puts an unjustified thumb on the scale in favor of limiting data use.  

But that’s no excuse for Chairman Wheeler to ignore what the FTC gets right, and to double down on its failings. Based on the Fact Sheet (and the initial NPRM), it’s a virtual certainty that the Chairman’s proposal doesn’t heed the FTC’s refreshing call for humility and flexibility regarding the application of privacy rules to ISPs (and other Internet platforms):

These are complex and rapidly evolving areas, and more work should be done to learn about the practices of all large platform providers, their technical capabilities with respect to consumer data, and their current and expected uses of such data.

The rhetoric of the Chairman’s Fact Sheet is correct: the FCC should in fact conform its approach to privacy to the framework established by the FTC. Unfortunately, the reality of the Fact Sheet simply doesn’t comport with its rhetoric.

As the FCC’s vote on the Chairman’s proposal rapidly nears, and in light of its significant defects, we can only hope that the rest of the Commission refrains from reflexively adopting the proposed regime, and works to ensure that these problematic deviations from the FTC’s framework are addressed before moving forward.

Today ICLE released a white paper entitled, A critical assessment of the latest charge of Google’s anticompetitive bias from Yelp and Tim Wu.

The paper is a comprehensive response to a study by Michael Luca, Timothy Wu, Sebastian Couvidat, Daniel Frank, & William Seltzer, entitled, Is Google degrading search? Consumer harm from Universal Search.

The Wu et al. paper will be one of the main topics of discussion at today’s Capitol Forum and George Washington Institute of Public Policy event on Dominant Platforms Under the Microscope: Policy Approaches in the US and EU, at which I will be speaking — along with a host of luminaries including, inter alia, Josh Wright, Jonathan Kanter, Allen Grunes, Catherine Tucker, and Michael Luca, one of the authors of the Universal Search study.

Follow the link above to register — the event starts at noon today at the National Press Club.

Meanwhile, here’s a brief description of our paper:

Late last year, Tim Wu of Columbia Law School (and now the White House Office of Management and Budget), Michael Luca of Harvard Business School (and a consultant for Yelp), and a group of Yelp data scientists released a study claiming that Google has been purposefully degrading search results from its more-specialized competitors in the area of local search. The authors’ claim is that Google is leveraging its dominant position in general search to thwart competition from specialized search engines by favoring its own, less-popular, less-relevant results over those of its competitors:

To improve the popularity of its specialized search features, Google has used the power of its dominant general search engine. The primary means for doing so is what is called the “universal search” or the “OneBox.”

This is not a new claim, and researchers have been attempting (and failing) to prove Google’s “bias” for some time. Likewise, these critics have drawn consistent policy conclusions from their claims, asserting that antitrust violations lie at the heart of the perceived bias. But the studies are systematically marred by questionable methodology and bad economics.

This latest study by Tim Wu, along with a cadre of researchers employed by Yelp (one of Google’s competitors and one of its chief antitrust provocateurs), fares no better, employing slightly different but equally questionable methodology, bad economics, and a smattering of new, but weak, social science. (For a thorough criticism of the inherent weaknesses of Wu et al.’s basic social science methodology, see Miguel de la Mano, Stephen Lewis, and Andrew Leyden, Focus on the Evidence: A Brief Rebuttal of Wu, Luca, et al. (2016), available here.)

The basic thesis of the study is that Google purposefully degrades its local searches (e.g., for restaurants, hotels, services, etc.) to the detriment of its specialized search competitors, local businesses, consumers, and even Google’s bottom line — and that this is an actionable antitrust violation.

But in fact the study shows nothing of the kind. Instead, the study is marred by methodological problems that, in the first instance, make it impossible to draw any reliable conclusions. Nor does the study show that Google’s conduct creates any antitrust-relevant problems. Rather, the construction of the study and the analysis of its results reflect a superficial and inherently biased conception of consumer welfare that completely undermines the study’s purported legal and economic conclusions.

Read the whole thing here.

Brand drug manufacturers are no strangers to antitrust accusations when it comes to their complicated relationship with generic competitors — most obviously with respect to reverse payment settlements. But the massive and massively complex regulatory scheme under which drugs are regulated has provided other opportunities for regulatory legerdemain with potentially anticompetitive effect, as well.

In particular, some FTC Commissioners have raised concerns that brand drug companies have been taking advantage of an FDA drug safety program — the Risk Evaluation and Mitigation Strategies program, or “REMS” — to delay or prevent generic entry.

REMS allows (requires, in fact) brand manufacturers to restrict the distribution of certain drugs that present safety or abuse risks. As a result, drugs subject to a REMS restricted distribution program are difficult to obtain through market channels and are not otherwise readily available, even for would-be generic manufacturers that need samples in order to perform the tests required to receive FDA approval to market their products. The imprecision of these regulatory requirements creates an opportunity for branded drug manufacturers to take advantage of the program by inappropriately limiting access by generic manufacturers.

The FTC has not (yet) brought an enforcement action, but it has opened several investigations and filed an amicus brief in private-party litigation. Generic drug companies, for their part, have filed several antitrust claims against branded drug companies and raised concerns with the FDA.

The problem, however, is that even if these companies are using REMS to delay generics, such a practice makes for a terrible antitrust case. Not only does the existence of a regulatory scheme arguably place Trinko squarely in the way of a successful antitrust case, but the sorts of refusal-to-deal claims at issue here (as in Trinko) are rightly difficult to win because, as the DOJ’s Section 2 Report notes, “there likely are few circumstances where forced sharing would help consumers in the long run.”

But just because there isn’t a viable antitrust case doesn’t mean there isn’t still a competition problem. In this case, however, it’s a problem of regulatory failure: companies rationally take advantage of poorly written federal laws and regulations in order to tilt the market to their own advantage. That’s no less problematic for the market, but its solution is much more straightforward, if politically more difficult.

Thus it’s heartening to see that Senator Mike Lee (R-UT), along with three of his colleagues (Patrick Leahy (D-VT), Chuck Grassley (R-IA), and Amy Klobuchar (D-MN)), has proposed a novel and efficient way to correct these bureaucracy-generated distortions in the pharmaceutical market without resorting to the “blunt instrument” of antitrust law. As the bill notes:

While the antitrust laws may address actions by license holders who impede the prompt negotiation and development on commercially reasonable terms of a single, shared system of elements to assure safe use, a more tailored legal pathway would help ensure that license holders negotiate such agreements in good faith and in a timely manner, facilitating competition in the marketplace for drugs and biological products.

The legislative solution put forward by the Creating and Restoring Equal Access to Equivalent Samples (CREATES) Act of 2016 targets the right culprit: the poor regulatory drafting that permits possibly anticompetitive conduct to take place. Moreover, the bill refrains from creating a per se rule, instead implementing several features that should still enable brand manufacturers to legitimately restrict access to drug samples when appropriate.

In essence, Senator Lee’s bill introduces a third party (in this case, the Secretary of Health and Human Services) who is capable of determining whether an eligible generic manufacturer is able to comply with REMS restrictions — thus bypassing any bias on the part of the brand manufacturer. Where the Secretary determines that a generic firm meets the REMS requirements, the bill also creates a narrow cause of action for this class of plaintiffs, allowing suits against certain brand manufacturers who — despite the prohibition on using REMS to delay generics — nevertheless misuse the process to delay competitive entry.

Background on REMS

The REMS program was introduced as part of the Food and Drug Administration Amendments Act of 2007 (FDAAA). Following the withdrawal of Vioxx, an arthritis pain reliever, from the market because of a post-approval linkage of the drug to heart attacks, the FDA was under considerable fire, and there was a serious risk that fewer and fewer net beneficial drugs would be approved. The REMS program was introduced by Congress as a mechanism to ensure that society could reap the benefits from particularly risky drugs and biologics — rather than the FDA preventing them from entering the market at all. It accomplishes this by ensuring (among other things) that brands and generics adopt appropriate safety protocols for distribution and use of drugs — particularly when a drug has the potential to cause serious side effects, or has an unusually high abuse profile.

The FDA-determined REMS protocols can range from the simple (e.g., requiring a medication guide or a package insert about potential risks) to the more burdensome (including restrictions on a drug’s sale and distribution, or what the FDA calls “Elements to Assure Safe Use” (“ETASU”)). Most relevant here, the REMS process seems to allow brands considerable leeway to determine whether generic manufacturers are compliant or able to comply with ETASUs. Given this discretion, it is no surprise that brand manufacturers may be tempted to block competition by citing “safety concerns.”

Although the FDA specifically forbids the use of REMS to block lower-cost, generic alternatives from entering the market (of course), almost immediately following the law’s enactment, certain less-scrupulous branded pharmaceutical companies began using REMS for just that purpose (also, of course).

REMS abuse

To enter pharmaceutical markets that no longer have any underlying IP protections, manufacturers must submit to the FDA an Abbreviated New Drug Application (ANDA) for a generic of the brand drug, or an Abbreviated Biologic License Application (ABLA) for a biosimilar. The purpose is to prove to the FDA that the competing product is as safe and effective as the branded reference product. In order to perform the testing sufficient to prove efficacy and safety, generic and biosimilar drug manufacturers must acquire a sample (many samples, in fact) of the reference product they are trying to replicate.

For the narrow class of dangerous or highly abused drugs, generic manufacturers are forced to comply with any REMS restrictions placed upon the brand manufacturer — even when the terms require the brand manufacturer to tightly control the distribution of its product.

And therein lies the problem. Because the brand manufacturer controls access to its products, it can refuse to provide the needed samples, using REMS as an excuse. In certain cases, of course, a brand manufacturer may be justified in refusing to distribute samples of its product; some would-be generic manufacturers may well not meet the requisite standards for safety and security.

But in practice it turns out that most of the (known) examples of brands refusing to provide samples happen across the board — they preclude essentially all generic competition, not just the few firms that might have insufficient safeguards. It’s extremely difficult to justify such refusals on the basis of a generic manufacturer’s suitability when all would-be generic competitors are denied access, including well-established, high-quality manufacturers.

But, for a few brand manufacturers, at least, that seems to be how the REMS program is implemented. Thus, for example, Jon Haas, director of patient access at Turing Pharmaceuticals, referred to the practice of denying generics samples this way:

Most likely I would block that purchase… We spent a lot of money for this drug. We would like to do our best to avoid generic competition. It’s inevitable. They seem to figure out a way [to make generics], no matter what. But I’m certainly not going to make it easier for them. We’re spending millions and millions in research to find a better Daraprim, if you will.

As currently drafted, the REMS program gives branded manufacturers the ability to limit competition by stringing along negotiations for product samples for months, if not years. Although access to a few samples for testing is seemingly such a small, trivial thing, the ability to block this access allows a brand manufacturer to limit competition (at least from bioequivalent and generic drugs; obviously competition between competing branded drugs remains).

And even if a generic competitor manages to get ahold of samples, the law creates an additional wrinkle by imposing a requirement that brand and generic manufacturers enter into a single shared REMS plan for bioequivalent and generic drugs. But negotiating the particulars of the single, shared program can drag on for years. Consequently, even when a generic manufacturer has received the necessary samples, performed the requisite testing, and been approved by the FDA to sell a competing drug, it still may effectively be barred from entering the marketplace because of REMS.

The number of drugs covered by REMS is small: fewer than 100 in a universe of several thousand FDA-approved drugs. And the number of these alleged to be subject to abuse is much smaller still. Nonetheless, abuse of this regulation by certain brand manufacturers has likely limited competition and increased prices.

Antitrust is not the answer

Whether the complex, underlying regulatory scheme that allocates the relative rights of brands and generics — and that weighs safety against access — gets the balance right is an open question, to be sure. But given the regulatory framework we have and the perceived need for some sort of safety controls around access to samples and for shared REMS plans, the law should at least work to do what it intends, without creating an opportunity for harmful manipulation. Yet it appears that the ambiguity of the current law has allowed some brand manufacturers to exploit these safety protections in order to limit competition.

As noted above, some are quite keen to make this an antitrust issue. But, as also noted, antitrust is a poor fit for handling such abuses.

First, antitrust law has an uneasy relationship with other regulatory schemes. Not least because of Trinko, it is tough to make the case that brand manufacturers violate the antitrust laws when they rely upon legal obligations under a safety program that is essentially designed to limit generic entry on safety grounds. The issue is all the more properly removed from the realm of antitrust enforcement given that the problem is actually one of regulatory failure, not market failure.

Second, antitrust law doesn’t impose a duty to deal with rivals except in very limited circumstances. In Trinko, for example, the Court rejected the invitation to extend the duty to deal to situations where an existing, voluntary economic relationship hadn’t been terminated. Almost by definition, there is no such terminated relationship here, where the alleged refusal to deal is precisely what prevents the generic from entering the market in the first place. The logic behind Trinko (and a host of other cases that have limited competitors’ obligations to assist their rivals) was to restrict duty-to-deal cases to those rare circumstances where a refusal reliably leads to long-term competitive harm — not where it amounts to a perfectly legitimate effort to compete without giving rivals a leg-up.

But antitrust is such a powerful tool and such a flexible “catch-all” regulation that there are always efforts to thwart reasonable limits on its use. As several of us at TOTM have written about at length in the past, former FTC Commissioner Rosch and former FTC Chairman Leibowitz were vocal proponents of using Section 5 of the FTC Act to circumvent sensible judicial limits on making out and winning antitrust claims, arguing that those limits were meant only for private plaintiffs — not (implicitly infallible) government enforcers. Although no one at the FTC has yet (publicly) suggested bringing a REMS case as a standalone Section 5 case, such a case would be consistent with the sorts of theories that animated past standalone Section 5 cases.

Again, this approach serves as an end-run around the reasonable judicial constraints that evolved as a result of judges actually examining the facts of individual cases over time, and is a misguided way of dealing with what is, after all, fundamentally a regulatory design problem.

The CREATES Act

Senator Lee’s bill, on the other hand, aims to solve the problem with a more straightforward approach by improving the existing regulatory mechanism and by adding a limited judicial remedy to incentivize compliance under the amended regulatory scheme. In summary:

  • The bill creates a cause of action for a refusal to deal only where the plaintiff can prove, by a preponderance of the evidence, that certain well-defined conditions are met.
  • For samples, if a drug is not covered by a REMS, or if the generic manufacturer is specifically authorized, then the generic can sue if it doesn’t receive sufficient quantities of samples on commercially reasonable terms. This is not a per se offense subject to outsized antitrust damages. Instead, the remedy is a limited injunction ensuring the sale of samples on commercially reasonable terms, reasonable attorneys’ fees, and a monetary fine limited to revenue earned from sale of the drug during the refusal period.
  • The bill also gives a brand manufacturer an affirmative defense if it can prove by a preponderance of the evidence that, regardless of its own refusal to supply them, samples were nevertheless available elsewhere on commercially reasonable terms, or where the brand manufacturer is unable to supply the samples because it does not actually produce or market the drug.
  • In order to deal with the REMS process problems, the bill creates similar rights with similar limitations when the license holders and generics cannot come to an agreement about a shared REMS on commercially reasonable terms within 120 days of first contact by an eligible developer.
  • The bill also explicitly limits brand manufacturers’ liability for claims “arising out of the failure of an [eligible generic manufacturer] to follow adequate safeguards,” thus removing one of the (perfectly legitimate) objections to the bill pressed by brand manufacturers.

The primary remedy is limited, injunctive relief to end the delay. And brands are protected from frivolous litigation by an affirmative defense under which they need only show that the product is available for purchase on reasonable terms elsewhere. Damages are similarly limited and are awarded only if a court finds that the brand manufacturer lacked a legitimate business justification for its conduct (which, under the drug safety regime, means essentially a reasonable belief that its own REMS plan would be violated by dealing with the generic entrant). And monetary damages do not include punitive damages.

Finally, the proposed bill completely avoids the question of whether antitrust laws are applicable, leaving that possibility open to determination by courts — as is appropriate. Moreover, by establishing even more clearly the comprehensive regulatory regime governing potential generic entrants’ access to dangerous drugs, the bill would, given the holding in Trinko, probably make application of antitrust laws here considerably less likely.

Ultimately Senator Lee’s bill is a well-thought-out and targeted fix to an imperfect regulation that seems to be facilitating anticompetitive conduct by a few bad actors. It does so without trampling on the courts’ well-established antitrust jurisprudence, and without imposing excessive cost or risk on the majority of brand manufacturers that behave perfectly appropriately under the law.