Archives for anticompetitive market distortions

Today, the Senate Committee on Health, Education, Labor, and Pensions (HELP) enters the drug pricing debate with a hearing on “The Cost of Prescription Drugs: How the Drug Delivery System Affects What Patients Pay.”  By questioning the role of the drug delivery system in pricing, the hearing goes beyond the narrower focus of recent hearings that have explored how drug companies set prices.  Instead, today’s hearing will explore how pharmacy benefit managers, insurers, providers, and others influence the amounts that patients pay.

In 2016, net U.S. drug spending increased by 4.8% to $323 billion (after adjusting for rebates and off-invoice discounts).  This rate of growth was less than half the rates of 2014 and 2015, when net drug spending grew by 10% and 8.9%, respectively.  Yet despite this slowdown in spending growth, the public outcry over the cost of prescription drugs continues.

In today’s hearing, there will be testimony both on the various causes of drug spending increases and on various proposals that could reduce the cost of drugs.  Several of the proposals will focus on ways to increase competition in the pharmaceutical industry and, in turn, reduce drug prices.  I have previously explained several ways that the government could reduce prices through enhanced competition, including reducing the backlog of generic drugs awaiting FDA approval and expediting the approval and acceptance of biosimilars.  Other proposals today will likely call for regulatory reforms to enable innovative contractual arrangements that allow for outcome- or indication-based pricing and other novel reimbursement designs.

However, some proposals will undoubtedly return to the familiar call for more government negotiation of drug prices, especially for drugs covered under Medicare Part D.  As I’ve discussed in a previous post, in order for government negotiation to significantly lower drug prices, the government must be able to put pressure on drug makers to secure price concessions. This could be achieved if the government could set prices administratively, penalize manufacturers that don’t offer price reductions, or establish a formulary.  Setting prices or penalizing drug makers that don’t reduce prices would produce the same disastrous effects as price controls: drug shortages in certain markets, increased prices for non-Medicare patients, and reduced incentives for innovation. A government formulary for Medicare Part D coverage would provide leverage to obtain discounts from manufacturers, but it would mean that many patients could lose access to their preferred drugs.

As lawmakers seriously consider changes that would produce these negative consequences, industry would do well to voluntarily constrain prices.  Indeed, in the last year, many drug makers have pledged to limit price increases to keep drug spending under control.  Allergan was first, with its “social contract” introduced last September that promised to keep price increases below 10 percent. Since then, Novo Nordisk, AbbVie, and Takeda have also voluntarily committed to single-digit price increases.

So far, the evidence shows that drug makers are sticking to their promises. Allergan has raised the price of its U.S. branded products by an average of 6.7% in 2017, and no drug’s list price has increased by more than single digits.  In contrast, Pfizer, which has made no pricing commitment, has raised the prices of many of its drugs by 20%.

If more drug makers committed to meaningful voluntary pricing restraints, the industry could prevent the market-distorting consequences of government intervention while helping patients afford the drugs they need.  Moreover, avoiding intrusive government mandates and price controls would preserve the drug innovation that has brought life-saving and life-enhancing drugs to millions of Americans.

Nicolas Petit is Professor of Law at the University of Liege (Belgium) and Research Professor at the University of South Australia (UniSA).

This symposium offers a good opportunity to look again into the complex relation between concentration and innovation in antitrust policy. Whilst the details of the EC decision in Dow/DuPont remain unknown, the press release suggests that the issue of “incentives to innovate” was central to the review. Contrary to what had leaked in the antitrust press, the decision has apparently backed off from the introduction of a new “model”, and instead followed a more cautious approach. After a quick reminder of the conventional “appropriability v cannibalization” framework that drives merger analysis in innovation markets (1), I make two sets of hopefully innovative remarks on appropriability and IP rights (2) and on cannibalization in the ag-biotech sector (3).

Appropriability versus cannibalization

Antitrust economics 101 teaches that mergers affect innovation incentives in two polar ways. A merger may increase innovation incentives. This occurs when the increment in power over price or output achieved through merger enhances the appropriability of the social returns to R&D. The appropriability effect of mergers is often tied to Joseph Schumpeter, who observed that the use of “protecting devices” for past investments like patent protection or trade secrecy constituted a “normal elemen[t] of rational management”. The appropriability effect can in principle be observed both at the firm level (firm-specific incentives) and at the industry level (general incentives), because actual or potential competitors can also use the M&A market to appropriate the payoffs of R&D investments.

But a merger may also decrease innovation incentives. This happens when the increased industry position achieved through merger discourages the introduction of new products, processes or services, because an invention will cannibalize the merged entity’s profits to a greater extent than would be the case in a more competitive market structure. This idea is often tied to Kenneth Arrow, who famously observed that a “preinvention monopoly power acts as a strong disincentive to further innovation”.
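To make the cannibalization intuition concrete, here is a minimal formalization of Arrow’s replacement effect (a stylized sketch of my own; the notation is illustrative and not drawn from Arrow or from the Commission’s decision). Let π_m denote the incumbent’s pre-invention monopoly profit and π_n the profit available once the invention is introduced. The entrant weighs the full π_n, while the incumbent weighs only the increment:

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Stylized replacement effect (illustrative notation, not Arrow's own):
%   \pi_m : incumbent's pre-invention (monopoly) profit
%   \pi_n : profit available once the invention is introduced
% An outside entrant earns the full post-invention profit, while the
% incumbent earns only the increment over the rents the invention destroys.
\[
  \underbrace{\pi_n}_{\text{entrant's gain}}
  \;>\;
  \underbrace{\pi_n - \pi_m}_{\text{incumbent's gain}}
  \qquad \text{whenever } \pi_m > 0 .
\]
\end{document}
```

On this stylized view, the larger the pre-invention rents π_m, the weaker the incumbent’s incentive to introduce the cannibalizing invention.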

Schumpeter’s appropriability hypothesis and Arrow’s cannibalization theory continue to drive much of the discussion on concentration and innovation in antitrust economics. True, many efforts have been made to overcome, reconcile or bypass both views of the world. Recent studies by Carl Shapiro and Jon Baker are worth mentioning. But Schumpeter and Arrow remain sticky references in any discussion of the issue. Perhaps more than anything, the persistence of their ideas suggests that both touched on something fundamental when they made their seminal contributions, laying down two systems of belief on the workings of innovation-driven markets.

Beyond the theory, the appropriability v cannibalization framework provides from the outset an appealing lens for the examination of mergers in R&D-driven industries in general. From an operational perspective, the antitrust agency will attempt to understand whether the transaction increases appropriability – which leans in favour of clearance – or cannibalization – which leans in favour of remediation. At the same time, however, the downside of the appropriability v cannibalization framework (and of any framework more generally) may be to oversimplify our understanding of complex phenomena. This, in turn, prompts two important observations, one on each branch of the framework.

Appropriability and IP rights

Any antitrust agency committed to promoting competition and innovation should consider mergers in light of the degree of appropriability afforded by existing protecting devices (essentially contracts and entitlements). This is where Intellectual Property (“IP”) rights become relevant to the discussion. In an industry with strong IP rights, the merging parties (and their rivals) may be able to appropriate the social returns to R&D without further corporate concentration. Put differently, the stronger the IP rights, the lower the incremental contribution of a merger transaction to innovation, and the stronger the case for remediation.

This latter proposition, however, rests on a heavy assumption: that IP rights confer perfect appropriability. The point is, however, far from obvious. Most of us know that – and our antitrust agencies’ misgivings with other sectors confirm it – IP rights are probabilistic in nature. There is (i) no certainty that R&D investments will lead to commercially successful applications; (ii) no guarantee that IP rights will survive invalidity proceedings in court; (iii) little protection against competition from other product applications which do not practice the IP but provide substitute functionality; and (iv) no assurance that the environmental, toxicological and regulatory authorizations that (often) accompany IP rights will not be cancelled when legal requirements change. Arrow himself called for caution, noting that “Patent laws would have to be unimaginably complex and subtle to permit [such] appropriation on a large scale”. A thorough inquiry into the industry-specific strength of IP rights that goes beyond patent data and statistics thus constitutes a necessary step in merger review.

But it is not a sufficient one. The proposition that strong IP rights provide appropriability is essentially valid if the observed pre-merger market situation is one where several IP owners compete on differentiated products and as a result wield a degree of market power. In contrast, the proposition is essentially invalid if the observed pre-merger market situation leans more towards the competitive equilibrium and IP owners compete at prices closer to costs. In both variants, the agency should thus look carefully at the level and evolution of prices and costs, including R&D costs, in the pre-merger industry. Moreover, in the second variant, the agency ought to consider as a favourable appropriability factor any increase of the merging entity’s power over price, but also any improvement of its power over cost. By this, I have in mind efficiency benefits, which can arise as the result of economies of scale (in manufacturing but also in R&D), but also when the transaction combines complementary technological and marketing assets. In Dow/DuPont, no efficiency argument has apparently been made by the parties, so it is difficult to understand whether and how such issues played a role in the Commission’s assessment.

Cannibalization, technological change, and drastic innovation

Arrow’s cannibalization theory – namely that a pre-invention monopoly acts as a strong disincentive to further innovation – fails to capture that successful inventions create new technology frontiers, and with them entirely novel needs that even a monopolist has an incentive to serve. This can be understood with an example taken from the ag-biotech field. It is undisputed that progress in crop protection science has led to an expanding range of resistant insects, weeds, and pathogens. This, in turn, is one of the key drivers (if not the main driver) of ag-tech research. In a 2017 paper published in Pest Management Science, Sparks and Lorsbach observe that:

resistance to agrochemicals is an ongoing driver for the development of new chemical control options, along with an increased emphasis on resistance management and how these new tools can fit into resistance management programs. Because resistance is such a key driver for the development of new agrochemicals, a highly prized attribute for a new agrochemical is a new MoA [mode of action] that is ideally a new molecular target either in an existing target site (e.g., an unexploited binding site in the voltage-gated sodium channel), or new/under-utilized target site such as calcium channels.

This, and other factors, leads them to conclude that:

even with fewer companies overall involved in agrochemical discovery, innovation continues, as demonstrated by the continued introduction of new classes of agrochemicals with new MoAs.

Sparks, Hahn, and Garizi make a similar point. They stress in particular that the discovery of natural products (NPs), which are the “output of nature’s chemical laboratory”, is today a main driver of crop protection research. According to them:

NPs provide very significant value in identifying new MoAs, with 60% of all agrochemical MoAs being, or could have been, defined by a NP. This information again points to the importance of NPs in agrochemical discovery, since new MoAs remain a top priority for new agrochemicals.

More generally, the point is not that Arrow’s cannibalization theory is wrong. Arrow’s work convincingly explains monopolists’ low incentives to invest in substitute invention. Instead, the point is that Arrow’s cannibalization theory is narrower than often assumed in the antitrust policy literature. Admittedly, Arrow’s cannibalization theory is relevant in industries primarily driven by a process of cumulative innovation. But it is much less helpful for understanding the incentives of a monopolist in industries subject to technological change. As a result, the first question that should guide an antitrust agency’s investigation is empirical in nature: is the industry under consideration one driven by cumulative innovation, or one where technological disruption, shocks, and serendipity incentivize drastic innovation?

Note that exogenous factors beyond technological frontiers also promote drastic innovation. This point ought not to be overlooked. A sizeable amount of the specialist scientific literature stresses the powerful innovation incentives created by changing dietary habits, new diseases (e.g. the Zika virus), global population growth, and environmental challenges like climate change and weather extremes. In 2015, Jeschke noted:

In spite of the significant consolidation of the agrochemical companies, modern agricultural chemistry is vital and will have the opportunity to shape the future of agriculture by continuing to deliver further innovative integrated solutions. 

Words of wise caution for antitrust agencies tasked with the complex mission of reviewing mergers in the ag-biotech industry?

In a weekend interview with the Washington Post, Donald Trump vowed to force drug companies to negotiate directly with the government on prices in Medicare and Medicaid.  It’s unclear what, if anything, Trump intends for Medicaid; drug makers are already required to sell drugs to Medicaid at the lowest price they negotiate with any other buyer.  For Medicare, Trump didn’t offer any more details about the intended negotiations, but he’s referring to his campaign proposals to allow the Department of Health and Human Services (HHS) to negotiate directly with manufacturers over the prices of drugs covered under Medicare Part D.

Such proposals have been around for quite a while.  As soon as the Medicare Modernization Act (MMA) of 2003 was enacted, creating the Medicare Part D prescription drug benefit, many lawmakers began advocating for government negotiation of drug prices. Both Hillary Clinton and Bernie Sanders favored this approach during their campaigns, and the Obama Administration’s proposed budgets for fiscal years 2016 and 2017 included a provision that would have allowed HHS to negotiate prices for a subset of drugs: biologics and certain high-cost prescription drugs.

However, federal law would have to change if there is to be any government negotiation of drug prices under Medicare Part D. Congress explicitly included a “noninterference” clause in the MMA that stipulates that HHS “may not interfere with the negotiations between drug manufacturers and pharmacies and PDP sponsors, and may not require a particular formulary or institute a price structure for the reimbursement of covered part D drugs.”

Most people don’t understand what it means for the government to “negotiate” drug prices, or the implications of the various options.  Some proposals would simply eliminate the MMA’s noninterference clause and allow HHS to negotiate prices for a broad set of drugs on behalf of Medicare beneficiaries.  However, the Congressional Budget Office has already concluded that such a plan would have “a negligible effect on federal spending” because it is unlikely that HHS could achieve deeper discounts than the current private Part D plans (there are 746 such plans in 2017).  The private plans are currently able to negotiate significant discounts from drug manufacturers by offering preferred formulary status for their drugs and channeling enrollees to formulary drugs with lower cost-sharing incentives. In most drug classes, manufacturers compete intensely for formulary status and offer considerable discounts to be included.

The private Part D plans are required to cover only two drugs in each of several drug classes, giving the plans significant bargaining power over manufacturers through the threat of excluding their drugs.  However, in six protected classes (immunosuppressant, anti-cancer, anti-retroviral, antidepressant, antipsychotic and anticonvulsant drugs), private Part D plans must include “all or substantially all” drugs, thereby eliminating their bargaining power and ability to achieve significant discounts.  Although the purpose of the limitation is to prevent plans from cherry-picking customers by denying coverage of certain high-cost drugs, giving the private Part D plans more ability to exclude drugs in the protected classes should increase competition among manufacturers for formulary status and, in turn, lower prices.  And it’s important to note that these price reductions would not involve any government negotiation or intervention in Medicare Part D.  However, as discussed below, excluding more drugs in the protected classes would reduce the value of the Part D plans to many patients by limiting access to preferred drugs.

For government negotiation to make any real difference on Medicare drug prices, HHS must have the ability to not only negotiate prices, but also to put some pressure on drug makers to secure price concessions.  This could be achieved by allowing HHS to also establish a formulary, set prices administratively, or take other regulatory actions against manufacturers that don’t offer price reductions.  Setting prices administratively or penalizing manufacturers that don’t offer satisfactory reductions would be tantamount to a price control.  I’ve previously explained that price controls—whether direct or indirect—are a bad idea for prescription drugs for several reasons. Evidence shows that price controls lead to higher initial launch prices for drugs, increased drug prices for consumers with private insurance coverage,  drug shortages in certain markets, and reduced incentives for innovation.

Giving HHS the authority to establish a formulary for Medicare Part D coverage would provide leverage to obtain discounts from manufacturers, but it would produce other negative consequences.  Currently, private Medicare Part D plans cover an average of 85% of the 200 most popular drugs, with some plans covering as much as 93%.  In contrast, the drug benefit offered by the Department of Veterans Affairs (VA), one government program that is able to set its own formulary to achieve leverage over drug companies, covers only 59% of the 200 most popular drugs.  The VA’s ability to exclude drugs from the formulary has generated significant price reductions. Indeed, estimates suggest that if the Medicare Part D formulary were restricted to the VA offerings and obtained similar price reductions, it would save Medicare Part D $510 per beneficiary.  However, the loss of access to so many popular drugs would reduce the value of the Part D plans by $405 per enrollee, leaving a net gain of only about $105 per enrollee.

History has shown that consumers don’t like having their access to drugs reduced.  In 2014, Medicare proposed to take antidepressant, antipsychotic, and immunosuppressant drugs off the protected list, thereby allowing the private Part D plans to reduce offerings of these drugs on the formulary and, in turn, reduce prices.  However, patients and their advocates were outraged at the possibility of losing access to their preferred drugs, and the proposal was quickly withdrawn.

Thus, allowing the government to negotiate prices under Medicare Part D could carry important negative consequences.  Policy-makers must fully understand what it means for government to negotiate directly with drug makers, and what the potential consequences are for price reductions, access to popular drugs, drug innovation, and drug prices for other consumers.

On November 9, pharmaceutical stocks soared as Donald Trump’s election victory eased concerns about government intervention in drug pricing. Shares of Pfizer rose 8.5%, Allergan PLC was up 8%, and biotech Celgene jumped 10.4%. Drug distributors also gained, with McKesson up 6.4% and Express Scripts climbing 3.4%. Throughout the campaign, Clinton had vowed to take on the pharmaceutical industry and proposed various reforms to rein in drug prices, from levying fines on drug companies that imposed unjustified price increases to capping patients’ annual expenditures on drugs. Pharmaceutical stocks had generally underperformed this year as the market, like much of America, awaited a Clinton victory.

In contrast, Trump generally had less to say on the subject of drug pricing, hence the market’s favorable response to his unexpected victory. Yet, as the end of the first post-election month draws near, we are still uncertain whether Trump is friend or foe to the pharmaceutical industry. Trump’s only proposal that directly impacts the industry would allow the government to negotiate the prices of Medicare Part D drugs with drug makers. Although this proposal would likely have little impact on prices because existing Part D plans already negotiate prices with drug makers, there is a risk that this “negotiation” could ultimately lead to price controls imposed on the industry. And as I have previously discussed, price controls—whether direct or indirect—are a bad idea for prescription drugs: they lead to higher initial launch prices for drugs, increased drug prices for consumers with private insurance coverage, drug shortages in certain markets, and reduced incentives for innovation.

Several of Trump’s other health proposals have mixed implications for the industry. For example, a repeal or overhaul of the Affordable Care Act could eliminate the current tax on drug makers and loosen requirements for Medicaid drug rebates and Medicare Part D discounts. On the other hand, if repealing the ACA reduces the number of people insured, spending on pharmaceuticals would fall. Similarly, if Trump renegotiates international trade deals, pharmaceutical firms could benefit from stronger markets or longer patent exclusivity rights, or they could suffer if foreign countries abandon trade agreements altogether or retaliate with disadvantageous terms.

Yet, with drug spending up 8.5 percent last year and recent pricing scandals sparked by price increases of 500-plus percent on individual drugs (e.g., Martin Shkreli, Valeant Pharmaceuticals, Mylan), the current debate over drug pricing is unlikely to fade. Even a Republican-led Congress and White House are likely to heed the public outcry and do something about drug prices.

Drug makers would be wise to stave off any government-imposed price restrictions by voluntarily limiting price increases on important drugs. Major pharmaceutical company Allergan has recently done just that by issuing a “social contract with patients” that made several drug pricing commitments to its customers. Among other assurances, Allergan has promised to limit price increases to single-digit percentages and to no longer engage in the common industry tactic of dramatically increasing prices for branded drugs nearing patent expiry. Last year, across the pharmaceutical industry, the prices of the most commonly used brand drugs increased by over 16 percent and, in the last two years before patent expiry, drug makers increased the list prices of drugs by an average of 35 percent. Thus, Allergan’s commitment will produce significant savings over the life of a product, creating hundreds of millions of dollars in savings for health plans, patients, and the health care system.
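To see how a single-digit cap compounds into meaningful savings, consider a rough back-of-the-envelope sketch (my own illustration: the 16 percent and 9 percent annual rates echo the figures above, while the $100 starting price and five-year horizon are assumptions chosen purely for illustration):

```python
# Back-of-the-envelope sketch: cumulative list price after five years
# of annual increases, comparing the ~16% industry average cited above
# with a 9% "single-digit" cap. The $100 starting price and 5-year
# horizon are illustrative assumptions, not figures from the post.
START_PRICE = 100.0
YEARS = 5

for label, annual_rate in [("industry average (~16%)", 0.16),
                           ("single-digit cap (9%)", 0.09)]:
    final_price = START_PRICE * (1 + annual_rate) ** YEARS
    print(f"{label}: ${final_price:.2f} after {YEARS} years")

# Prints roughly:
#   industry average (~16%): $210.03 after 5 years
#   single-digit cap (9%): $153.86 after 5 years
```

Under these assumptions, the capped product ends up roughly a quarter cheaper after five years, which is where the cumulative savings to health plans and patients come from.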

If Allergan can make this commitment for its entire drug inventory—more than 80 drugs—why haven’t other companies done the same? Similar commitments by other drug makers might be enough to prevent lawmakers from turning to market-distorting reforms, such as price controls, that could end up doing more harm than good for consumers, the pharmaceutical industry, and long-term innovation.

As Truth on the Market readers prepare to enjoy their Thanksgiving dinners, let me offer some (hopefully palatable) “food for thought” on a competition policy for the new Trump Administration.  By competition policy, I mean not just lawsuits directed against private anticompetitive conduct, but more broadly efforts aimed at curbing government regulatory barriers that undermine the competitive process.

Public regulatory barriers are a huge problem.  Their costs have been highlighted by prestigious international research bodies such as the OECD and World Bank, and considered by the International Competition Network’s Advocacy Working Group.  Government-imposed restrictions on competition benefit powerful incumbents and stymie entry by innovative new competitors.  (One manifestation of this that is particularly harmful for American workers and denies job opportunities to millions of lower-income Americans is occupational licensing, whose increasing burdens are delineated in a substantial body of research – see, for example, a 2015 Obama Administration White House Report and a 2016 Heritage Foundation Commentary that explore the topic.)  Federal Trade Commission (FTC) and Justice Department (DOJ) antitrust officials should consider emphasizing “state action” lawsuits aimed at displacing entry barriers and other unwarranted competitive burdens imposed by self-interested state regulatory boards.  When the legal prerequisites for such enforcement actions are not met, the FTC and the DOJ should ramp up their “competition advocacy” efforts, with the aim of convincing state regulators to avoid adopting new restraints on competition – and, where feasible, eliminating or curbing existing restraints.

The FTC and DOJ also should be authorized by the White House to pursue advocacy initiatives whose goal is to dismantle or lessen the burden of excessive federal regulations (such advocacy played a role in furthering federal regulatory reform during the Ford and Carter Administrations).  To bolster those initiatives, the Trump Administration should consider establishing a high-level federal task force on procompetitive regulatory reform, in the spirit of previous reform initiatives.  The task force would report to the president and include senior-level representatives from all federal agencies with regulatory responsibilities.  The task force could examine all major regulatory and statutory schemes overseen by Executive Branch and independent agencies, and develop a list of specific reforms designed to reduce federal regulatory impediments to robust competition.  Those reforms could be implemented through specific regulatory changes or legislative proposals, as the case might require.  The task force would have ample material to work with – for example, anticompetitive cartel-like output restrictions, such as those allowed under federal agricultural orders, are especially pernicious.  In addition to specific cartel-like programs, scores of regulatory regimes administered by individual federal agencies impose huge costs and merit particular attention, as documented in the Heritage Foundation’s annual “Red Tape Rising” reports on the growing burden of federal regulation (see, for example, the 2016 edition).

With respect to traditional antitrust enforcement, the Trump Administration should emphasize sound, empirically-based economic analysis in merger and non-merger enforcement.  Enforcers should also adopt a “decision-theoretic” approach to enforcement, to the greatest extent feasible.  Specifically, in developing their enforcement priorities, in considering case selection criteria, and in assessing possible new (or amended) antitrust guidelines, DOJ and FTC antitrust enforcers should recall that antitrust is, like all administrative systems, inevitably subject to error costs.  Accordingly, Trump Administration enforcers should be mindful of the outstanding insights provided by Judge (and Professor) Frank Easterbrook on the harm from false positives in enforcement (which are more easily corrected by market forces than false negatives), and by Justice (and Professor) Stephen Breyer on the value of bright-line rules and safe harbors, supported by sound economic analysis.  As to specifics, the DOJ and FTC should issue clear statements of policy on the great respect that should be accorded the exercise of intellectual property rights, to correct Obama antitrust enforcers’ poor record on intellectual property protection (see, for example, here).  The DOJ and the FTC should also accord greater respect to the efficiencies associated with unilateral conduct by firms possessing market power, and should consider reissuing an updated and revised version of the 2008 DOJ Report on Single Firm Conduct.

With regard to international competition policy, procedural issues should be accorded high priority.  Full and fair consideration by enforcers of all relevant evidence (especially economic evidence) and the views of all concerned parties ensures that sound analysis is brought to bear in enforcement proceedings and, thus, that errors in antitrust enforcement are minimized.  Regrettably, a lack of due process in foreign antitrust enforcement has become a matter of growing concern to the United States, as foreign competition agencies proliferate and increasingly bring actions against American companies.  Thus, the Trump Administration should make due process problems in antitrust a major enforcement priority.  White House-level support (ensuring the backing of other key Executive Branch departments engaged in foreign economic policy) for this priority may be essential, in order to strengthen the U.S. Government’s hand in negotiations and consultations with foreign governments on process-related concerns.

Finally, other international competition policy matters also merit close scrutiny by the new Administration.  These include such issues as the inappropriate imposition of extraterritorial remedies on American companies by foreign competition agencies; the harmful impact of anticompetitive foreign regulations on American businesses; and inappropriate attacks on the legitimate exercise of intellectual property by American firms (in particular, American patent holders).  As in the case of process-related concerns, White House attention and broad U.S. Government involvement in dealing with these problems may be essential.

That’s all for now, folks.  May you all enjoy your turkey and have a blessed Thanksgiving with friends and family.

Next week the FCC is slated to vote on the second iteration of Chairman Wheeler’s proposed broadband privacy rules. Of course, as has become all too common, none of us outside the Commission has actually seen the proposal. But earlier this month Chairman Wheeler released a Fact Sheet that suggests some of the ways it would update the rules he initially proposed.

According to the Fact Sheet, the new proposed rules are

designed to evolve with changing technologies and encourage innovation, and are in harmony with other key privacy frameworks and principles — including those outlined by the Federal Trade Commission and the Administration’s Consumer Privacy Bill of Rights.

Unfortunately, the Chairman’s proposal appears to fall short of the mark on both counts.

As I discuss in detail in a letter filed with the Commission yesterday, despite the Chairman’s rhetoric, the rules described in the Fact Sheet fail to align with the FTC’s approach to privacy regulation embodied in its 2012 Privacy Report in at least two key ways:

  • First, the Fact Sheet significantly expands the scope of information that would be considered “sensitive” beyond that contemplated by the FTC. That, in turn, would impose onerous and unnecessary consumer consent obligations on commonplace uses of data, undermining consumer welfare, depriving consumers of information and access to new products and services, and restricting competition.
  • Second, unlike the FTC’s framework, the proposal described by the Fact Sheet ignores the crucial role of “context” in determining the appropriate level of consumer choice before affected companies may use consumer data. Instead, the Fact Sheet takes a rigid, acontextual approach that would stifle innovation and harm consumers.

The Chairman’s proposal moves far beyond the FTC’s definition of “sensitive” information requiring “opt-in” consent

The FTC’s privacy guidance is, in its design at least, appropriately flexible, aimed at balancing the immense benefits of information flows with sensible consumer protections. Thus it eschews an “inflexible list of specific practices” that would automatically trigger onerous consent obligations and “risk[] undermining companies’ incentives to innovate and develop new products and services….”

Under the FTC’s regime, depending on the context in which it is used (on which see the next section, below), the sensitivity of data delineates the difference between data uses that require “express affirmative” (opt-in) consent and those that do not (requiring only “other protections” short of opt-in consent — e.g., opt-out).

Because the distinction is so important — because opt-in consent is much more likely to staunch data flows — the FTC endeavors to provide guidance as to what data should be considered sensitive, and to cabin the scope of activities requiring opt-in consent. Thus, the FTC explains that “information about children, financial and health information, Social Security numbers, and precise geolocation data [should be treated as] sensitive.” But beyond those instances, the FTC doesn’t consider any other type of data as inherently sensitive.

By contrast, and without explanation, Chairman Wheeler’s Fact Sheet significantly expands what constitutes “sensitive” information requiring “opt-in” consent by adding “web browsing history,” “app usage history,” and “the content of communications” to the list of categories of data deemed sensitive in all cases.

By treating some of the most common and important categories of data as always “sensitive,” and by making the sensitivity of data the sole determinant for opt-in consent, the Chairman’s proposal would make it almost impossible for ISPs to make routine (to say nothing of innovative), appropriate, and productive uses of data comparable to those undertaken by virtually every major Internet company.  This goes well beyond anything contemplated by the FTC — with no evidence of any corresponding benefit to consumers and with obvious harm to competition, innovation, and the overall economy online.

And because the Chairman’s proposal would impose these inappropriate and costly restrictions only on ISPs, it would create a barrier to competition by ISPs in other platform markets, without offering a defensible consumer protection rationale to justify either the disparate treatment or the restriction on competition.

As Fred Cate and Michael Staten have explained,

“Opt-in” offers no greater privacy protection than allowing consumers to “opt-out”…, yet it imposes significantly higher costs on consumers, businesses, and the economy.

Not surprisingly, these costs fall disproportionately on the relatively poor and the less technology-literate. In the former case, opt-in requirements may deter companies from offering services at all, even to people who would make a very different trade-off between privacy and monetary price. In the latter case, because an initial decision to opt-in must be taken in relative ignorance, users without much experience to guide their decisions will face effectively higher decision-making costs than more knowledgeable users.

The Chairman’s proposal ignores the central role of context in the FTC’s privacy framework

In part for these reasons, central to the FTC’s more flexible framework is the establishment of a sort of “safe harbor” for data uses where the benefits clearly exceed the costs and consumer consent may be inferred:

Companies do not need to provide choice before collecting and using consumer data for practices that are consistent with the context of the transaction or the company’s relationship with the consumer….

Thus for many straightforward uses of data, the “context of the transaction,” not the asserted “sensitivity” of the underlying data, is the threshold question in evaluating the need for consumer choice in the FTC’s framework.

Chairman Wheeler’s Fact Sheet, by contrast, ignores this central role of context in its analysis. Instead, it focuses solely on data sensitivity, claiming that doing so is “in line with customer expectations.”

But this is inconsistent with the FTC’s approach.

In fact, the FTC’s framework explicitly rejects a pure “consumer expectations” standard:

Rather than relying solely upon the inherently subjective test of consumer expectations, the… standard focuses on more objective factors related to the consumer’s relationship with a business.

And while everyone agrees that sensitivity is a key part of pegging privacy regulation to actual consumer and corporate relationships, the FTC also recognizes that the importance of the sensitivity of the underlying data varies with the context in which it is used. Or, in the words of the White House’s 2012 Consumer Data Privacy in a Networked World Report (introducing its Consumer Privacy Bill of Rights), “[c]ontext should shape the balance and relative emphasis of particular principles” guiding the regulation of privacy.

By contrast, Chairman Wheeler’s “sensitivity-determines-consumer-expectations” framing is a transparent attempt to claim fealty to the FTC’s (and the Administration’s) privacy standards while actually implementing a privacy regime that is flatly inconsistent with them.

The FTC’s approach isn’t perfect, but that’s no excuse to double down on its failings

The FTC’s privacy guidance, and even more so its privacy enforcement practices under Section 5, are far from perfect. The FTC should be commended for its acknowledgement that consumers’ privacy preferences and companies’ uses of data will change over time, and that there are trade-offs inherent in imposing any constraints on the flow of information. But even the FTC fails to actually assess the magnitude of the costs and benefits of, and the deep complexities involved in, the trade-off, and puts an unjustified thumb on the scale in favor of limiting data use.  

But that’s no excuse for Chairman Wheeler to ignore what the FTC gets right, and to double down on its failings. Based on the Fact Sheet (and the initial NPRM), it’s a virtual certainty that the Chairman’s proposal doesn’t heed the FTC’s refreshing call for humility and flexibility regarding the application of privacy rules to ISPs (and other Internet platforms):

These are complex and rapidly evolving areas, and more work should be done to learn about the practices of all large platform providers, their technical capabilities with respect to consumer data, and their current and expected uses of such data.

The rhetoric of the Chairman’s Fact Sheet is correct: the FCC should in fact conform its approach to privacy to the framework established by the FTC. Unfortunately, the reality of the Fact Sheet simply doesn’t comport with its rhetoric.

As the FCC’s vote on the Chairman’s proposal rapidly nears, and in light of its significant defects, we can only hope that the rest of the Commission refrains from reflexively adopting the proposed regime, and works to ensure that these problematic deviations from the FTC’s framework are addressed before moving forward.

Today ICLE released a white paper entitled, A critical assessment of the latest charge of Google’s anticompetitive bias from Yelp and Tim Wu.

The paper is a comprehensive response to a study by Michael Luca, Timothy Wu, Sebastian Couvidat, Daniel Frank, & William Seltzer, entitled, Is Google degrading search? Consumer harm from Universal Search.

The Wu, et al. paper will be one of the main topics of discussion at today’s Capitol Forum and George Washington Institute of Public Policy event on Dominant Platforms Under the Microscope: Policy Approaches in the US and EU, at which I will be speaking — along with a host of luminaries including, inter alia, Josh Wright, Jonathan Kanter, Allen Grunes, Catherine Tucker, and Michael Luca — one of the authors of the Universal Search study.

Follow the link above to register — the event starts at noon today at the National Press Club.

Meanwhile, here’s a brief description of our paper:

Late last year, Tim Wu of Columbia Law School (and now the White House Office of Management and Budget), Michael Luca of Harvard Business School (and a consultant for Yelp), and a group of Yelp data scientists released a study claiming that Google has been purposefully degrading search results from its more-specialized competitors in the area of local search. The authors’ claim is that Google is leveraging its dominant position in general search to thwart competition from specialized search engines by favoring its own, less-popular, less-relevant results over those of its competitors:

To improve the popularity of its specialized search features, Google has used the power of its dominant general search engine. The primary means for doing so is what is called the “universal search” or the “OneBox.”

This is not a new claim, and researchers have been attempting (and failing) to prove Google’s “bias” for some time. Likewise, these critics have drawn consistent policy conclusions from their claims, asserting that antitrust violations lie at the heart of the perceived bias. But the studies are systematically marred by questionable methodology and bad economics.

This latest study by Tim Wu, along with a cadre of researchers employed by Yelp (one of Google’s competitors and one of its chief antitrust provocateurs), fares no better, employing slightly different but equally questionable methodology, bad economics, and a smattering of new, but weak, social science. (For a thorough criticism of the inherent weaknesses of Wu et al.’s basic social science methodology, see Miguel de la Mano, Stephen Lewis, and Andrew Leyden, Focus on the Evidence: A Brief Rebuttal of Wu, Luca, et al (2016), available here).

The basic thesis of the study is that Google purposefully degrades its local searches (e.g., for restaurants, hotels, services, etc.) to the detriment of its specialized search competitors, local businesses, consumers, and even Google’s bottom line — and that this is an actionable antitrust violation.

But in fact the study shows nothing of the kind. Instead, the study is marred by methodological problems that, in the first instance, make it impossible to draw any reliable conclusions. Nor does the study show that Google’s conduct creates any antitrust-relevant problems. Rather, the construction of the study and the analysis of its results reflect a superficial and inherently biased conception of consumer welfare that completely undermines the study’s purported legal and economic conclusions.

Read the whole thing here.

Brand drug manufacturers are no strangers to antitrust accusations when it comes to their complicated relationship with generic competitors — most obviously with respect to reverse payment settlements. But the massive and massively complex regulatory scheme under which drugs are regulated has provided other opportunities for regulatory legerdemain with potentially anticompetitive effect, as well.

In particular, some FTC Commissioners have raised concerns that brand drug companies have been taking advantage of an FDA drug safety program — the Risk Evaluation and Mitigation Strategies program, or “REMS” — to delay or prevent generic entry.

REMS allows (requires, in fact) brand manufacturers to restrict the distribution of certain drugs that present safety or abuse risks, creating an opportunity for branded drug manufacturers to take advantage of imprecise regulatory requirements by inappropriately limiting access by generic manufacturers. Drugs subject to a REMS restricted distribution program are thus difficult to obtain through market channels and not otherwise readily available, even for would-be generic manufacturers that need samples in order to perform the tests required to receive FDA approval to market their products.

The FTC has not (yet) brought an enforcement action, but it has opened several investigations, and filed an amicus brief in a private-party litigation. Generic drug companies have filed several antitrust claims against branded drug companies and raised concerns with the FDA.

The problem, however, is that even if these companies are using REMS to delay generics, such a practice makes for a terrible antitrust case. Not only does the existence of a regulatory scheme arguably set Trinko squarely in the way of a successful antitrust case, but the sort of refusal to deal claims at issue here (as in Trinko) are rightly difficult to win because, as the DOJ’s Section 2 Report notes, “there likely are few circumstances where forced sharing would help consumers in the long run.”

But just because there isn’t a viable antitrust case doesn’t mean there isn’t still a competition problem. In this case, though, it’s a problem of regulatory failure. Companies rationally take advantage of poorly written federal laws and regulations in order to tilt the market to their own advantage. It’s no less problematic for the market, but its solution is much more straightforward, if politically more difficult.

Thus it’s heartening to see that Senator Mike Lee (R-UT), along with three of his colleagues (Patrick Leahy (D-VT), Chuck Grassley (R-IA), and Amy Klobuchar (D-MN)), has proposed a novel but efficient way to correct these bureaucracy-generated distortions in the pharmaceutical market without resorting to the “blunt instrument” of antitrust law. As the bill notes:

While the antitrust laws may address actions by license holders who impede the prompt negotiation and development on commercially reasonable terms of a single, shared system of elements to assure safe use, a more tailored legal pathway would help ensure that license holders negotiate such agreements in good faith and in a timely manner, facilitating competition in the marketplace for drugs and biological products.

The legislative solution put forward by the Creating and Restoring Equal Access to Equivalent Samples (CREATES) Act of 2016 targets the right culprit: the poor regulatory drafting that permits possibly anticompetitive conduct to take place. Moreover, the bill refrains from creating a per se rule, instead implementing several features that should still enable brand manufacturers to legitimately restrict access to drug samples when appropriate.

In essence, Senator Lee’s bill introduces a third party (in this case, the Secretary of Health and Human Services) who is capable of determining whether an eligible generic manufacturer is able to comply with REMS restrictions — thus bypassing any bias on the part of the brand manufacturer. Where the Secretary determines that a generic firm meets the REMS requirements, the bill also creates a narrow cause of action for this narrow class of plaintiffs, allowing suits against certain brand manufacturers who — despite the prohibition on using REMS to delay generics — nevertheless misuse the process to delay competitive entry.

Background on REMS

The REMS program was introduced as part of the Food and Drug Administration Amendments Act of 2007 (FDAAA). Following the withdrawal of Vioxx, an arthritis pain reliever, from the market because of a post-approval linkage of the drug to heart attacks, the FDA was under considerable fire, and there was a serious risk that fewer and fewer net beneficial drugs would be approved. The REMS program was introduced by Congress as a mechanism to ensure that society could reap the benefits from particularly risky drugs and biologics — rather than the FDA preventing them from entering the market at all. It accomplishes this by ensuring (among other things) that brands and generics adopt appropriate safety protocols for distribution and use of drugs — particularly when a drug has the potential to cause serious side effects, or has an unusually high abuse profile.

The FDA-determined REMS protocols can range from the simple (e.g., requiring a medication guide or a package insert about potential risks) to the more burdensome (including restrictions on a drug’s sale and distribution, or what the FDA calls “Elements to Assure Safe Use” (“ETASU”)). Most relevant here, the REMS process seems to allow brands considerable leeway to determine whether generic manufacturers are compliant or able to comply with ETASUs. Given this discretion, it is no surprise that brand manufacturers may be tempted to block competition by citing “safety concerns.”

Although the FDA specifically forbids the use of REMS to block lower-cost, generic alternatives from entering the market (of course), almost immediately following the law’s enactment, certain less-scrupulous branded pharmaceutical companies began using REMS for just that purpose (also, of course).

REMS abuse

To enter into pharmaceutical markets that no longer have any underlying IP protections, manufacturers must submit to the FDA an Abbreviated New Drug Application (ANDA) for a generic, or an Abbreviated Biologic License Application (ABLA) for a biosimilar, of the brand drug. The purpose is to prove to the FDA that the competing product is as safe and effective as the branded reference product. In order to perform the testing sufficient to prove efficacy and safety, generic and biosimilar drug manufacturers must acquire a sample (many samples, in fact) of the reference product they are trying to replicate.

For the narrow class of dangerous or highly abused drugs, generic manufacturers are forced to comply with any REMS restrictions placed upon the brand manufacturer — even when the terms require the brand manufacturer to tightly control the distribution of its product.

And therein lies the problem. Because the brand manufacturer controls access to its products, it can refuse to provide the needed samples, using REMS as an excuse. In some cases, of course, a brand manufacturer may be justified in refusing to distribute samples of its product; some would-be generic manufacturers certainly may not meet the requisite standards for safety and security.

But in practice it turns out that most of the (known) examples of brands refusing to provide samples happen across the board — they preclude essentially all generic competition, not just the few firms that might have insufficient safeguards. It’s extremely difficult to justify such refusals on the basis of a generic manufacturer’s suitability when all would-be generic competitors are denied access, including well-established, high-quality manufacturers.

But, for a few brand manufacturers, at least, that seems to be how the REMS program is implemented. Thus, for example, Jon Haas, director of patient access at Turing Pharmaceuticals, referred to the practice of denying generics samples this way:

Most likely I would block that purchase… We spent a lot of money for this drug. We would like to do our best to avoid generic competition. It’s inevitable. They seem to figure out a way [to make generics], no matter what. But I’m certainly not going to make it easier for them. We’re spending millions and millions in research to find a better Daraprim, if you will.

As currently drafted, the REMS program gives branded manufacturers the ability to limit competition by stringing along negotiations for product samples for months, if not years. Although access to a few samples for testing is seemingly such a small, trivial thing, the ability to block this access allows a brand manufacturer to limit competition (at least from bioequivalent and generic drugs; obviously competition between competing branded drugs remains).

And even if a generic competitor manages to get ahold of samples, the law creates an additional wrinkle by imposing a requirement that brand and generic manufacturers enter into a single shared REMS plan for bioequivalent and generic drugs. But negotiating the particulars of the single, shared program can drag on for years. Consequently, even when a generic manufacturer has received the necessary samples, performed the requisite testing, and been approved by the FDA to sell a competing drug, it still may effectively be barred from entering the marketplace because of REMS.

The number of drugs covered by REMS is small: fewer than 100 in a universe of several thousand FDA-approved drugs. And the number of these alleged to be subject to abuse is much smaller still. Nonetheless, abuse of this regulation by certain brand manufacturers has likely limited competition and increased prices.

Antitrust is not the answer

Whether the complex, underlying regulatory scheme that allocates the relative rights of brands and generics — and that balances safety against access — gets the balance correct or not is an open question, to be sure. But given the regulatory framework we have and the perceived need for some sort of safety controls around access to samples and for shared REMS plans, the law should at least work to do what it intends, without creating an opportunity for harmful manipulation. Yet it appears that the ambiguity of the current law has allowed some brand manufacturers to exploit these safety protections to limit competition.

As noted above, some are quite keen to make this an antitrust issue. But, as also noted, antitrust is a poor fit for handling such abuses.

First, antitrust law has an uneasy relationship with other regulatory schemes. Not least because of Trinko, it is a tough case to make that brand manufacturers are violating antitrust laws when they rely upon legal obligations under a safety program that is essentially designed to limit generic entry on safety grounds. The issue is all the more properly removed from the realm of antitrust enforcement given that the problem is actually one of regulatory failure, not market failure.

Second, antitrust law doesn’t impose a duty to deal with rivals except in very limited circumstances. In Trinko, for example, the Court rejected the invitation to extend a duty to deal to situations where an existing, voluntary economic relationship wasn’t terminated. By definition this is unlikely to be the case here where the alleged refusal to deal is what prevents the generic from entering the market in the first place. The logic behind Trinko (and a host of other cases that have limited competitors’ obligations to assist their rivals) was to restrict duty to deal cases to those rare circumstances where it reliably leads to long-term competitive harm — not where it amounts to a perfectly legitimate effort to compete without giving rivals a leg-up.

But antitrust is such a powerful tool and such a flexible “catch-all” regulation, that there are always efforts to thwart reasonable limits on its use. As several of us at TOTM have written about at length in the past, former FTC Commissioner Rosch and former FTC Chairman Leibowitz were vocal proponents of using Section 5 of the FTC Act to circumvent sensible judicial limits on making out and winning antitrust claims, arguing that the limits were meant only for private plaintiffs — not (implicitly infallible) government enforcers. Although no one at the FTC has yet (publicly) suggested bringing a REMS case as a standalone Section 5 case, such a case would be consistent with the sorts of theories that animated past standalone Section 5 cases.

Again, this approach serves as an end-run around the reasonable judicial constraints that evolved as a result of judges actually examining the facts of individual cases over time, and is a misguided way of dealing with what is, after all, fundamentally a regulatory design problem.

The CREATES Act

Senator Lee’s bill, on the other hand, aims to solve the problem with a more straightforward approach by improving the existing regulatory mechanism and by adding a limited judicial remedy to incentivize compliance under the amended regulatory scheme. In summary:

  • The bill creates a cause of action for a refusal to deal only where plaintiff can prove, by a preponderance of the evidence, that certain well-defined conditions are met.
  • For samples, if a drug is not covered by a REMS, or if the generic manufacturer is specifically authorized, then the generic can sue if it doesn’t receive sufficient quantities of samples on commercially reasonable terms. This is not a per se offense subject to outsized antitrust damages. Instead, the remedy is a limited injunction ensuring the sale of samples on commercially reasonable terms, reasonable attorneys’ fees, and a monetary fine limited to revenue earned from sale of the drug during the refusal period.
  • The bill also gives a brand manufacturer an affirmative defense if it can prove by a preponderance of the evidence that, regardless of its own refusal to supply them, samples were nevertheless available elsewhere on commercially reasonable terms, or where the brand manufacturer is unable to supply the samples because it does not actually produce or market the drug.
  • In order to deal with the REMS process problems, the bill creates similar rights with similar limitations when the license holders and generics cannot come to an agreement about a shared REMS on commercially reasonable terms within 120 days of first contact by an eligible developer.
  • The bill also explicitly limits brand manufacturers’ liability for claims “arising out of the failure of an [eligible generic manufacturer] to follow adequate safeguards,” thus removing one of the (perfectly legitimate) objections to the bill pressed by brand manufacturers.

The primary remedy is limited, injunctive relief to end the delay. And brands are protected from frivolous litigation by an affirmative defense under which they need only show that the product is available for purchase on reasonable terms elsewhere. Damages are similarly limited and are awarded only if a court finds that the brand manufacturer lacked a legitimate business justification for its conduct (which, under the drug safety regime, means essentially a reasonable belief that its own REMS plan would be violated by dealing with the generic entrant). And monetary damages do not include punitive damages.

Finally, the proposed bill completely avoids the question of whether antitrust laws are applicable, leaving that possibility open to determination by courts — as is appropriate. Moreover, by establishing even more clearly the comprehensive regulatory regime governing potential generic entrants’ access to dangerous drugs, the bill would, given the holding in Trinko, probably make application of antitrust laws here considerably less likely.

Ultimately, Senator Lee’s bill is a well-thought-out and targeted fix to an imperfect regulation that seems to be facilitating anticompetitive conduct by a few bad actors. It does so without trampling on the courts’ well-established antitrust jurisprudence, and without imposing excessive cost or risk on the majority of brand manufacturers that behave perfectly appropriately under the law.

In a recent Truth on the Market blog post, I summarized the discussion at a May 17 Heritage Foundation program featuring Shanker Singham of the Legatum Institute (a market-oriented London think tank) and me. The program highlighted the problem of anticompetitive government-imposed laws and regulations, which Singham and I refer to as anticompetitive market distortions, or ACMDs:

Trade freedom has increased around the world, according to the 2016 Heritage Foundation Index of Economic Freedom, due to a decrease in trade barriers, particularly tariffs. Despite this progress, many economies struggle with another burden that is increasing costs for families and businesses. Non-tariff barriers and overregulation, in the form of government-imposed laws and regulations, continue to stifle innovation and competition. These onerous and excessive regulations, backed by the power of the state, benefit the well-connected and act as an additional layer of government favoritism. Meanwhile, individuals are strapped with higher costs and fewer options.  

Singham and three colleagues (Srinivasa Rangan of Babson College, Molly Kiniry of the Competere Group, and Robert Bradley of Northeastern University) have now produced an impressive study of the economic impact of ACMDs in India (which has one of the world’s most highly regulated economies), released on May 31 by the Legatum Institute.  The study applies to India’s ACMDs the authors’ “Productivity Simulator,” which aggregates economic data to gauge the theoretical economic growth potential of an economy if ACMDs are eliminated.  Focusing on the full gamut of ACMDs affecting a nation in the areas of property rights, domestic competition, and international competition, the Simulator estimates the potential productivity gains for individual economies as measured in changes to GDP per capita, assuming all ACMDs are eliminated.  Using those productivity estimates, the Simulator can then be employed to derive resultant nation-specific estimates of potential GDP increases from “perfect” regulatory reform.  Although a perfect “regulatory nirvana” may not be achievable in the “real world,” Productivity Simulator estimates have the virtue of spotlighting the magnitude of forgone welfare due to regulatory excesses.  Even assuming a degree of imperfection in Productivity Simulator estimates applied to India, the results are startling, as the Executive Summary to the May 31 report reveals:

 “The [May 31] Study makes the following key findings:

» If India eliminated all its distortions it would be the fifth largest economy in the world, and in GDP per capita terms, it would rise from being ranked 169th to being ranked 67th.

» If India eliminated all its distortions it would generate over 200 million new jobs, and reduce absolute poverty to zero.

» If India improved its insolvency rules, opened up to foreign investment in certain areas and better protected intellectual property rules, the number of people living on less than $2 per day would be reduced from 770 million to 627 million.

» Simply optimising its regulatory environment with regard to the World Bank Doing Business Index would lead to a productivity gain of only 0.07%.

» Improving its insolvency rules, opening up to foreign investment in certain areas and better protecting intellectual property could lead to a productivity gain of 148%.

» Fully optimising its distortions could lead to a productivity gain of 1875%, of which the Indian economy would capture almost 700%.”
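
For a rough sense of scale, a minimal sketch (using a hypothetical $1,600 baseline GDP per capita, not a figure from the study) translates the quoted productivity gains into per-capita terms:

    # Illustrative conversion only; the $1,600 baseline is hypothetical.
    def after_gain(gdp_per_capita: float, gain_pct: float) -> float:
        """GDP per capita after a percentage productivity gain."""
        return gdp_per_capita * (1 + gain_pct / 100)

    base = 1600.0
    for gain in (0.07, 148.0, 1875.0):  # percentage gains quoted in the findings
        print(f"{gain:>7.2f}% gain -> ${after_gain(base, gain):,.0f} per capita")
    # 0.07% barely moves the needle; 148% more than doubles income;
    # 1875% raises it nearly twentyfold.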

I look forward to further application of the Productivity Simulator to other economies.  Research reports of this sort, in conjunction with studies carried out by the World Bank and the Organization for Economic Cooperation and Development that employ other methodologies, build a strong case for sweeping market-oriented regulatory reform, in foreign countries and in the United States.

I have previously written at this site (see here, here, and here) and elsewhere (see here, here, and here) about the problem of anticompetitive market distortions (ACMDs), government-supported (typically crony capitalist) rules that weaken the competitive process, undermine free trade, slow economic growth, and harm consumers.  On May 17, the Heritage Foundation hosted a presentation by Shanker Singham of the Legatum Institute (a London think tank) and me on recent research and projects aimed at combatting ACMDs.

Singham began his remarks by noting that from the late 1940s to the early 1990s, trade negotiations under the auspices of the General Agreement on Tariffs and Trade (GATT) (succeeded by the World Trade Organization (WTO)) were highly successful in reducing tariffs and certain non-tariff barriers, and in promoting agreements to deal with trade-related aspects of such areas as government procurement, services, investment, and intellectual property, among others. Regrettably, however, liberalization of trade restraints at the border was not matched by procompetitive regulatory reform inside borders. Indeed, to the contrary, ACMDs have continued to proliferate, harming competition, consumers, and economic welfare. As Singham further explained, the problem is particularly acute in developing countries: “Because of the failure of early [regulatory] reform in the 1990s which empowered oligarchs and created vested interests in the whole of the developing world, national level reform is extremely difficult.”

To highlight the seriousness of the ACMD problem, Singham and several colleagues have developed a proprietary “Productivity Simulator” that estimates potential national economic output based on measures of the effectiveness of domestic competition, international competition, and property rights protections within individual nations. (The stronger the protections, the greater the potential of the free market to create wealth.) The Productivity Simulator is able to show, with a regression accuracy of 90%, the potential gains from reducing distortions in a given country. Every country has its own curve in the Simulator; it is a curve because the gains grow exponentially as one moves to the most difficult reforms. If all distortions in the world were eliminated (in effect, the ceiling of human potential), the Simulator predicts that global GDP would rise by 1100% (a conservative estimate, because the Simulator could not be applied to certain heavily distorted economies for which data were unavailable). By illustrating the huge “dollars and cents” magnitude of economic losses due to anticompetitive distortions, the Simulator could make the ACMD problem more concrete and thereby help invigorate reform efforts.
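
Purely to illustrate why such a curve steepens (this is an assumed functional form, not the actual Simulator model): if each successive reform multiplies productivity by a constant factor, cumulative gains grow exponentially, so the last, hardest reforms account for most of the potential gain.

    # Assumed exponential form for illustration; not the Legatum methodology.
    factor = 1.25  # hypothetical 25% productivity gain per completed reform
    for n_reforms in (1, 4, 8, 12):
        gain_pct = (factor ** n_reforms - 1) * 100
        print(f"{n_reforms:>2} reforms -> cumulative gain {gain_pct:,.0f}%")
    # 1 -> 25%, 4 -> 144%, 8 -> 496%, 12 -> 1,355%:
    # the later, harder reforms add far more than the earlier ones.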

Singham also has adapted his Simulator technique to demonstrate the potential for economic growth in proposed “Enterprise Cities” (“e-Cities”), free-market-oriented zones within a country that avoid ACMDs and provide strong property rights and rule-of-law protections. (Existing city-states such as Hong Kong, Singapore, and Dubai already possess e-City characteristics.) Individual e-City laws, regulations, and dispute-resolution mechanisms are negotiated between individual governments and entrepreneurial project teams headed by Singham. (Already, potential e-Cities are under consideration in Morocco, Saudi Arabia, Bosnia & Herzegovina, and Somalia.) Private investors would be attracted to e-Cities due to their free-market regulatory climate and legal protections. To the extent that e-Cities are launched and thrive, they may serve as “demonstration projects” for the welfare benefits of dismantling ACMDs.

Following Singham’s presentation, I discussed analyses of the ACMD problem carried out in recent years by major international organizations, including the World Bank, the Organization for Economic Cooperation and Development (OECD, an economic think tank funded by developed countries), and the International Competition Network (ICN, a network of national competition agencies and expert legal and economic advisers that produces non-binding “best practices” recommendations dealing with competition law and policy). The OECD’s “Competition Assessment Toolkit” is a how-to manual for ferreting out ACMDs: it “helps governments to eliminate barriers to competition by providing a method for identifying unnecessary restraints on market activities and developing alternative, less restrictive measures that still achieve government policy objectives.” The OECD has used the Toolkit to demonstrate the huge economic cost to the Greek economy (5.2 billion euros) of just a very small subset of anticompetitive regulations. The ICN has drawn on Toolkit principles in developing “Recommended Practices on Competition Assessment” that national competition agencies can apply in opposing ACMDs. In a related vein, the ICN has also produced a “Competition Culture Project Report” that provides useful survey-based analysis that competition agencies could draw upon to generate public support for dismantling ACMDs. The World Bank has cooperated with ICN advocacy efforts: it has sponsored annual World Bank forums, held in conjunction with ICN annual conferences, featuring industry-specific studies of the costs of regulatory restrictions, and, beginning in 2015, it has joined with the ICN in supporting annual “competition advocacy contests” in which national competition agencies highlight economic improvements due to specific regulatory reform successes.

Developed countries also suffer from ACMDs. For example, occupational licensing restrictions in the United States affect over a quarter of the work force, and, according to a 2015 White House Report, “licensing requirements raise the price of goods and services, restrict employment opportunities, and make it more difficult for workers to take their skills across State lines.” Moreover, the multibillion-dollar cost burden of federal regulations continues to grow rapidly, as documented by the Heritage Foundation’s annual “Red Tape Rising” reports.

I closed my presentation by noting that statutory international trade law reforms operating at the border could complement efforts to reduce regulatory burdens operating inside the border.  In particular, I cited my 2015 Heritage study recommending that United States antidumping law be revised to adopt a procompetitive antitrust-based standard (in contrast to the current approach that serves as an unjustified tax on certain imports).  I also noted the importance of ensuring that trade laws protect against imports that violate intellectual property rights, because such imports undermine competition on the merits.

In sum, the effort to reduce the burdens of ACMDs continues, highlighted in research, proposed demonstration projects, and initiatives to spur regulatory reform. It is a long-term undertaking very much worth pursuing, even though its near-term successes may prove minor at best.

In an effort to control drug spending, several states are considering initiatives that will impose new price controls on prescription drugs. Ballot measures under consideration in California and Ohio will require drug companies to sell drugs under various state programs at a mandated discount. And legislators in Massachusetts and Pennsylvania have drafted bills that would create new government commissions to regulate the price of drugs. These state initiatives have followed proposals by presidential nominees to enact new price controls to address the high costs of pharmaceuticals.

As I explain in a new study, further price controls are a bad idea for several reasons.

First, as I discussed in a previous post, several government programs, such as Medicaid, the 340B Program, the Department of Defense and Veterans Affairs drug programs, and spending in the coverage gap of Medicare Part D, already impose price controls. Under these programs, required rebates are typically calculated as set percentages off of a drug company’s average drug price. But this approach gives drug companies an incentive to raise prices: because the rebate is a fixed percentage of the average price, a manufacturer can raise its list price enough to offset the mandated discount.
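
To see the incentive concretely, consider a minimal arithmetic sketch (the numbers are hypothetical, not any program’s actual rebate formula): when the required rebate is a fixed percentage of the price, a manufacturer can hit any net-price target simply by raising its list price.

    # Illustrative sketch only; the 23% rebate rate is hypothetical.
    # A rebate set as a fixed percentage of price is fully offset
    # by a sufficiently higher list price.
    def list_price_for_target_net(target_net: float, rebate_rate: float) -> float:
        """List price at which price minus the percentage rebate equals target_net."""
        return target_net / (1 - rebate_rate)

    print(list_price_for_target_net(100.0, 0.23))  # ~129.87: nets $100 despite the 23% rebate

The same logic holds at any rebate percentage short of 100%, which is why a percentage-based rebate cannot, by itself, hold net prices down.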

Second, over 40 percent of drugs sold in the U.S. are sold under government programs that mandate price controls. With such a large share of their drugs sold at significant discounts, drug companies have the incentive to charge even higher prices to other non-covered patients to offset the discounts. Indeed, numerous studies and government analyses have concluded that required discounts under Medicaid and Medicare have resulted in increased prices for other consumers as manufacturers seek to offset revenue lost under price controls.
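
A back-of-the-envelope sketch (with hypothetical figures) illustrates the cost-shifting mechanism: if a fixed share of units must be sold at a mandated discount, a manufacturer can preserve its average revenue by charging the remaining buyers more.

    # Illustrative sketch only; all figures are hypothetical.
    def noncovered_price(avg_target: float, covered_share: float,
                         covered_discount: float, base_price: float) -> float:
        """Price for non-covered buyers that keeps average revenue per unit at
        avg_target, given covered buyers pay base_price * (1 - covered_discount)."""
        covered_revenue = covered_share * base_price * (1 - covered_discount)
        return (avg_target - covered_revenue) / (1 - covered_share)

    # If 40% of units must be sold at a 30% discount off a $100 price, the
    # other 60% of buyers must pay about $120 to keep average revenue at $100:
    print(noncovered_price(100.0, 0.40, 0.30, 100.0))  # ~120.0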

Third, evidence suggests that price controls contribute to significant drug shortages: at a below-market price, the demand for drugs exceeds the amount of drugs that manufacturers are willing or able to sell.

Fourth, price controls hinder innovation in the pharmaceutical industry. Brand drug companies incur an average of $2.6 billion in costs to bring each new drug to market with FDA approval. They must offset these significant costs with revenues earned during the patent period; within 3 months after patent expiry, generic competitors will have already captured over 70 percent of the brand drugs’ market share and significantly eroded their profits. But price controls imposed on drugs under patent increase the risk that drug companies will not earn the profits they need to offset their development costs (only 20% of marketed brand drugs ever earn enough sales to cover their development cost). The result will be less R&D spending and less innovation. Indeed, a substantial body of empirical literature establishes that pharmaceutical firms’ profitability is linked to their research and development efforts and innovation.
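
A stylized portfolio calculation (hedged assumptions, not actual industry accounting) shows the scale of the problem: if each approved drug costs $2.6 billion to develop and only one in five ever covers that cost, the successful drugs must earn a multiple of their own development cost just to keep the overall portfolio at break-even.

    # Stylized break-even sketch; the 50% recovery rate for unsuccessful
    # drugs is a hypothetical simplifying assumption.
    dev_cost = 2.6e9        # average development cost per approved drug (per the study cited)
    success_rate = 0.20     # share of marketed drugs that ever cover their development cost
    loser_recovery = 0.50   # assumed fraction of cost recouped by the other 80%

    # Revenue each successful drug must earn for the portfolio to break even:
    required = (dev_cost - (1 - success_rate) * loser_recovery * dev_cost) / success_rate
    print(f"${required / 1e9:.1f}B per successful drug")  # ~$7.8B on these assumptions

Price controls that shave revenue from those few successful drugs therefore cut directly into the margin that funds the rest of the pipeline.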

Instead of imposing price controls, the government should increase drug competition in order to reduce drug spending without these negative consequences. Increased drug competition will expand product offerings, giving consumers more choice in the drugs they take. It will also lower prices and spur innovation as suppliers compete to attain or protect valuable market share from rivals.

First, the FDA should reduce the backlog of generic drugs awaiting approval. The single most important factor in controlling drug spending in recent decades has been the dramatic increase in generic drug usage; generic drugs have saved consumers $1.68 trillion over the past decade. But the degree to which generics reduce drug prices depends on the number of generic competitors in the market; the more competitors, the more price competition and downward pressure on prices. Unfortunately, a backlog of generic drug approvals at the FDA has restricted generic competition in many important market segments. There are currently over 3,500 generic applications pending approval; fast-tracking these FDA approvals will provide consumers with many new lower-priced drug options.

Second, regulators should expedite the approval and acceptance of biosimilars—the generic counterparts to high-priced biologic drugs. Biologic drugs are different from traditional medications because they are based on living organisms and, as a result, are far more complex and expensive to develop. By 2013, spending on biologic drugs comprised a quarter of all drug spending in the U.S., and their share of drug spending is expected to increase significantly over the next decade. Unfortunately, the average cost of a biologic drug is 22 times greater than that of a traditional drug, making biologics prohibitively expensive for many consumers.

Fortunately, Congress has recognized the need for cheaper, “generic” substitutes for biologic drugs—or biosimilars. As part of the Affordable Care Act, Congress created a biosimilars approval pathway that would enable these cheaper biologic drugs to obtain FDA approval and reach patients more quickly. Nevertheless, the FDA has approved only one biosimilar for use in the U.S. despite several pending biosimilar applications. The agency has also yet to provide any meaningful guidance as to what standards it will employ in determining whether a biosimilar is interchangeable with a biologic. Burdensome requirements for interchangeability increase the difficulty and cost of biosimilar approval and limit the ease of biosimilar substitution at pharmacies.

Expediting the approval of biosimilars will increase competition in the market for biologic drugs, reducing prices and allowing more patients access to these life-saving and life-enhancing treatments. Estimates suggest that a biosimilar approval pathway at the FDA will save U.S. consumers between $44 billion and $250 billion over the next decade.

The recent surge in drug spending must be addressed to ensure that patients can continue to afford life-saving and life-enhancing medications. However, proposals calling for new price controls are the wrong approach. While superficially appealing, price controls may produce unintended consequences (less innovation, drug shortages, and higher prices for some consumers) that harm patients rather than help them. In contrast, promoting competition will lower pharmaceutical prices and drug spending without these deleterious effects.


As ICLE argued in its amicus brief, the Second Circuit’s ruling in United States v. Apple Inc. is in direct conflict with the Supreme Court’s 2007 Leegin decision, and creates a circuit split with the Third Circuit based on that court’s Toledo Mack ruling. Moreover, the negative consequences of the court’s ruling will be particularly acute for modern, high-technology sectors of the economy, where entrepreneurs planning to deploy new business models will now face exactly the sort of artificial deterrents that the Court condemned in Trinko:

Mistaken inferences and the resulting false condemnations are especially costly, because they chill the very conduct the antitrust laws are designed to protect.

Absent review by the Supreme Court to correct the Second Circuit’s error, the result will be less-vigorous competition and a reduction in consumer welfare. The Court should grant certiorari.

The Second Circuit committed a number of important errors in its ruling.

First, as the Supreme Court held in Leegin, condemnation under the per se rule is appropriate

only for conduct that would always or almost always tend to restrict competition… [and] only after courts have had considerable experience with the type of restraint at issue.

Neither is true in this case. The use of MFNs in Apple’s contracts with the publishers and its adoption of the so-called “agency model” for e-book pricing have never been reviewed by the courts in a setting like this one, let alone found to “always or almost always tend to restrict competition.” There is no support in the case law or economic literature for the proposition that agency models or MFNs used to facilitate entry by new competitors in platform markets like this one are anticompetitive.

Second, the court of appeals emphasized that in some cases e-book prices increased after Apple’s entry, and it viewed that fact as strong support for application of the per se rule. But the Court in Leegin made clear that the per se rule is inappropriate where, as here, “prices can be increased in the course of promoting procompetitive effects.”  

What the Second Circuit missed is that competition occurs on many planes other than price; higher prices do not necessarily suggest decreased competition or anticompetitive effects. As Josh Wright points out:

[T]he multi-dimensional nature of competition implies that antitrust analysis seeking to maximize consumer or total welfare must inevitably calculate welfare tradeoffs when innovation and price effects run in opposite directions.

Higher prices may accompany welfare-enhancing “competition on the merits,” resulting in greater investment in product quality, reputation, innovation, or distribution mechanisms.

While the court acknowledged that “[n]o court can presume to know the proper price of an ebook,” its analysis nevertheless rested on the presumption that Amazon’s prices before Apple’s entry were competitive. The record, however, offered no support for that presumption, and thus no support for the inference that post-entry price increases were anticompetitive.

In fact, as Alan Meese has pointed out, a restraint might increase prices precisely because it overcomes a market failure:

[P]roof that a restraint alters price or output when compared to the status quo ante is at least equally consistent with an alternative explanation, namely, that the agreement under scrutiny corrects a market failure and does not involve the exercise or creation of market power. Because such failures can result in prices that are below the optimum, or output that is above it, contracts that correct or attenuate market failure will often increase prices or reduce output when compared to the status quo ante. As a result, proof that such a restraint alters price or other terms of trade is at least equally consistent with a procompetitive explanation, and thus cannot give rise to a prima facie case under settled antitrust doctrine.

Before Apple’s entry, Amazon controlled 90% of the e-books market, and the publishers had for years been unable to muster sufficient bargaining power to renegotiate the terms of their contracts with Amazon. At the same time, Amazon’s pricing strategies as a nascent platform developer in a burgeoning market (that it was, in practical effect, trying to create) likely did not always produce prices that would be optimal under evolving market conditions as the market matured. The fact that prices may have increased following the alleged anticompetitive conduct cannot support an inference that the conduct was anticompetitive.

Third, the Second Circuit also made a mistake in dismissing Apple’s defenses. The court asserted that

this defense — that higher prices enable more competitors to enter a market — is no justification for a horizontal price‐fixing conspiracy.

But the court is incorrect. As Bill Kolasky points out in his post, it is well-accepted that otherwise-illegal agreements that are ancillary to a procompetitive transaction should be evaluated under the rule of reason.

It was not that Apple couldn’t enter unless Amazon’s prices (and its own) were increased. Rather, the contention made by Apple was that it could not enter unless it was able to attract a critical mass of publishers to its platform – a task which required some sharing of information among the publishers – and unless it was able to ensure that Amazon would not artificially lower its prices to such an extent that it would prevent Apple from attracting a critical mass of readers to its platform. The MFN and the agency model were thus ancillary restraints that facilitated the transactions between Apple and the publishers and between Apple and iPad purchasers. In this regard they are appropriately judged under the rule of reason and, under the rule of reason, offer a valid procompetitive justification for the restraints.

And it was the fact of Apple’s entry, not the use of vertical restraints in its contracts, that enabled the publishers to wield the bargaining power sufficient to move Amazon to the agency model. The court itself noted that the introduction of the iPad and iBookstore “gave publishers more leverage to negotiate for alternative sales models or different pricing.” And as Ben Klein noted at trial,

Apple’s entry probably gave the publishers an increased ability to threaten [Amazon sufficiently that it accepted the agency model]…. The MFN [made] a trivial change in the publishers’ incentives…. The big change that occurs is the change on the other side of the bargaining situation after Apple comes in where Amazon now cannot just tell them no.

Fourth, the purpose of applying the per se rule is to root out activities that always or almost always harm competition. Although it’s possible that a horizontal agreement that facilitates entry and increases competition could be subject to the per se rule, in this case its application was inappropriate. The novelty of Apple’s arrangement with the publishers, coupled with the weakness of the proof of any actual price fixing, fails to meet even the minimal threshold that would justify application of the per se rule.

Not all horizontal arrangements are per se illegal. If an arrangement is relatively novel, facilitates entry, and is patently different from naked price fixing, it should be reviewed under the rule of reason. See BMI. All of those conditions are met here.

The conduct of the publishers – distinct from their agreements with Apple – to find some manner of changing their contracts with Amazon is not itself price fixing, either. The prices themselves would be set only subsequent to whatever new contracts were adopted. At worst, the conduct of the publishers in working toward new contracts with Amazon can be characterized as a facilitating practice.

But even then, the precedent of the Court counsels against applying the per se rule to facilitating practices such as the mere dissemination of price information or, as in this case, information regarding the parties’ preferred, bilateral, contractual relationships. As the Second Circuit itself once held, following the Supreme Court,  

[the] exchange of information is not illegal per se, but can be found unlawful under a rule of reason analysis.

In other words, even the behavior of the publishers should be analyzed under a rule of reason – and Apple’s conduct in facilitating that behavior cannot be imbued with complicity in a price-fixing scheme that may not have existed at all.

Fifth, in order for conduct to “eliminate price competition,” there must be price competition to begin with. But as the district court itself noted, the publishers do not compete on price. This point is oft-overlooked in discussions of the case. It is perhaps possible to say that the contract terms at issue and the publishers’ pressure on Amazon affected price competition between Apple and Amazon – but even then it cannot be said to have reduced competition, because, absent Apple’s entry, there was no competition at all between Apple and Amazon.

It’s true that, if all Apple’s entry did was to transfer identical e-book sales from Amazon to Apple, at higher prices and therefore lower output, it might be difficult to argue that Apple’s entry was procompetitive. But the myopic focus on e-book titles without consideration of product differentiation is mistaken, as well.

The relevant competition here is between Apple and Amazon at the platform level. As explained above, it is misleading to look solely at prices in evaluating the market’s competitiveness. Provided that switching costs are low enough and information about the platforms is available to consumers, consumer welfare may have been enhanced by competition between the platforms on a range of non-price dimensions, including, for example: the Apple iBookstore’s distinctive design, Apple’s proprietary file format, features on Apple’s iPad that were unavailable on Kindle Readers, Apple’s use of a range of marketing incentives unavailable to Amazon, and Apple’s algorithmic matching between its data and consumers’ e-book purchases.

While it’s difficult to disentangle Apple’s entry from other determinants of consumers’ demand for e-books, and even harder to establish with certainty the “but-for” world, it is nonetheless telling that the e-book market has expanded significantly since Apple’s entry, and that purchases of both iPads and Kindles have increased, as well.

There is, in other words, no clear evidence that consumers viewed the two products as perfect substitutes, and thus there is no evidence that Apple’s entry merely caused a non-welfare-enhancing substitution from Amazon to Apple. At minimum, there is no basis for treating the contract terms that facilitated Apple’s entry under a per se standard.

***

The point, in sum, is that there is in fact substantial evidence that Apple’s entry was procompetitive, that there was no price-fixing scheme of which Apple was a part, and absolutely no evidence that the vertical restraints at issue in the case were the sort that should presumptively give rise to liability. Not only was application of the per se rule inappropriate, but, to answer Richard Epstein, there is strong evidence that Apple should win under a rule of reason analysis, as well.