
Last week the International Center for Law & Economics and I filed an amicus brief in the DC Circuit in support of en banc review of the court’s decision to uphold the FCC’s 2015 Open Internet Order.

In our previous amicus brief before the panel that initially reviewed the OIO, we argued, among other things, that

In order to justify its Order, the Commission makes questionable use of important facts. For instance, the Order’s ban on paid prioritization ignores and mischaracterizes relevant record evidence and relies on irrelevant evidence. The Order also omits any substantial consideration of costs. The apparent necessity of the Commission’s aggressive treatment of the Order’s factual basis demonstrates the lengths to which the Commission must go in its attempt to fit the Order within its statutory authority.

Our brief supporting en banc review builds on these points to argue that

By reflexively affording substantial deference to the FCC in affirming the Open Internet Order (“OIO”), the panel majority’s opinion is in tension with recent Supreme Court precedent….

The panel majority need not have, and arguably should not have, afforded the FCC the level of deference that it did. The Supreme Court’s decisions in State Farm, Fox, and Encino all require a more thorough vetting of the reasons underlying an agency change in policy than is otherwise required under the familiar Chevron framework. Similarly, Brown & Williamson, Utility Air Regulatory Group, and King all indicate circumstances in which an agency construction of an otherwise ambiguous statute is not due deference, including when the agency interpretation is a departure from longstanding agency understandings of a statute or when the agency is not acting in an expert capacity (e.g., its decision is based on changing policy preferences, not changing factual or technical considerations).

In effect, the panel majority based its decision whether to afford the FCC deference upon deference to the agency’s poorly supported assertions that it was due deference. We argue that this is wholly inappropriate in light of recent Supreme Court cases.

Moreover,

The panel majority failed to appreciate the importance of granting Chevron deference to the FCC. That importance is most clearly seen at an aggregate level. In a large-scale study of every Court of Appeals decision between 2003 and 2013, Professors Kent Barnett and Christopher Walker found that a court’s decision to defer to agency action is uniquely determinative in cases where, as here, an agency is changing established policy.

Kent Barnett & Christopher J. Walker, Chevron In the Circuit Courts 61, Figure 14 (2016), available at ssrn.com/abstract=2808848.

Figure 14 from Barnett & Walker, as reproduced in our brief.

As that study demonstrates,

agency decisions to change established policy tend to present serious, systematic defects — and [thus that] it is incumbent upon this court to review the panel majority’s decision to reflexively grant Chevron deference. Further, the data underscore the importance of the Supreme Court’s command in Fox and Encino that agencies show good reason for a change in policy; its recognition in Brown & Williamson and UARG that departures from existing policy may fall outside of the Chevron regime; and its command in King that policies not made by agencies acting in their capacity as technical experts may fall outside of the Chevron regime. In such cases, the Court essentially holds that reflexive application of Chevron deference may not be appropriate because these circumstances may tend toward agency action that is arbitrary, capricious, in excess of statutory authority, or otherwise not in accordance with law.

As we conclude:

The present case is a clear example where greater scrutiny of an agency’s decision-making process is both warranted and necessary. The panel majority all too readily afforded the FCC great deference, despite the clear and unaddressed evidence of serious flaws in the agency’s decision-making process. As we argued in our brief before the panel, and as Judge Williams recognized in his partial dissent, the OIO was based on factually inaccurate, contradicted, and irrelevant record evidence.

Read our full — and very short — amicus brief here.

Yesterday, the International Center for Law & Economics filed reply comments in the docket of the FCC’s Broadband Privacy NPRM. ICLE was joined in its comments by the following scholars of law & economics:

  • Babette E. Boliek, Associate Professor of Law, Pepperdine School of Law
  • Adam Candeub, Professor of Law, Michigan State University College of Law
  • Justin (Gus) Hurwitz, Assistant Professor of Law, Nebraska College of Law
  • Daniel Lyons, Associate Professor, Boston College Law School
  • Geoffrey A. Manne, Executive Director, International Center for Law & Economics
  • Paul H. Rubin, Samuel Candler Dobbs Professor of Economics, Emory University Department of Economics

As in our initial comments, we drew on the economic scholarship of multi-sided platforms to argue that the FCC failed to consider the ways in which asymmetric regulation will ultimately have negative competitive effects and harm consumers. The FCC and some critics claimed that ISPs are gatekeepers deserving of special regulation — a case that both the FCC and the critics failed to make.

The NPRM fails adequately to address these issues, to make out an adequate case for the proposed regulation, or to justify treating ISPs differently than other companies that collect and use data.

Perhaps most important, the NPRM also fails to acknowledge or adequately assess the actual market in which the use of consumer data arises: the advertising market. Whether intentionally or not, this NPRM is not primarily about regulating consumer privacy; it is about keeping ISPs out of the advertising business. But in this market, ISPs are upstarts challenging the dominant position of firms like Google and Facebook.

Placing onerous restrictions upon ISPs alone results in either under-regulation of edge providers or over-regulation of ISPs within the advertising market, without any clear justification as to why consumer privacy takes on different qualities for each type of advertising platform. But the proper method of regulating privacy is, in fact, the course that both the FTC and the FCC have historically taken, and which has yielded a stable, evenly administered regime: case-by-case examination of actual privacy harms and a minimalist approach to ex ante, proscriptive regulations.

We also responded to particular claims made by New America’s Open Technology Institute about the expectations of consumers regarding data collection online, the level of competitiveness in the marketplace, and the technical realities that differentiate ISPs from edge providers.

OTI attempts to substitute its own judgment of what consumers (should) believe about their data for that of consumers themselves. And in the process it posits a “context” that can and will never shift as new technology and new opportunities emerge. Such a view of consumer expectations is flatly anti-innovation and decidedly anti-consumer, consigning broadband users to yesterday’s technology and business models. The rule OTI supports could effectively forbid broadband providers from offering consumers the option to trade data for lower prices.

Our reply comments went on to point out that much of the basis upon which the NPRM relies — an alleged lack of adequate competition among ISPs — was actually a “manufactured scarcity” based upon the Commission’s failure to properly analyze the relevant markets.

The Commission’s claim that ISPs, uniquely among companies in the modern data economy, face insufficient competition in the broadband market is… insufficiently supported. The flawed manner in which the Commission has defined the purported relevant market for broadband distorts the analysis upon which the proposed rules are based, and manufactures a false scarcity in order to justify unduly burdensome privacy regulations for ISPs. Even the Commission’s own data suggest that consumer choice is alive and well in broadband… The reality is that there is in fact enough competition in the broadband market to offer privacy-sensitive consumers options if they are ever faced with what they view as overly invasive broadband business practices. According to the Commission, as of December 2014, 74% of American homes had a choice of two or more wired ISPs delivering download speeds of at least 10 Mbps, and 88% had a choice of at least two providers of 3 Mbps service. Meanwhile, 93% of consumers have access to at least three mobile broadband providers. Looking forward, consumer choice at all download speeds is increasing at rapid rates due to extensive network upgrades and new entry in a highly dynamic market.

Finally, we rebutted the contention that predictive analytics was a magical tool that would enable ISPs to dominate information gathering and would, consequently, lead to consumer harms — even where ISPs had access only to seemingly trivial data about users.

Some comments in support of the proposed rules attempt to cast ISPs as all-powerful by virtue of their access to apparently trivial data — IP addresses, access timing, computer ports, etc. — because of the power of predictive analytics. These commenters assert that the possibility of predictive analytics coupled with a large data set undermines research demonstrating that ISPs, thanks to increasing encryption, do not have access to better-quality data than edge providers themselves have, and probably have lower-quality data.

But this is a curious bit of reasoning. It essentially amounts to the idea that, not only should consumers be permitted to control with whom their data is shared, but that all other parties online should be proscribed from making their own independent observations about consumers. Such a rule would be akin to telling supermarkets that they are not entitled to observe traffic patterns in their stores in order to place particular products in relatively more advantageous places, for example. But the reality is that most data is noise; simply having more of it is not necessarily a boon, and predictive analytics is far from a panacea. In fact, the insights gained from extensive data collection are frequently useless when examining very large data sets, and are better employed by single firms answering particular questions about their users and products.

Our full reply comments are available here.

In the wake of the recent OIO decision, separation of powers issues should be at the forefront of everyone’s mind. In reaching its decision, the DC Circuit relied upon Chevron to justify its extreme deference to the FCC. The court held, for instance, that

Our job is to ensure that an agency has acted “within the limits of [Congress’s] delegation” of authority… and that its action is not “arbitrary, capricious, an abuse of discretion, or otherwise not in accordance with law.”… Critically, we do not “inquire as to whether the agency’s decision is wise as a policy matter; indeed, we are forbidden from substituting our judgment for that of the agency.”… Nor do we inquire whether “some or many economists would disapprove of the [agency’s] approach” because “we do not sit as a panel of referees on a professional economics journal, but as a panel of generalist judges obliged to defer to a reasonable judgment by an agency acting pursuant to congressionally delegated authority.”

The DC Circuit’s decision takes a broad view of Chevron deference and, in so doing, ignores or dismisses some of the limits placed upon the doctrine by cases like Michigan v. EPA and UARG v. EPA (though Judge Williams does bring up UARG in dissent).

Whatever one thinks of the validity of the FCC’s approach to regulating the Internet, there is no question that it has, at best, a weak statutory foothold. Without prejudging the merits of the OIO, or the question of deference to agencies that find “[regulatory] elephants in [statutory] mouseholes,” such broad claims of authority, based on such limited statutory language, should give one pause. That the court upheld the FCC’s interpretation of the Act without expressing reservations, suggesting any limits, or admitting of any concrete basis for challenging the agency’s authority beyond circular references to “abuse of discretion” is deeply troubling.

Separation of powers is a fundamental feature of our democracy, and one that has undoubtedly contributed to the longevity of our system of self-governance. Not least among the important features of separation of powers is the ability of courts to review the lawfulness of legislation and executive action.

The founders presciently realized the dangers of allowing one part of the government to centralize power in itself. In Federalist 47, James Madison observed that

The accumulation of all powers, legislative, executive, and judiciary, in the same hands, whether of one, a few, or many, and whether hereditary, self-appointed, or elective, may justly be pronounced the very definition of tyranny. Were the federal Constitution, therefore, really chargeable with the accumulation of power, or with a mixture of powers, having a dangerous tendency to such an accumulation, no further arguments would be necessary to inspire a universal reprobation of the system. (emphasis added)

The modern administrative apparatus has become the sort of governmental body that the founders feared and that we have somehow grown to accept. The FCC is not alone in this: any member of the alphabet soup that constitutes our administrative state, whether “independent” or otherwise, is typically vested with great, essentially unreviewable authority over the economy and our daily lives.

As Justice Thomas so aptly put it in his must-read concurrence in Michigan v. EPA:

Perhaps there is some unique historical justification for deferring to federal agencies, but these cases reveal how paltry an effort we have made to understand it or to confine ourselves to its boundaries. Although we hold today that EPA exceeded even the extremely permissive limits on agency power set by our precedents, we should be alarmed that it felt sufficiently emboldened by those precedents to make the bid for deference that it did here. As in other areas of our jurisprudence concerning administrative agencies, we seem to be straying further and further from the Constitution without so much as pausing to ask why. We should stop to consider that document before blithely giving the force of law to any other agency “interpretations” of federal statutes.

Administrative discretion is fantastic — until it isn’t. If your party is the one in power, unlimited discretion gives your side the ability to run down a wish list, checking off controversial items that could never make it past a deliberative body like Congress. That same discretion, however, becomes a nightmare under extreme deference as political opponents, newly in power, roll back preferred policies. In the end, regulation tends toward the extremes, on both sides, and ultimately consumers and companies pay the price in the form of excessive regulatory burdens and extreme uncertainty.

In theory, it is (or should be) left to the courts to rein in agency overreach. Unfortunately, courts have been relatively unwilling to push back on the administrative state, leaving the task up to Congress. And Congress, too, has, over the years, found too much it likes in agency power to seriously take on the structural problems that give agencies effectively free rein. At least, until recently.

In March of this year, Representative Ratcliffe (R-TX) proposed HR 4768: the Separation of Powers Restoration Act (“SOPRA”). Arguably this is the first real effort to fix the underlying problem since the 1995 “Comprehensive Regulatory Reform Act” (although, it should be noted, SOPRA is far more targeted than was the CRRA). Under SOPRA, 5 U.S.C. § 706 — the enacted portion of the APA that deals with judicial review of agency actions — would be amended to read as follows (with the new language highlighted):

(a) To the extent necessary to decision and when presented, the reviewing court shall determine the meaning or applicability of the terms of an agency action and decide de novo all relevant questions of law, including the interpretation of constitutional and statutory provisions, and rules made by agencies. Notwithstanding any other provision of law, this subsection shall apply in any action for judicial review of agency action authorized under any provision of law. No law may exempt any such civil action from the application of this section except by specific reference to this section.

These changes to the scope of review would operate as a much-needed check on the unlimited discretion that agencies currently enjoy. They give courts the ability to review “de novo all relevant questions of law,” which includes agencies’ interpretations of their own rules.

The status quo has created a negative feedback cycle. The Chevron doctrine, as it has played out, gives outsized incentives to both federal agencies and courts to essentially disregard Congress’s intended meaning for particular statutes. Today an agency can write rules and make decisions safe in the knowledge that Chevron will likely insulate it from any truly serious probing by a district court with regard to how well the agency’s action actually matches up with congressional intent or with even rudimentary cost-benefit analysis.

Defenders of the administrative state may balk at changing this state of affairs, of course. But defending an institution that is almost entirely immune from judicial and legal review seems to be a particularly hard row to hoe.

Public Knowledge, for instance, claims that

Judicial deference to agency decision-making is critical in instances where Congress’ intent is unclear because it balances each branch of government’s appropriate role and acknowledges the realities of the modern regulatory state.

To quote Justice Scalia, an unfortunate champion of the Chevron doctrine, this is “pure applesauce.”

The very core of the problem that SOPRA addresses is that the administrative state is not a proper branch of government — it’s a shadow system of quasi-legislation and quasi-legal review. Congress can be chastened by popular vote. Judges who abuse discretion can be overturned (or impeached). The administrative agencies, on the other hand, are insulated through doctrines like Chevron and Auer, while their personnel are subject, more or less, to the political whims of the executive branch.

Even agencies directly under the control of the executive branch — let alone independent agencies — become petrified caricatures of their original design as layers of bureaucratic rule and custom accrue over years, eventually turning the organization into an entity that serves, more or less, to perpetuate its own existence.

Other supporters of the status quo actually identify the unreviewable see-saw of agency discretion as a feature, not a bug:

Even people who agree with the anti-government premises of the sponsors [of SOPRA] should recognize that a change in the APA standard of review is an inapt tool for advancing that agenda. It is shortsighted, because it ignores the fact that, over time, political administrations change. Sometimes the administration in office will generally be in favor of deregulation, and in these circumstances a more intrusive standard of judicial review would tend to undercut that administration’s policies just as surely as it may tend to undercut a more progressive administration’s policies when the latter holds power. The APA applies equally to affirmative regulation and to deregulation.

But presidential elections — far from justifying this extreme administrative deference — actually make the case for trimming the sails of the administrative state. Presidential campaigns have become, in important part, contests over how candidates will wield the immense regulatory power vested in the executive branch.

Thus, for example, as part of his presidential bid, Jeb Bush indicated he would use the EPA to roll back every policy that Obama had put into place. One of Donald Trump’s allies suggested that Trump “should turn off [CNN’s] FCC license” in order to punish the news agency. And VP hopeful Elizabeth Warren has suggested using the FDIC to limit the growth of financial institutions, and using the FCC and FTC to tilt the markets to make it easier for small companies to get an advantage over the “big guys.”

Far from being neutral, technocratic administrators of complex social and economic matters, administrative agencies have become one more political weapon of majority parties as they make the case for how their candidates will use all the power at their disposal — and more — to work their will.

As Justice Thomas, again, noted in Michigan v. EPA:

In reality…, agencies “interpreting” ambiguous statutes typically are not engaged in acts of interpretation at all. Instead, as Chevron itself acknowledged, they are engaged in the “formulation of policy.” Statutory ambiguity thus becomes an implicit delegation of rulemaking authority, and that authority is used not to find the best meaning of the text, but to formulate legally binding rules to fill in gaps based on policy judgments made by the agency rather than Congress.

And this is just the thing: SOPRA would bring far-more-valuable predictability and longevity to our legal system by imposing a system of accountability on the agencies. Currently, commissions often believe they can act with impunity (until the next election at least), and even the intended constraints of the APA frequently won’t do much to tether their whims to statute or law if they’re intent on deviating. Having a known constraint (or, at least, a reliable process by which judicial constraint may be imposed) on their behavior will make them think twice about exactly how legally and economically sound proposed rules and other actions are.

The administrative state isn’t going away, even if SOPRA were passed; it will continue to be the source of the majority of the rules under which our economy operates. We have long believed that a benefit of our judicial system is its consistency and relative lack of politicization. If this is a benefit for interpreting laws when agencies aren’t involved, it should also be a benefit when they are involved. Particularly as more and more law emanates from agencies rather than Congress, the oversight of largely neutral judicial arbiters is an essential check on the administrative apparatus’ “accumulation of all powers.”

The interest of judges tends to include a respect for the development of precedent that yields consistent and transparent rules for all future litigants and, more broadly, for economic actors and consumers making decisions in the shadow of the law. This is markedly distinct from agencies which, more often than not, promote the particular, shifting, and often-narrow political sentiments of the day.

Whether a Republican- or a Democrat-appointed district judge reviews an agency action, that judge will be bound (more or less) by the precedent that came before, regardless of the judge’s individual political preferences. Contrast this with the FCC’s decision to reclassify broadband as a Title II service, for example, where previously it had been committed to the idea that broadband was an information service, subject to an entirely different — and far less onerous — regulatory regime. Of course, the next FCC Chairman may feel differently, and nothing would stop another regulatory shift back to the pre-OIO status quo. Perhaps more troublingly, the enormous discretion afforded by courts under current standards of review would permit the agency to endlessly tweak its rules — forbearing from some regulations but not others, un-forbearing, re-interpreting, etc. — with precious few judicial standards available to bring certainty to the rules or to ensure their fealty to the statute or to the sound economics that is supposed to undergird administrative decisionmaking.

SOPRA, or a bill like it, would have required the Commission to actually be accountable for its historical regulations, and would force it to undergo at least rudimentary economic analysis to justify its actions. This form of accountability can only be to the good.

The genius of our system is its (potential) respect for the rule of law. This is an issue that both sides of the aisle should be able to get behind: minority status is always just one election cycle away. We should all hope to see SOPRA — or some bill like it — gain traction, rooted in long-overdue reflection on just how comfortable we are as a polity with a bureaucratic system increasingly driven by unaccountable discretion.

As regulatory review of the merger between Aetna and Humana hits the homestretch, merger critics have become increasingly vocal in their opposition to the deal. This is particularly true of a subset of healthcare providers concerned about losing bargaining power over insurers.

Fortunately for consumers, the merger appears to be well on its way to approval. California recently became the 16th of 20 state insurance commissions that will eventually review the merger to approve it. The U.S. Department of Justice is currently reviewing the merger and may issue its determination as early as July.

Only Missouri has issued a preliminary opinion that the merger might lead to competitive harm. But Missouri is almost certain to remain an outlier, and its analysis simply doesn’t hold up to scrutiny.

The Missouri opinion echoed the Missouri Hospital Association’s (MHA) concerns about the effect of the merger on Medicare Advantage (MA) plans. It’s important to remember, however, that hospital associations like the MHA are not consumer advocacy groups. They are trade organizations whose primary function is to protect the interests of their member hospitals.

In fact, the American Hospital Association (AHA) has mounted continuous opposition to the deal. This is itself a good indication that the merger will benefit consumers, in part by reducing hospital reimbursement costs under MA plans.

More generally, critics have argued that history proves that health insurance mergers lead to higher premiums, without any countervailing benefits. Merger opponents place great stock in a study by economist Leemore Dafny and co-authors that purports to show that insurance mergers have historically led to seven percent higher premiums.

But that study, which looked at a pre-Affordable Care Act (ACA) deal and assessed its effects only on premiums for traditional employer-provided plans, has little relevance today.

The Dafny study first performed a straightforward statistical analysis of overall changes in concentration (that is, the number of insurers in a given market) and price, and concluded that “there is no significant association between concentration levels and premium growth.” Critics never mention this finding.

The study’s secondary, more speculative analysis took the observed effects of a single merger — the 1999 merger between Prudential and Aetna — and extrapolated them to all changes in concentration and price over an eight-year period. It concluded that, on average, seven percent of the cumulative increase in premium prices between 1998 and 2006 was the result of a reduction in the number of insurers.

But what critics fail to mention is that when the authors looked at the actual consequences of the 1999 Prudential/Aetna merger, they found effects lasting only two years — and an average price increase of only one half of one percent. And these negligible effects were restricted to premiums paid under plans purchased by large employers, a critical limitation of the study’s relevance to today’s proposed mergers.

Moreover, as the study notes in passing, over the same eight-year period, average premium prices increased in total by 54 percent. Yet the study offers no insights into what was driving the vast bulk of premium price increases — or whether those factors are still present today.  

Few sectors of the economy have changed more radically in the past few decades than healthcare has. While extrapolated effects drawn from 17-year-old data may grab headlines, they really don’t tell us much of anything about the likely effects of a particular merger today.

Indeed, the ACA and current trends in healthcare policy have dramatically altered the way health insurance markets work. Among other things, the advent of new technologies and the move to “value-based” care are redefining the relationship between insurers and healthcare providers. Nowhere is this more evident than in the Medicare and Medicare Advantage market at the heart of the Aetna/Humana merger.

In an effort to stop the merger on antitrust grounds, critics claim that Medicare and MA are distinct products, in distinct markets. But it is simply incorrect to claim that Medicare Advantage and traditional Medicare aren’t “genuine alternatives.”

In fact, as the Office of Insurance Regulation in Florida — a bellwether state for healthcare policy — concluded in approving the merger: “Medicare Advantage, the private market product, competes directly with Traditional Medicare.”

Consumers who search for plans at Medicare.gov are presented with a direct comparison between traditional Medicare and available MA plans. And the evidence suggests that they regularly switch between the two. Today, almost a third of eligible Medicare recipients choose MA plans, and the majority of current MA enrollees switched to MA from traditional Medicare.

True, Medicare and MA plans are not identical. But for antitrust purposes, substitutes need not be perfect to exert pricing discipline on each other. Take HMOs and PPOs, for example. No one disputes that they are substitutes, and that prices for one constrain prices for the other. But as anyone who has considered switching between an HMO and a PPO knows, price is not the only variable that influences consumers’ decisions.

The same is true for MA and traditional Medicare. For many consumers, Medicare’s standard benefits, more-expensive supplemental benefits, and wider range of provider options present a viable alternative to MA’s lower-cost expanded benefits and narrower, managed provider network.

The move away from a traditional fee-for-service model changes how insurers do business. It requires larger investments in technology, better tracking of preventive care and health outcomes, and more-holistic supervision of patient care by insurers. Arguably, all of this may be accomplished most efficiently by larger insurers with more resources and a greater ability to work with larger, more integrated providers.

This is exactly why many hospitals, which continue to profit from traditional, fee-for-service systems, are opposed to a merger that promises to expand these value-based plans. Significantly, healthcare providers like Encompass Medical Group, which have done the most to transition their services to the value-based care model, have offered letters of support for the merger.

Regardless of their rhetoric — whether about market definition or historic precedent — the most vocal merger critics are opposed to the deal for a very simple reason: They stand to lose money if the merger is approved. That may be a good reason for some hospitals to wish the merger would go away, but it is a terrible reason to actually stop it.

[This post was first published on June 27, 2016 in The Hill as “Don’t believe the critics, Aetna-Humana merger a good deal for consumers“]

Yesterday the Heritage Foundation published a Legal Memorandum in which I explain the need to reform U.S. Food and Drug Administration (FDA) regulation in order to promote path-breaking biopharmaceutical innovation.  Highlights of this Legal Memorandum are set forth below.

In recent decades, U.S. and foreign biopharmaceutical companies (makers of drugs that are based on chemical compounds or biological materials, such as vaccines) and medical device manufacturers have been responsible for many cures and advances in treatment that have benefited patients’ lives.  New cancer treatments, medical devices, and other medical discoveries are being made at a rapid pace.

The biopharmaceutical industry is also a major generator of American economic growth and a high-technology leader.  The U.S. biopharmaceutical sector directly employs over 810,000 workers, supports 3.4 million American jobs across the country, contributed almost one-fourth of all domestic research and development (R&D) funded by U.S. businesses in 2013—more than any other single sector—and contributes roughly $790 billion a year to the American economy, according to one study.   American biopharmaceutical firms collaborate with hospitals, universities, and research institutions around the country to provide clinical trials and treatments and to create new jobs.  Their products also boost workplace productivity by treating medical conditions, thereby reducing absenteeism and disability leave.

Properly tailored and limited regulation of biopharmaceutical products and medical devices helps to promote public safety, but FDA regulations as currently designed hinder and slow the innovation process and retard the diffusion of medical improvements.  Specifically, research indicates that current regulatory norms and the delays they engender unnecessarily bloat costs, discourage research and development, slow the pace of health improvements for millions of Americans, and harm the American economy.  These factors should be kept in mind by Congress and the Administration as they study how best to reform (and, where appropriate, eliminate) FDA regulation of drugs and medical devices.  (One particular reform that appears to be unequivocally beneficial and thus worthy of immediate consideration is the prohibition of any FDA restrictions on truthful speech concerning off-label drug uses—speech that benefits consumers and enjoys First Amendment protection.)  Reducing the burdens imposed on inventors by the FDA would allow more drugs to get to the market more quickly so that patients could pursue new and potentially lifesaving treatments.

Thanks to Geoff for the introduction. I look forward to posting a few things over the summer.

I’d like to begin by discussing Geoff’s post on the pending legislative proposals designed to combat strategic abuse of drug safety regulations to prevent generic competition. Specifically, I’d like to address the economic incentive structure that is in effect in this highly regulated market.

Like many others, I first noticed the abuse of drug safety regulations to prevent competition when Turing Pharmaceuticals—then led by now-infamous CEO Martin Shkreli—acquired the manufacturing rights for the anti-parasitic drug Daraprim and raised the price of the drug by over 5,000%. The result was a drug that cost $750 per tablet. Daraprim (pyrimethamine) is used to combat malaria and Toxoplasma gondii infections in immune-compromised patients, especially those with HIV. The World Health Organization includes Daraprim on its “List of Essential Medicines” as a medicine important to basic health systems. After the huge price hike, the drug was effectively out of reach for many insurance plans and uninsured patients who needed it for the six-to-eight-week course of treatment for Toxoplasma gondii infections.

It’s not unusual for drugs to sell at huge multiples above their manufacturing cost. Indeed, a primary purpose of patent law is to allow drug companies to earn sufficient profits to engage in the expensive and risky business of developing new drugs. But Daraprim was first sold in 1953 and has thus been off patent for decades. With no intellectual property protection, Daraprim should, in theory, now be available from generic drug manufacturers for only a little above cost. Indeed, this is what we see in the rest of the world, where Daraprim sells at very low prices. The per-tablet price is 3 rupees (US$0.04) in India, R$0.07 (US$0.02) in Brazil, US$0.18 in Australia, and US$0.66 in the UK.
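The scale of the gap is easier to see as simple arithmetic. A minimal sketch (helper names are hypothetical; the pre-hike price shown is merely what an increase of exactly 5,000% to $750 would imply, not a figure reported above):

```python
# Illustrative arithmetic only, using the per-tablet prices quoted above.

US_PRICE = 750.00  # post-hike U.S. price per tablet

def price_after_increase(old_price: float, pct_increase: float) -> float:
    """A price increase of p percent multiplies the old price by (1 + p/100)."""
    return old_price * (1 + pct_increase / 100)

def implied_old_price(new_price: float, pct_increase: float) -> float:
    """Invert the increase to recover the pre-hike price."""
    return new_price / (1 + pct_increase / 100)

# A hike of exactly 5,000% to $750 implies a pre-hike price near $14.70.
print(round(implied_old_price(US_PRICE, 5000), 2))  # 14.71

# Markup of the post-hike U.S. price over the per-tablet prices abroad.
foreign_prices = {"India": 0.04, "Brazil": 0.02, "Australia": 0.18, "UK": 0.66}
for country, price in foreign_prices.items():
    print(f"{country}: {US_PRICE / price:,.0f}x")
```

Even against the highest foreign price quoted (US$0.66 in the UK), the post-hike U.S. price is more than a thousandfold markup.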

So what gives in the U.S.? Or rather, what does not give? What in our system of drug distribution has gotten stuck and is preventing generic competition from swooping in to compete down the high price of off-patent drugs like Daraprim? The answer is not market failure, but rather regulatory failure, as Geoff noted in his post. While generics would love to enter a market where a drug is currently selling for high profits, they cannot do so without getting FDA approval for their generic version of the drug at issue. To get approval, a generic simply has to file an Abbreviated New Drug Application (“ANDA”) that shows that its drug is equivalent to the branded drug with which it wants to compete. There’s no need for the generic to repeat the safety and efficacy tests that the brand manufacturer originally conducted. To test for equivalence, the generic needs samples of the brand drug. Without those samples, the generic cannot meet its burden of showing equivalence. This is where the strategic use of regulation can come into play.

Geoff’s post explains the potential abuse of Risk Evaluation and Mitigation Strategies (“REMS”). REMS are put in place to require certain safety steps (like testing a woman for pregnancy before prescribing a drug that can cause birth defects) or to restrict the distribution channels for dangerous or addictive drugs. As Geoff points out, there is evidence that a few brand name manufacturers have engaged in bad-faith refusals to provide samples using the excuse of REMS or restricted distribution programs to (1) deny requests for samples, (2) prevent generic manufacturers from buying samples from resellers, and (3) deny generics whose drugs have won approval access to the REMS system that is required for generics to distribute their drugs. Once the FDA has certified that a generic manufacturer can safely handle the drug at issue, there is no legitimate basis for the owners of brand name drugs to deny samples to the generic maker. Expressed worries about liability from entering joint REMS programs with generics also ring hollow, for the most part, and would be ameliorated by the pending legislation.

It’s important to note that this pricing situation is unique to drugs because of the regulatory framework surrounding drug manufacture and distribution. If a manufacturer of, say, an off-patent vacuum cleaner wants to prevent competitors from copying its vacuum cleaner design, it is unlikely to be successful. Even if the original manufacturer refuses to sell any vacuum cleaners to a competitor, and instructs its retailers not to sell either, this will be very difficult to monitor and enforce. Moreover, because of an unrestricted resale market, a competitor would inevitably be able to obtain samples of the vacuum cleaner it wishes to copy. Only patent law can successfully protect against the copying of a product sold to the general public, and when the patent expires, so too will the ability to prevent copying.

Drugs are different. The only way a consumer can resell prescription drugs is by breaking the law. Pills bought from an illegal secondary market would be useless to generics for purposes of FDA approval anyway, because the chain of custody would not exist to prove that the samples are the real thing. This means generics need to get samples from the authorized manufacturer or distribution company. When a drug is subject to a REMS-required restricted distribution program, it is even more difficult, if not impossible, for a generic maker to get samples of the drugs for which it wants to make generic versions. Restricted distribution programs, which are used for dangerous or addictive drugs, by design very tightly control the chain of distribution so that the drugs go only to patients with proper prescriptions from authorized doctors.

A troubling trend has arisen recently in which drug owners put their branded drugs into restricted distribution programs not because of any FDA REMS requirement, but instead as a method to prevent generics from obtaining samples and making generic versions of the drugs. This is the strategy that Turing used before it raised prices over 5,000% on Daraprim. And Turing isn’t the only company to use this strategy, though others have perhaps been less conspicuous. For instance, after acquiring the rights to two off-patent, life-saving heart drugs in 2015, Valeant Pharmaceuticals placed them into restricted distribution programs and raised their prices by 212% and 525%, respectively. Others have followed suit.

A key component of the strategy to profit from hiking prices on off-patent drugs while avoiding competition from generics is to select drugs that do not currently have generic competitors. Sometimes this is because a drug has recently come off patent, and sometimes it is because the drug is for a small patient population, and thus generics haven’t bothered to enter the market given that brand name manufacturers generally drop their prices to close to cost after the drug comes off patent. But with the strategic control of samples and refusals to allow generics to enter REMS programs, the (often new) owners of the brand name drugs seek to prevent the generic competition that we count on to make products cheap and plentiful once their patent protection expires.

Most brand name drug makers do not engage in withholding samples from generics and abusing restricted distribution and REMS programs. But the few that do cost patients and insurers dearly for important medicines that should be much cheaper once they go off patent. More troubling still is the recent strategy of taking drugs that have been off patent and cheap for years, and abusing the regulatory regime to raise prices and block competition. This growing trend of abusing restricted distribution and REMS to facilitate rent extraction from drug purchasers needs to be corrected.

Two bills addressing this issue are pending in Congress. Both bills (1) require drug companies to provide samples to generics after the FDA has certified the generic, (2) require drug companies to enter into shared REMS programs with generics, (3) allow generics to set up their own REMS compliant systems, and (4) exempt drug companies from liability for sharing products and REMS-compliant systems with generic companies in accordance with the steps set out in the bills. When it comes to remedies, however, the Senate version is significantly better. The penalties provided in the House bill are both vague and overly broad. The bill provides for treble damages and costs against the drug company “of the kind described in section 4(a) of the Clayton Act.” Not only is the application of the Clayton Act unclear in the context of the heavily regulated market for drugs (see Trinko), but treble damages may over-deter reasonably restrictive behavior by drug companies when it comes to distributing dangerous drugs.

The remedies in the Senate version are carefully crafted to deter rent-seeking behavior without overly deterring reasonable behavior. The remedial scheme is particularly good because it punishes most heavily those companies that attempt to make exorbitant profits on drugs by denying generic entry. The Senate version provides as a remedy for unreasonable delay that the plaintiff shall be awarded attorneys’ fees, costs, and the defending drug company’s profits on the drug at issue during the period of unreasonable delay. This means that a brand-name drug company that sells an old drug at a low price and delays sharing only out of honest concern about the safety standards of a particular generic company will not face terribly high damages if it is found unreasonable. On the other hand, a company that sends the price of an off-patent drug soaring and then attempts to block generic entry will know that it can lose all of its rent-seeking profits, plus the cost of the victorious generic company’s attorneys’ fees. This vastly reduces the incentive for the company owning the brand-name drug to raise prices and keep competitors out. It likewise greatly increases the incentive of a generic company to enter the market and—if it is unreasonably blocked—to file a civil action that would transfer the excess profits to the generic. This provides a rather elegant fix to the regulatory gaming that has become an increasing problem in this area. The balancing of interests and incentives in the Senate bill should leave many members of Congress comfortable supporting it.

Brand drug manufacturers are no strangers to antitrust accusations when it comes to their complicated relationship with generic competitors — most obviously with respect to reverse payment settlements. But the massive and massively complex regulatory scheme under which drugs are regulated has provided other opportunities for regulatory legerdemain with potentially anticompetitive effect, as well.

In particular, some FTC Commissioners have raised concerns that brand drug companies have been taking advantage of an FDA drug safety program — the Risk Evaluation and Mitigation Strategies program, or “REMS” — to delay or prevent generic entry.

Drugs subject to a REMS restricted distribution program are difficult to obtain through market channels and not otherwise readily available, even for would-be generic manufacturers that need samples in order to perform the tests required to receive FDA approval to market their products. REMS allows (requires, in fact) brand manufacturers to restrict the distribution of certain drugs that present safety or abuse risks, creating an opportunity for branded drug manufacturers to take advantage of imprecise regulatory requirements by inappropriately limiting access by generic manufacturers.

The FTC has not (yet) brought an enforcement action, but it has opened several investigations, and filed an amicus brief in a private-party litigation. Generic drug companies have filed several antitrust claims against branded drug companies and raised concerns with the FDA.

The problem, however, is that even if these companies are using REMS to delay generics, such a practice makes for a terrible antitrust case. Not only does the existence of a regulatory scheme arguably set Trinko squarely in the way of a successful antitrust case, but the sort of refusal to deal claims at issue here (as in Trinko) are rightly difficult to win because, as the DOJ’s Section 2 Report notes, “there likely are few circumstances where forced sharing would help consumers in the long run.”

But just because there isn’t a viable antitrust case doesn’t mean there isn’t still a competition problem. In this case, however, it’s a problem of regulatory failure: companies rationally take advantage of poorly written federal laws and regulations in order to tilt the market to their own advantage. It’s no less problematic for the market, but its solution is much more straightforward, if politically more difficult.

Thus it’s heartening to see that Senator Mike Lee (R-UT), along with three of his colleagues (Patrick Leahy (D-VT), Chuck Grassley (R-IA), and Amy Klobuchar (D-MN)), has proposed a novel but efficient way to correct these bureaucracy-generated distortions in the pharmaceutical market without resorting to the “blunt instrument” of antitrust law. As the bill notes:

While the antitrust laws may address actions by license holders who impede the prompt negotiation and development on commercially reasonable terms of a single, shared system of elements to assure safe use, a more tailored legal pathway would help ensure that license holders negotiate such agreements in good faith and in a timely manner, facilitating competition in the marketplace for drugs and biological products.

The legislative solution put forward by the Creating and Restoring Equal Access to Equivalent Samples (CREATES) Act of 2016 targets the right culprit: the poor regulatory drafting that permits possibly anticompetitive conduct to take place. Moreover, the bill refrains from creating a per se rule, instead implementing several features that should still enable brand manufacturers to legitimately restrict access to drug samples when appropriate.

In essence, Senator Lee’s bill introduces a third party (in this case, the Secretary of Health and Human Services) who is capable of determining whether an eligible generic manufacturer is able to comply with REMS restrictions — thus bypassing any bias on the part of the brand manufacturer. Where the Secretary determines that a generic firm meets the REMS requirements, the bill also creates a narrow cause of action for this narrow class of plaintiffs, allowing suits against certain brand manufacturers who — despite the prohibition on using REMS to delay generics — nevertheless misuse the process to delay competitive entry.

Background on REMS

The REMS program was introduced as part of the Food and Drug Administration Amendments Act of 2007 (FDAAA). Following the withdrawal of Vioxx, an arthritis pain reliever, from the market because of a post-approval linkage of the drug to heart attacks, the FDA was under considerable fire, and there was a serious risk that fewer and fewer net beneficial drugs would be approved. The REMS program was introduced by Congress as a mechanism to ensure that society could reap the benefits from particularly risky drugs and biologics — rather than the FDA preventing them from entering the market at all. It accomplishes this by ensuring (among other things) that brands and generics adopt appropriate safety protocols for distribution and use of drugs — particularly when a drug has the potential to cause serious side effects, or has an unusually high abuse profile.

The FDA-determined REMS protocols can range from the simple (e.g., requiring a medication guide or a package insert about potential risks) to the more burdensome (including restrictions on a drug’s sale and distribution, or what the FDA calls “Elements to Assure Safe Use” (“ETASU”)). Most relevant here, the REMS process seems to allow brands considerable leeway to determine whether generic manufacturers are compliant or able to comply with ETASUs. Given this discretion, it is no surprise that brand manufacturers may be tempted to block competition by citing “safety concerns.”

Although the FDA specifically forbids the use of REMS to block lower-cost, generic alternatives from entering the market (of course), almost immediately following the law’s enactment, certain less-scrupulous branded pharmaceutical companies began using REMS for just that purpose (also, of course).

REMS abuse

To enter pharmaceutical markets that no longer have any underlying IP protections, manufacturers must submit to the FDA an Abbreviated New Drug Application (ANDA) for a generic, or an Abbreviated Biologic License Application (ABLA) for a biosimilar, of the brand drug. The purpose is to prove to the FDA that the competing product is as safe and effective as the branded reference product. In order to perform the testing sufficient to prove efficacy and safety, generic and biosimilar drug manufacturers must acquire a sample (many samples, in fact) of the reference product they are trying to replicate.

For the narrow class of dangerous or highly abused drugs, generic manufacturers are forced to comply with any REMS restrictions placed upon the brand manufacturer — even when the terms require the brand manufacturer to tightly control the distribution of its product.

And therein lies the problem. Because the brand manufacturer controls access to its products, it can refuse to provide the needed samples, using REMS as an excuse. In certain cases, of course, a brand manufacturer may be justified in refusing to distribute samples of its product; some would-be generic manufacturers may well not meet the requisite standards for safety and security.

But in practice it turns out that most of the (known) examples of brands refusing to provide samples happen across the board — they preclude essentially all generic competition, not just the few firms that might have insufficient safeguards. It’s extremely difficult to justify such refusals on the basis of a generic manufacturer’s suitability when all would-be generic competitors are denied access, including well-established, high-quality manufacturers.

But, for a few brand manufacturers, at least, that seems to be how the REMS program is implemented. Thus, for example, Jon Haas, director of patient access at Turing Pharmaceuticals, referred to the practice of denying generics samples this way:

Most likely I would block that purchase… We spent a lot of money for this drug. We would like to do our best to avoid generic competition. It’s inevitable. They seem to figure out a way [to make generics], no matter what. But I’m certainly not going to make it easier for them. We’re spending millions and millions in research to find a better Daraprim, if you will.

As currently drafted, the REMS program gives branded manufacturers the ability to limit competition by stringing along negotiations for product samples for months, if not years. Although access to a few samples for testing is seemingly such a small, trivial thing, the ability to block this access allows a brand manufacturer to limit competition (at least from bioequivalent and generic drugs; obviously competition between competing branded drugs remains).

And even if a generic competitor manages to get ahold of samples, the law creates an additional wrinkle by imposing a requirement that brand and generic manufacturers enter into a single shared REMS plan for bioequivalent and generic drugs. But negotiating the particulars of the single, shared program can drag on for years. Consequently, even when a generic manufacturer has received the necessary samples, performed the requisite testing, and been approved by the FDA to sell a competing drug, it still may effectively be barred from entering the marketplace because of REMS.

The number of drugs covered by REMS is small: fewer than 100 in a universe of several thousand FDA-approved drugs. And the number of these alleged to be subject to abuse is much smaller still. Nonetheless, abuse of this regulation by certain brand manufacturers has likely limited competition and increased prices.

Antitrust is not the answer

Whether the complex, underlying regulatory scheme that allocates the relative rights of brands and generics — and that balances safety against access — gets the balance correct or not is an open question, to be sure. But given the regulatory framework we have and the perceived need for some sort of safety controls around access to samples and for shared REMS plans, the law should at least work to do what it intends, without creating an opportunity for harmful manipulation. Yet it appears that the ambiguity of the current law has allowed some brand manufacturers to exploit these safety protections to limit competition.

As noted above, some are quite keen to make this an antitrust issue. But, as also noted, antitrust is a poor fit for handling such abuses.

First, antitrust law has an uneasy relationship with other regulatory schemes. Not least because of Trinko, it is a tough case to make that brand manufacturers are violating antitrust laws when they rely upon legal obligations under a safety program that is essentially designed to limit generic entry on safety grounds. The issue is all the more properly removed from the realm of antitrust enforcement given that the problem is actually one of regulatory failure, not market failure.

Second, antitrust law doesn’t impose a duty to deal with rivals except in very limited circumstances. In Trinko, for example, the Court rejected the invitation to extend the duty to deal beyond situations in which an existing, voluntary economic relationship had been terminated. Almost by definition, that is not the case here, where the alleged refusal to deal is what prevents the generic from entering the market in the first place. The logic behind Trinko (and a host of other cases that have limited competitors’ obligations to assist their rivals) was to restrict duty-to-deal cases to those rare circumstances in which a refusal reliably leads to long-term competitive harm — not where it amounts to a perfectly legitimate effort to compete without giving rivals a leg up.

But antitrust is such a powerful tool and such a flexible “catch-all” regulation, that there are always efforts to thwart reasonable limits on its use. As several of us at TOTM have written about at length in the past, former FTC Commissioner Rosch and former FTC Chairman Leibowitz were vocal proponents of using Section 5 of the FTC Act to circumvent sensible judicial limits on making out and winning antitrust claims, arguing that the limits were meant only for private plaintiffs — not (implicitly infallible) government enforcers. Although no one at the FTC has yet (publicly) suggested bringing a REMS case as a standalone Section 5 case, such a case would be consistent with the sorts of theories that animated past standalone Section 5 cases.

Again, this approach serves as an end-run around the reasonable judicial constraints that evolved as a result of judges actually examining the facts of individual cases over time, and is a misguided way of dealing with what is, after all, fundamentally a regulatory design problem.

The CREATES Act

Senator Lee’s bill, on the other hand, aims to solve the problem with a more straightforward approach by improving the existing regulatory mechanism and by adding a limited judicial remedy to incentivize compliance under the amended regulatory scheme. In summary:

  • The bill creates a cause of action for a refusal to deal only where plaintiff can prove, by a preponderance of the evidence, that certain well-defined conditions are met.
  • For samples, if a drug is not covered by a REMS, or if the generic manufacturer is specifically authorized, then the generic can sue if it doesn’t receive sufficient quantities of samples on commercially reasonable terms. This is not a per se offense subject to outsized antitrust damages. Instead, the remedy is a limited injunction ensuring the sale of samples on commercially reasonable terms, reasonable attorneys’ fees, and a monetary fine limited to revenue earned from sale of the drug during the refusal period.
  • The bill also gives a brand manufacturer an affirmative defense if it can prove by a preponderance of the evidence that, regardless of its own refusal to supply them, samples were nevertheless available elsewhere on commercially reasonable terms, or where the brand manufacturer is unable to supply the samples because it does not actually produce or market the drug.
  • In order to deal with the REMS process problems, the bill creates similar rights with similar limitations when the license holders and generics cannot come to an agreement about a shared REMS on commercially reasonable terms within 120 days of first contact by an eligible developer.
  • The bill also explicitly limits brand manufacturers’ liability for claims “arising out of the failure of an [eligible generic manufacturer] to follow adequate safeguards,” thus removing one of the (perfectly legitimate) objections to the bill pressed by brand manufacturers.

The primary remedy is limited, injunctive relief to end the delay. And brands are protected from frivolous litigation by an affirmative defense under which they need only show that the product is available for purchase on reasonable terms elsewhere. Damages are similarly limited and are awarded only if a court finds that the brand manufacturer lacked a legitimate business justification for its conduct (which, under the drug safety regime, means essentially a reasonable belief that its own REMS plan would be violated by dealing with the generic entrant). And monetary damages do not include punitive damages.

Finally, the proposed bill completely avoids the question of whether antitrust laws are applicable, leaving that possibility open to determination by courts — as is appropriate. Moreover, by establishing even more clearly the comprehensive regulatory regime governing potential generic entrants’ access to dangerous drugs, the bill would, given the holding in Trinko, probably make application of antitrust laws here considerably less likely.

Ultimately Senator Lee’s bill is a well-thought-out and targeted fix to an imperfect regulation that seems to be facilitating anticompetitive conduct by a few bad actors. It does so without trampling on the courts’ well-established antitrust jurisprudence, and without imposing excessive cost or risk on the majority of brand manufacturers that behave perfectly appropriately under the law.

In a recent Truth on the Market blog post, I summarized the discussion at a May 17 Heritage Foundation program featuring Shanker Singham of the Legatum Institute (a market-oriented London think tank) and me.  The program highlighted the problem of anticompetitive government-imposed laws and regulations, which Singham and I refer to as anticompetitive market distortions, or ACMDs:

Trade freedom has increased around the world, according to the 2016 Heritage Foundation Index of Economic Freedom, due to a decrease in trade barriers, particularly tariffs. Despite this progress, many economies struggle with another burden that is increasing costs for families and businesses. Non-tariff barriers and overregulation, in the form of government-imposed laws and regulations, continue to stifle innovation and competition. These onerous and excessive regulations, backed by the power of the state, benefit the well-connected and act as an additional layer of government favoritism. Meanwhile, individuals are strapped with higher costs and fewer options.  

Singham and three colleagues (Srinivasa Rangan of Babson College, Molly Kiniry of the Competere Group, and Robert Bradley of Northeastern University) have now produced an impressive study of the economic impact of ACMDs in India (which has one of the world’s most highly regulated economies), released on May 31 by the Legatum Institute.  The study applies to India’s ACMDs the authors’ “Productivity Simulator,” which aggregates economic data to gauge the theoretical economic growth potential of an economy if ACMDs are eliminated.  Focusing on the full gamut of ACMDs affecting a nation in the areas of property rights, domestic competition, and international competition, the Simulator estimates the potential productivity gains for individual economies as measured in changes to GDP per capita, assuming all ACMDs are eliminated.  Using those productivity estimates, the Simulator can then be employed to derive resultant nation-specific estimates of potential GDP increases from “perfect” regulatory reform.  Although a perfect “regulatory nirvana” may not be achievable in the “real world,” Productivity Simulator estimates have the virtue of spotlighting the magnitude of forgone welfare due to regulatory excesses.  Even assuming a degree of imperfection in Productivity Simulator estimates applied to India, the results are startling, as the Executive Summary to the May 31 report reveals:

 “The [May 31] Study makes the following key findings:

» If India eliminated all its distortions it would be the fifth largest economy in the world, and in GDP per capita terms, it would rise from being ranked 169th to being ranked 67th.

» If India eliminated all its distortions it would generate over 200 million new jobs, and reduce absolute poverty to zero.

» If India improved its insolvency rules, opened up to foreign investment in certain areas and better protected intellectual property rules, the number of people living on less than $2 per day would be reduced from 770 million to 627 million.

» Simply optimising its regulatory environment with regard to the World Bank Doing Business Index would lead to a productivity gain of only 0.07%.

» Improving its insolvency rules, opening up to foreign investment in certain areas and better protecting intellectual property (L2) could lead to a productivity gain of 148%.

» Fully optimising its distortions could lead to a productivity gain of 1875% of which the Indian economy would capture almost 700%.”

I look forward to further application of the Productivity Simulator to other economies.  Research reports of this sort, in conjunction with studies carried out by the World Bank and the Organisation for Economic Co-operation and Development that employ other methodologies, build a strong case for sweeping market-oriented regulatory reform, in foreign countries and in the United States.

Last week the International Center for Law & Economics filed comments on the FCC’s Broadband Privacy NPRM. ICLE was joined in its comments by the following scholars of law & economics:

  • Babette E. Boliek, Associate Professor of Law, Pepperdine School of Law
  • Adam Candeub, Professor of Law, Michigan State University College of Law
  • Justin (Gus) Hurwitz, Assistant Professor of Law, Nebraska College of Law
  • Daniel Lyons, Associate Professor, Boston College Law School
  • Geoffrey A. Manne, Executive Director, International Center for Law & Economics
  • Paul H. Rubin, Samuel Candler Dobbs Professor of Economics, Emory University Department of Economics

As we note in our comments:

The Commission’s NPRM would shoehorn the business models of a subset of new economy firms into a regime modeled on thirty-year-old CPNI rules designed to address fundamentally different concerns about a fundamentally different market. The Commission’s hurried and poorly supported NPRM demonstrates little understanding of the data markets it proposes to regulate and the position of ISPs within that market. And, what’s more, the resulting proposed rules diverge from analogous rules the Commission purports to emulate. Without mounting a convincing case for treating ISPs differently than the other data firms with which they do or could compete, the rules contemplate disparate regulatory treatment that would likely harm competition and innovation without evident corresponding benefit to consumers.

In particular, we focus on the FCC’s failure to justify treating ISPs differently than other competitors, and its failure to justify more stringent treatment for ISPs in general:

In short, the Commission has not made a convincing case that discrimination between ISPs and edge providers makes sense for the industry or for consumer welfare. The overwhelming body of evidence upon which other regulators have relied in addressing privacy concerns urges against a hard opt-in approach. That same evidence and analysis supports a consistent regulatory approach for all competitors, and nowhere advocates for a differential approach for ISPs when they are participating in the broader informatics and advertising markets.

With respect to the proposed opt-in regime, the NPRM ignores the weight of economic evidence on opt-in rules and fails to justify the specific rules it prescribes. Of most significance is the imposition of this opt-in requirement for the sharing of non-sensitive data.

On net, opt-in regimes may tend to favor the status quo, and to maintain or grow the position of a few dominant firms. Opt-in imposes additional costs on consumers and hurts competition — and it may not offer any additional protections over opt-out. In the absence of any meaningful evidence or rigorous economic analysis to the contrary, the Commission should eschew imposing such a potentially harmful regime on broadband and data markets.

Finally, we explain that, although the NPRM purports to embrace a regulatory regime consistent with the current “federal privacy regime,” and particularly the FTC’s approach to privacy regulation, it actually does no such thing — a sentiment echoed by a host of current and former FTC staff and commissioners, including the Bureau of Consumer Protection staff, Commissioner Maureen Ohlhausen, former Chairman Jon Leibowitz, former Commissioner Josh Wright, and former BCP Director Howard Beales.

Our full comments are available here.

Earlier this week I testified before the U.S. House Subcommittee on Commerce, Manufacturing, and Trade regarding several proposed FTC reform bills.

You can find my written testimony here. That testimony was drawn from a 100-page report, authored by Berin Szoka and me, entitled “The Federal Trade Commission: Restoring Congressional Oversight of the Second National Legislature — An Analysis of Proposed Legislation.” In the report we assess 9 of the 17 proposed reform bills in great detail, and offer a host of suggested amendments or additional reform proposals that, we believe, would help make the FTC more accountable to the courts. As I discuss in my oral remarks, that judicial oversight was part of the original plan for the Commission, and an essential part of ensuring that its immense discretion is effectively directed toward protecting consumers as technology and society evolve around it.

The report is “Report 2.0” of the FTC: Technology & Reform Project, which was convened by the International Center for Law & Economics and TechFreedom with an inaugural conference in 2013. Report 1.0 lays out some background on the FTC and its institutional dynamics, identifies the areas of possible reform at the agency, and suggests the key questions/issues each of them raises.

The text of my oral remarks follows, or, if you prefer, you can watch them here:

Chairman Burgess, Ranking Member Schakowsky, and Members of the Subcommittee, thank you for the opportunity to appear before you today.

I’m Executive Director of the International Center for Law & Economics, a non-profit, non-partisan research center. I’m a former law professor, I used to work at Microsoft, and I had what a colleague once called the most illustrious FTC career ever — because, at approximately 2 weeks, it was probably the shortest.

I’m not typically one to advocate active engagement by Congress in anything (no offense). But the FTC is different.

Despite Congressional reforms, the FTC remains the closest thing we have to a second national legislature. Its jurisdiction covers nearly every company in America. Section 5, at its heart, runs just 20 words — leaving the Commission enormous discretion to make policy decisions that are essentially legislative.

The courts were supposed to keep the agency on course. But they haven’t. As former Chairman Muris has written, “the agency has… traditionally been beyond judicial control.”

So it’s up to Congress to monitor the FTC’s processes, and tweak them when the FTC goes off course, which is inevitable.

This isn’t a condemnation of the FTC’s dedicated staff. Rather, this one-way ratchet of ever-expanding discretion is simply the nature of the beast.

Yet too many people lionize the status quo. They see any effort to change the agency from the outside as an affront. It’s as if Congress were struck by a bolt of lightning in 1914 and the Perfect Platonic Agency sprang forth.

But in the real world, an agency with massive scope and discretion needs oversight — and feedback on how its legal doctrines evolve.

So why don’t the courts play that role? Companies essentially always settle with the FTC because of its exceptionally broad investigatory powers, its relatively weak standard for voting out complaints, and the fact that those decisions effectively aren’t reviewable in federal court.

Then there’s the fact that the FTC sits in judgment of its own prosecutions. So even if a company doesn’t settle and actually wins before the ALJ, FTC staff still wins 100% of the time before the full Commission.

Able though FTC staffers are, this can’t be from sheer skill alone.

Whether by design or by neglect, the FTC has become, as Chairman Muris again described it, “a largely unconstrained agency.”

Please understand: I say this out of love. To paraphrase Churchill, the FTC is the “worst form of regulatory agency — except for all the others.”

Eventually Congress had to course-correct the agency — to fix the disconnect and to apply its own pressure to refocus Section 5 doctrine.

So a heavily Democratic Congress pressured the Commission to adopt the Unfairness Policy Statement in 1980. The FTC promised to restrain itself by balancing the perceived benefits of its unfairness actions against the costs, and not acting when injury is insignificant or consumers could have reasonably avoided injury on their own. It is, inherently, an economic calculus.

But while the Commission pays lip service to the test, you’d be hard-pressed to identify how (or whether) it’s implemented it in practice. Meanwhile, the agency has essentially nullified the “materiality” requirement that it volunteered in its 1983 Deception Policy Statement.

Worst of all, Congress failed to anticipate that the FTC would resume exercising its vast discretion through what it now proudly calls its “common law of consent decrees” in data security cases.

Combined with a flurry of recommended best practices in reports that function as quasi-rulemakings, these settlements have enabled the FTC to circumvent both Congressional rulemaking reforms and meaningful oversight by the courts.

The FTC’s data security settlements aren’t an evolving common law. They’re a static statement of “reasonable” practices, repeated about 55 times over the past 14 years. At this point, it’s reasonable to assume that they apply to all circumstances — much like a rule (which is, more or less, the opposite of the common law).

Congressman Pompeo’s SHIELD Act would help curtail this practice, especially if amended to include consent orders and reports. It would also help focus the Commission on the actual elements of the Unfairness Policy Statement — which should be codified through Congressman Mullins’ SURE Act.

Significantly, only one data security case has actually come before an Article III court. The FTC trumpets Wyndham as an out-and-out win. But it wasn’t. In fact, the court agreed with Wyndham on the crucial point that prior consent orders were of little use in trying to understand the requirements of Section 5.

More recently, the FTC suffered another rebuke. While it won its product design suit against Amazon, the court rejected the Commission’s “fencing in” request to permanently hover over the company and micromanage practices that Amazon had already ended.

As the FTC grapples with such cutting-edge legal issues, it’s drifting away from the balance it promised Congress.

But Congress can’t fix these problems simply by telling the FTC to take its bedrock policy statements more seriously. Instead it must regularly reassess the process that’s allowed the FTC to avoid meaningful judicial scrutiny. The FTC requires significant course correction if its model is to move closer to a true “common law.”