In recent years much ink has been spilled on the problem of online privacy breaches, involving the unauthorized use of personal information transmitted over the Internet.  Internet privacy concerns are warranted.  According to a 2016 National Telecommunications and Information Administration survey of Internet-using households, 19 percent of such households (representing nearly 19 million households) reported that they had been affected by an online security breach, identity theft, or similar malicious activity during the 12 months prior to the July 2015 survey.  Security breaches appear to be more common among the most intensive Internet-using households – 31 percent of those using at least five different types of online devices suffered such breaches.  Security breach statistics, of course, do not directly measure the consumer welfare losses attributable to the unauthorized use of personal data that consumers supply to Internet service providers and to the websites which they visit.
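
(A quick back-of-the-envelope check of the survey arithmetic, taking both reported figures at face value: if 19 percent of Internet-using households is nearly 19 million households, the implied survey population is

\[
\frac{19{,}000{,}000}{0.19} = 100{,}000{,}000,
\]

or roughly 100 million Internet-using households.)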

What is the correct overall approach government should take in dealing with Internet privacy problems?  In addressing this question, it is important to focus substantial attention on the effects of online privacy regulation on economic welfare.  In particular, policies should aim at addressing Internet privacy problems in a manner that does not unduly harm the private sector or deny opportunities to consumers who are not being harmed.  The U.S. Federal Trade Commission (FTC), the federal government’s primary consumer protection agency, has been the principal federal regulator of online privacy practices.  Very recently, however, the U.S. Federal Communications Commission (FCC) has asserted the authority to regulate the privacy practices of broadband Internet service providers, and is proposing an extremely burdensome approach to such regulation that would, if implemented, have harmful economic consequences.

In March 2016, FTC Commissioner Maureen Ohlhausen succinctly summarized the FTC’s general approach to online privacy-related enforcement under Section 5 of the FTC Act, which proscribes unfair or deceptive acts or practices:

[U]nfairness establishes a baseline prohibition on practices that the overwhelming majority of consumers would never knowingly approve. Above that baseline, consumers remain free to find providers that match their preferences, and our deception authority governs those arrangements. . . .  The FTC’s case-by-case enforcement of our unfairness authority shapes our baseline privacy practices.  Like the common law, this incremental approach has proven both relatively predictable and adaptable as new technologies and business models emerge.

In November 2015, Professor (and former FTC Commissioner) Joshua Wright argued that the FTC’s approach is insufficiently attuned to economic analysis, in particular, to the “tradeoffs between the value to consumers and society of the free flow and exchange of data and the creation of new products and services on the one hand, against the value lost by consumers from any associated reduction in privacy.”  Nevertheless, on balance, FTC enforcement in this area generally is restrained and somewhat attentive to cost-benefit considerations.  (This undoubtedly reflects the fact (see my Heritage Legal Memorandum, here) that the statutory definition of “unfairness” in Section 5(n) of the FTC Act embodies cost-benefit analysis, and that the FTC’s Policy Statement on Deception requires detriment to consumers acting reasonably in the circumstances.)  In other words, federal enforcement policy with respect to online privacy, although it could be improved, is generally in good shape.
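
Since the Section 5(n) unfairness test recurs throughout this discussion, it may help to set out its structure schematically (a rough formalization of the statute’s three conjuncts, not the statutory text): the Commission may declare a practice P unfair only if

\[
\text{SubstantialInjury}(P) \;\wedge\; \neg\,\text{ReasonablyAvoidable}(P) \;\wedge\; \big(\text{Injury}(P) > \text{CountervailingBenefits}(P)\big),
\]

where the countervailing benefits may accrue to consumers or to competition. The third conjunct is the cost-benefit balancing referred to above.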

Or it was in good shape.  Unfortunately, on April 1, 2016, the FCC decided to inject itself into “privacy space” by issuing a Notice of Proposed Rulemaking entitled “Protecting the Privacy of Customers of Broadband and Other Telecommunications Services.”  This “Privacy NPRM” sets forth detailed rules that, if adopted, would impose onerous privacy obligations on “Broadband Internet Access Service” (BIAS) Providers, the firms that provide the cables, wires, and telecommunications equipment through which Internet traffic flows – primarily cable (Comcast, for example) and telephone (Verizon, for example) companies.  The Privacy NPRM builds on the FCC’s 2015 Open Internet Order, which reclassified BIAS provision as a “common carrier” service, thereby precluding the FTC from regulating BIAS Providers’ privacy practices (since the FTC is barred by law from regulating common carriers, under 15 U.S.C. § 45(a)(2)).  Put simply, the NPRM would require BIAS Providers “to obtain express consent in advance of practically every use of a customer[’s] data”, without regard to the effects of such a requirement on economic welfare.  All other purveyors of Internet services, however – in particular, the large numbers of “edge providers” that generate Internet content and services (Google, Amazon, and Facebook, for example) – would be exempt from the new FCC regulatory requirements.  In short, the Privacy NPRM would establish a two-tier privacy regulatory system, with BIAS Providers subject to tight FCC privacy rules, while all other Internet service firms would remain subject to more nuanced, case-by-case, effects-based evaluation of their privacy practices by the FTC.  This disparate regulatory approach is peculiar (if not wholly illogical), since edge providers in general have greater access than BIAS Providers to consumers’ non-public information, and thus may appear to pose a greater threat to consumers’ interest in privacy.

The FCC’s proposal to regulate BIAS Providers’ privacy practices represents bad law and horrible economic policy.  First, it undermines the rule of law by extending the FCC’s authority beyond its congressional mandate.  It does this by basing its regulation of a huge universe of information exchanges on Section 222 of the Telecommunications Act of 1996, a narrow provision aimed at a very limited type of customer-related data obtained in connection with old-style voice telephony transmissions.  This is egregious regulatory overreach.  Second, if implemented, it would harm consumers, producers, and the overall economy by imposing a set of sweeping opt-in consent requirements on BIAS Providers, without regard to private sector burdens or actual consumer welfare (see here); by reducing BIAS Provider revenues and thereby dampening investment that is vital to the continued growth of and innovation in Internet-related industries (see here); by reducing the ability of BIAS Providers to exert welfare-enhancing competitive pressure on Internet edge providers (see here); and by raising consumer prices for Internet services and denying consumers discount programs they desire (see here).

What’s worse, the FCC’s proposed involvement in online privacy oversight comes at a time of increased Internet privacy regulation by foreign countries, much of it highly intrusive and lacking in economic sophistication.  A particularly noteworthy effort to clarify cross-national legal standards is the Privacy Shield, a 2016 United States – European Union agreement that establishes regulatory online privacy protection norms, backed by FTC enforcement, that U.S. companies handling personal data transferred from Europe may choose to accept on a voluntary basis.  (If they do not accede to the Shield, they may be subject to uncertain and heavy-handed European sanctions.)  The Privacy NPRM, if implemented, will create an additional concern for BIAS Providers, since they will have to evaluate the implications of new FCC regulation (rather than simply rely on FTC oversight) in deciding whether to opt in to the Shield’s standards and obligations.

In sum, the FCC’s Privacy NPRM would, if implemented, harm consumers and producers, slow innovation, and offend the rule of law.  This prompts four recommendations.

  • The FCC should withdraw the NPRM and leave it to the FTC to oversee all online privacy practices, under its Section 5 unfairness and deception authority. The adoption of the Privacy Shield, which designates the FTC as the responsible American privacy oversight agency, further strengthens the case against FCC regulation in this area. 
  • In overseeing online privacy practices, the FTC should employ a very light touch that stresses economic analysis and cost-benefit considerations. Moreover, it should avoid requiring that rigid privacy policy conditions be kept in place for long periods of time through consent decree conditions, in order to allow changing market conditions to shape and improve business privacy policies. 
  • Moreover, the FTC should borrow a page from former FTC Commissioner Joshua Wright by implementing an “economic approach” to privacy. Under such an approach:  

    ◦ FTC economists would help make the Commission a privacy “thought leader” by developing a rigorous academic research agenda on the economics of privacy, featuring the economic evaluation of industry sectors and practices;
    ◦ the FTC would bear the burden of proof of showing that violations of a company’s privacy policy are material to consumer decision-making;
    ◦ FTC economists would report independently to the FTC about proposed privacy-related enforcement initiatives; and
    ◦ the FTC would publish the views of its Bureau of Economics in all privacy-related consent decrees that are placed on the public record.

  • The FTC should encourage the European Commission and other foreign regulators to take into account the economics of privacy in developing their privacy regulatory policies. In so doing, it should emphasize that innovation is harmed, the beneficial development of the Internet is slowed, and consumer welfare and rights are undermined through highly prescriptive regulation in this area (well-intentioned though it may be).  Relatedly, the FTC and other U.S. Government negotiators should argue against adoption of a “one-size-fits-all” global privacy regulation framework.   Such a global framework could harmfully freeze into place over-regulatory policies and preclude beneficial experimentation in alternative forms of “lighter-touch” regulation and enforcement. 

While no panacea, these recommendations would help deter (or, at least, constrain) the economically harmful government micromanagement of businesses’ privacy practices, in the United States and abroad.

In the wake of the DC Circuit’s recent decision upholding the FCC’s Open Internet Order (OIO), separation of powers issues should be at the forefront of everyone’s mind. In reaching its decision, the court relied upon Chevron to justify its extreme deference to the FCC. It held, for instance, that

Our job is to ensure that an agency has acted “within the limits of [Congress’s] delegation” of authority… and that its action is not “arbitrary, capricious, an abuse of discretion, or otherwise not in accordance with law.”… Critically, we do not “inquire as to whether the agency’s decision is wise as a policy matter; indeed, we are forbidden from substituting our judgment for that of the agency.”… Nor do we inquire whether “some or many economists would disapprove of the [agency’s] approach” because “we do not sit as a panel of referees on a professional economics journal, but as a panel of generalist judges obliged to defer to a reasonable judgment by an agency acting pursuant to congressionally delegated authority.”

The DC Circuit’s decision takes a broad view of Chevron deference and, in so doing, ignores or dismisses some of the limits placed upon the doctrine by cases like Michigan v. EPA and UARG v. EPA (though Judge Williams does bring up UARG in dissent).

Whatever one thinks of the validity of the FCC’s approach to regulating the Internet, there is no question that it has, at best, a weak statutory foothold. Without prejudging the merits of the OIO, or the question of deference to agencies that find “[regulatory] elephants in [statutory] mouseholes,”  such broad claims of authority, based on such limited statutory language, should give one pause. That the court upheld the FCC’s interpretation of the Act without expressing reservations, suggesting any limits, or admitting of any concrete basis for challenging the agency’s authority beyond circular references to “abuse of discretion” is deeply troubling.

Separation of powers is a fundamental feature of our democracy, and one that has undoubtedly contributed to the longevity of our system of self-governance. Not least among the important features of separation of powers is the ability of courts to review the lawfulness of legislation and executive action.

The founders presciently realized the dangers of allowing one part of the government to centralize power in itself. In Federalist 47, James Madison observed that

The accumulation of all powers, legislative, executive, and judiciary, in the same hands, whether of one, a few, or many, and whether hereditary, self-appointed, or elective, may justly be pronounced the very definition of tyranny. Were the federal Constitution, therefore, really chargeable with the accumulation of power, or with a mixture of powers, having a dangerous tendency to such an accumulation, no further arguments would be necessary to inspire a universal reprobation of the system. (emphasis added)

The modern administrative apparatus has become the sort of governmental body that the founders feared and that we have somehow grown to accept. The FCC is not alone in this: any member of the alphabet soup that constitutes our administrative state, whether “independent” or otherwise, is typically vested with great, essentially unreviewable authority over the economy and our daily lives.

As Justice Thomas so aptly put it in his must-read concurrence in Michigan v. EPA:

Perhaps there is some unique historical justification for deferring to federal agencies, but these cases reveal how paltry an effort we have made to understand it or to confine ourselves to its boundaries. Although we hold today that EPA exceeded even the extremely permissive limits on agency power set by our precedents, we should be alarmed that it felt sufficiently emboldened by those precedents to make the bid for deference that it did here. As in other areas of our jurisprudence concerning administrative agencies, we seem to be straying further and further from the Constitution without so much as pausing to ask why. We should stop to consider that document before blithely giving the force of law to any other agency “interpretations” of federal statutes.

Administrative discretion is fantastic — until it isn’t. If your party is the one in power, unlimited discretion gives your side the ability to run down a wish list, checking off controversial items that could never make it past a deliberative body like Congress. That same discretion, however, becomes a nightmare under extreme deference as political opponents, newly in power, roll back preferred policies. In the end, regulation tends toward the extremes, on both sides, and ultimately consumers and companies pay the price in the form of excessive regulatory burdens and extreme uncertainty.

In theory, it is (or should be) left to the courts to rein in agency overreach. Unfortunately, courts have been relatively unwilling to push back on the administrative state, leaving the task up to Congress. And Congress, too, has, over the years, found too much it likes in agency power to seriously take on the structural problems that give agencies effectively free rein. At least, until recently.

In March of this year, Representative Ratcliffe (R-TX) proposed HR 4768: the Separation of Powers Restoration Act (“SOPRA”). Arguably this is the first real effort to fix the underlying problem since the 1995 “Comprehensive Regulatory Reform Act” (although, it should be noted, SOPRA is far more targeted than was the CRRA). Under SOPRA, 5 U.S.C. § 706 — the enacted portion of the APA that deals with judicial review of agency actions — would be amended to read as follows (the key new language requires courts to decide “de novo all relevant questions of law, including the interpretation of constitutional and statutory provisions, and rules made by agencies”):

(a) To the extent necessary to decision and when presented, the reviewing court shall determine the meaning or applicability of the terms of an agency action and decide de novo all relevant questions of law, including the interpretation of constitutional and statutory provisions, and rules made by agencies. Notwithstanding any other provision of law, this subsection shall apply in any action for judicial review of agency action authorized under any provision of law. No law may exempt any such civil action from the application of this section except by specific reference to this section.

These changes to the scope of review would operate as a much-needed check on the unlimited discretion that agencies currently enjoy. They would give courts the ability to review “de novo all relevant questions of law,” which includes agencies’ interpretations of their own rules.

The status quo has created a negative feedback cycle. The Chevron doctrine, as it has played out, gives both federal agencies and courts outsized incentives to essentially disregard Congress’s intended meaning for particular statutes. Today an agency can write rules and make decisions safe in the knowledge that Chevron will likely insulate it from any truly serious probing by a district court with regard to how well the agency’s action actually matches up with congressional intent or with even rudimentary cost-benefit analysis.

Defenders of the administrative state may balk at changing this state of affairs, of course. But defending an institution that is almost entirely immune from judicial and legal review seems to be a particularly hard row to hoe.

Public Knowledge, for instance, claims that

Judicial deference to agency decision-making is critical in instances where Congress’ intent is unclear because it balances each branch of government’s appropriate role and acknowledges the realities of the modern regulatory state.

To quote Justice Scalia, an unfortunate champion of the Chevron doctrine, this is “pure applesauce.”

The very core of the problem that SOPRA addresses is that the administrative state is not a proper branch of government — it’s a shadow system of quasi-legislation and quasi-legal review. Congress can be chastened by popular vote. Judges who abuse discretion can be overturned (or impeached). The administrative agencies, on the other hand, are insulated through doctrines like Chevron and Auer, and their personnel are subject more or less to the political whims of the executive branch.

Even agencies directly under the control of the executive branch  — let alone independent agencies — become petrified caricatures of their original design as layers of bureaucratic rule and custom accrue over years, eventually turning the organization into an entity that serves, more or less, to perpetuate its own existence.

Other supporters of the status quo actually identify the unreviewable see-saw of agency discretion as a feature, not a bug:

Even people who agree with the anti-government premises of the sponsors [of SOPRA] should recognize that a change in the APA standard of review is an inapt tool for advancing that agenda. It is shortsighted, because it ignores the fact that, over time, political administrations change. Sometimes the administration in office will generally be in favor of deregulation, and in these circumstances a more intrusive standard of judicial review would tend to undercut that administration’s policies just as surely as it may tend to undercut a more progressive administration’s policies when the latter holds power. The APA applies equally to affirmative regulation and to deregulation.

But presidential elections — far from justifying this extreme administrative deference — actually make the case for trimming the sails of the administrative state. Presidential elections have become, in important part, contests over how candidates will wield the immense regulatory power vested in the executive branch.

Thus, for example, as part of his presidential bid, Jeb Bush indicated he would use the EPA to roll back every policy that Obama had put into place. One of Donald Trump’s allies suggested that Trump “should turn off [CNN’s] FCC license” in order to punish the news network. And VP hopeful Elizabeth Warren has suggested using the FDIC to limit the growth of financial institutions, and using the FCC and FTC to tilt the markets to make it easier for small companies to get an advantage over the “big guys.”

Far from being neutral, technocratic administrators of complex social and economic matters, administrative agencies have become one more political weapon of majority parties as they make the case for how their candidates will use all the power at their disposal — and more — to work their will.

As Justice Thomas, again, noted in Michigan v. EPA:

In reality…, agencies “interpreting” ambiguous statutes typically are not engaged in acts of interpretation at all. Instead, as Chevron itself acknowledged, they are engaged in the “formulation of policy.” Statutory ambiguity thus becomes an implicit delegation of rulemaking authority, and that authority is used not to find the best meaning of the text, but to formulate legally binding rules to fill in gaps based on policy judgments made by the agency rather than Congress.

And this is just the thing: SOPRA would bring far more valuable predictability and longevity to our legal system by imposing a system of accountability on the agencies. Currently, commissions often believe they can act with impunity (until the next election, at least), and even the intended constraints of the APA frequently won’t do much to tether their whims to statute or law if they’re intent on deviating. Having a known constraint (or, at least, a reliable process by which judicial constraint may be imposed) on their behavior will make them think twice about exactly how legally and economically sound proposed rules and other actions are.

The administrative state isn’t going away, even if SOPRA were passed; it will continue to be the source of the majority of the rules under which our economy operates. We have long believed that a benefit of our judicial system is its consistency and relative lack of politicization. If this is a benefit for interpreting laws when agencies aren’t involved, it should also be a benefit when they are involved. Particularly as more and more law emanates from agencies rather than Congress, the oversight of largely neutral judicial arbiters is an essential check on the administrative apparatus’ “accumulation of all powers.”

The interests of judges tend to include a respect for the development of precedent that yields consistent and transparent rules for all future litigants and, more broadly, for economic actors and consumers making decisions in the shadow of the law. This is markedly distinct from agencies, which, more often than not, promote the particular, shifting, and often-narrow political sentiments of the day.

Whether a Republican- or a Democrat-appointed district judge reviews an agency action, that judge will be bound (more or less) by the precedent that came before, regardless of the judge’s individual political preferences. Contrast this with the FCC’s decision to reclassify broadband as a Title II service, for example, where previously it had been committed to the idea that broadband was an information service, subject to an entirely different — and far less onerous — regulatory regime.  Of course, the next FCC Chairman may feel differently, and nothing would stop another regulatory shift back to the pre-OIO status quo. Perhaps more troublingly, the enormous discretion afforded by courts under current standards of review would permit the agency to endlessly tweak its rules — forbearing from some regulations but not others, un-forbearing, re-interpreting, etc., with precious few judicial standards available to bring certainty to the rules or to ensure their fealty to the statute or the sound economics that is supposed to undergird administrative decisionmaking.

SOPRA, or a bill like it, would have required the Commission to actually be accountable for its historical regulations, and would have forced it to undergo at least rudimentary economic analysis to justify its actions. This form of accountability can only be to the good.

The genius of our system is its (potential) respect for the rule of law. This is an issue that both sides of the aisle should be able to get behind: minority status is always just one election cycle away. We should all hope to see SOPRA — or some bill like it — gain traction, rooted in long-overdue reflection on just how comfortable we are as a polity with a bureaucratic system increasingly driven by unaccountable discretion.

Thanks to Geoff for the introduction. I look forward to posting a few things over the summer.

I’d like to begin by discussing Geoff’s post on the pending legislative proposals designed to combat strategic abuse of drug safety regulations to prevent generic competition. Specifically, I’d like to address the economic incentive structure that is in effect in this highly regulated market.

Like many others, I first noticed the abuse of drug safety regulations to prevent competition when Turing Pharmaceuticals—then led by now-infamous CEO Martin Shkreli—acquired the manufacturing rights for the anti-parasitic drug Daraprim, and raised the price of the drug by over 5,000%. The result was a drug that cost $750 per tablet. Daraprim (pyrimethamine) is used to combat malaria and Toxoplasma gondii infections in immune-compromised patients, especially those with HIV. The World Health Organization includes Daraprim on its “List of Essential Medicines” as a medicine important to basic health systems. After the huge price hike, the drug was effectively out of reach for many insurance plans and uninsured patients who needed it for the six- to eight-week course of treatment for Toxoplasma gondii infections.

It’s not unusual for drugs to sell at huge multiples above their manufacturing cost. Indeed, a primary purpose of patent law is to allow drug companies to earn sufficient profits to engage in the expensive and risky business of developing new drugs. But Daraprim was first sold in 1953 and thus has been off patent for decades. With no intellectual property protection, Daraprim should, theoretically, now be available from generic drug manufacturers for only a little above cost. Indeed, this is what we see in the rest of the world. Daraprim is available all over the world for very cheap prices. The per-tablet price is 3 rupees (US$0.04) in India, R$0.07 (US$0.02) in Brazil, US$0.18 in Australia, and US$0.66 in the UK.
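
For perspective on the “over 5,000%” figure: the pre-hike U.S. price was widely reported to be $13.50 per tablet (a number not given above, so treat it as an outside reference point). Taking that figure as given,

\[
\frac{\$750 - \$13.50}{\$13.50} \times 100\% \approx 5{,}456\%,
\]

and the post-hike U.S. price of $750 is more than 1,000 times the UK price of US$0.66.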

So what gives in the U.S.? Or rather, what does not give? What in our system of drug distribution has gotten stuck and is preventing generic competition from swooping in to compete down the high price of off-patent drugs like Daraprim? The answer is not market failure, but rather regulatory failure, as Geoff noted in his post. While generics would love to enter a market where a drug is currently selling for high profits, they cannot do so without getting FDA approval for their generic version of the drug at issue. To get approval, a generic simply has to file an Abbreviated New Drug Application (“ANDA”) that shows that its drug is equivalent to the branded drug with which it wants to compete. There’s no need for the generic to repeat the safety and efficacy tests that the brand manufacturer originally conducted. To test for equivalence, the generic needs samples of the brand drug. Without those samples, the generic cannot meet its burden of showing equivalence. This is where the strategic use of regulation can come into play.

Geoff’s post explains the potential abuse of Risk Evaluation and Mitigation Strategies (“REMS”). REMS are put in place to require certain safety steps (like testing a woman for pregnancy before prescribing a drug that can cause birth defects) or to restrict the distribution channels for dangerous or addictive drugs. As Geoff points out, there is evidence that a few brand name manufacturers have engaged in bad-faith refusals to provide samples using the excuse of REMS or restricted distribution programs to (1) deny requests for samples, (2) prevent generic manufacturers from buying samples from resellers, and (3) deny approved generics access to the REMS systems required for them to distribute their drugs. Once the FDA has certified that a generic manufacturer can safely handle the drug at issue, there is no legitimate basis for the owners of brand name drugs to deny samples to the generic maker. Expressed worries about liability from entering joint REMS programs with generics also ring hollow, for the most part, and would be ameliorated by the pending legislation.

It’s important to note that this pricing situation is unique to drugs because of the regulatory framework surrounding drug manufacture and distribution. If a manufacturer of, say, an off-patent vacuum cleaner wants to prevent competitors from copying its vacuum cleaner design, it is unlikely to be successful. Even if the original manufacturer refuses to sell any vacuum cleaners to a competitor, and instructs its retailers not to sell either, this will be very difficult to monitor and enforce. Moreover, because of an unrestricted resale market, a competitor would inevitably be able to obtain samples of the vacuum cleaner it wishes to copy. Only patent law can successfully protect against the copying of a product sold to the general public, and when the patent expires, so too will the ability to prevent copying.

Drugs are different. The only way a consumer can resell prescription drugs is by breaking the law. Pills bought from an illegal secondary market would be useless to generics for purposes of FDA approval anyway, because the chain of custody would not exist to prove that the samples are the real thing. This means generics need to get samples from the authorized manufacturer or distribution company. When a drug is subject to a REMS-required restricted distribution program, it is even more difficult, if not impossible, for a generic maker to get samples of the drugs for which it wants to make generic versions. Restricted distribution programs, which are used for dangerous or addictive drugs, by design very tightly control the chain of distribution so that the drugs go only to patients with proper prescriptions from authorized doctors.

A troubling trend has arisen recently in which drug owners put their branded drugs into restricted distribution programs not because of any FDA REMS requirement, but instead as a method to prevent generics from obtaining samples and making generic versions of the drugs. This is the strategy that Turing used before it raised prices over 5,000% on Daraprim. And Turing isn’t the only company to use this strategy. It is being emulated by others, although perhaps not so conspicuously. For instance, Valeant Pharmaceuticals (which in 2014, with the help of the hedge fund Pershing Square, had attempted a hostile takeover of Allergan Pharmaceuticals) acquired the rights to two off-patent, life-saving heart drugs in 2015, adopted restricted distribution programs, and raised their prices by 212% and 525%, respectively. Others have followed suit.

A key component of the strategy to profit from hiking prices on off-patent drugs while avoiding competition from generics is to select drugs that do not currently have generic competitors. Sometimes this is because a drug has recently come off patent, and sometimes it is because the drug is for a small patient population, and thus generics haven’t bothered to enter the market given that brand name manufacturers generally drop their prices to close to cost after the drug comes off patent. But with the strategic control of samples and refusals to allow generics to enter REMS programs, the (often new) owners of the brand name drugs seek to prevent the generic competition that we count on to make products cheap and plentiful once their patent protection expires.

Most brand name drug makers do not engage in withholding samples from generics and abusing restricted distribution and REMS programs. But the few that do cost patients and insurers dearly for important medicines that should be much cheaper once they go off patent. More troubling still is the recent strategy of taking drugs that have been off patent and cheap for years, and abusing the regulatory regime to raise prices and block competition. This growing trend of abusing restricted distribution and REMS to facilitate rent extraction from drug purchasers needs to be corrected.

Two bills addressing this issue are pending in Congress. Both bills (1) require drug companies to provide samples to generics after the FDA has certified the generic, (2) require drug companies to enter into shared REMS programs with generics, (3) allow generics to set up their own REMS compliant systems, and (4) exempt drug companies from liability for sharing products and REMS-compliant systems with generic companies in accordance with the steps set out in the bills. When it comes to remedies, however, the Senate version is significantly better. The penalties provided in the House bill are both vague and overly broad. The bill provides for treble damages and costs against the drug company “of the kind described in section 4(a) of the Clayton Act.” Not only is the application of the Clayton Act unclear in the context of the heavily regulated market for drugs (see Trinko), but treble damages may over-deter reasonably restrictive behavior by drug companies when it comes to distributing dangerous drugs.

The remedies in the Senate version are very well crafted to deter rent-seeking behavior while not overly deterring reasonable behavior. The remedial scheme is particularly good because it punishes most heavily those companies that attempt to make exorbitant profits on drugs by denying generic entry. The Senate version provides as a remedy for unreasonable delay that the plaintiff shall be awarded attorneys’ fees, costs, and the defending drug company’s profits on the drug at issue during the time of the unreasonable delay. This means that a brand name drug company that sells an old drug for a low price and delays sharing only because of honest concern about the safety standards of a particular generic company will not face terribly high damages if it is found unreasonable. On the other hand, a company that sends the price of an off-patent drug soaring and then attempts to block generic entry will know that it can lose all of its rent-seeking profits, plus the cost of the victorious generic company’s attorneys’ fees. This vastly reduces the incentive for the company owning the brand name drug to raise prices and keep competitors out. It likewise greatly increases the incentive of a generic company to enter the market and–if it is unreasonably blocked–to file a civil action the result of which would be to transfer the excess profits to the generic. This provides a rather elegant fix to the regulatory gaming in this area that has become an increasing problem. The balancing of interests and incentives in the Senate bill should leave many congresspersons feeling comfortable supporting the bill.
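
A stylized sketch of that incentive structure (my own illustration, not language from the bill): let π be the brand’s monthly profits on the drug while generic entry is blocked, and T the number of months of a delay ultimately found unreasonable. Because the remedy awards the generic the brand’s profits during the delay plus fees and costs, the brand’s net payoff from stonewalling is

\[
\pi T - \big(\pi T + \text{fees} + \text{costs}\big) = -(\text{fees} + \text{costs}) < 0,
\]

so a price-hiking brand expects to lose money on any delay a court deems unreasonable, while a brand with a genuine safety objection (low π and a good chance of being found reasonable) faces only modest exposure.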

Brand drug manufacturers are no strangers to antitrust accusations when it comes to their complicated relationship with generic competitors — most obviously with respect to reverse payment settlements. But the massive and massively complex regulatory scheme under which drugs are regulated has provided other opportunities for regulatory legerdemain with potentially anticompetitive effect, as well.

In particular, some FTC Commissioners have raised concerns that brand drug companies have been taking advantage of an FDA drug safety program — the Risk Evaluation and Mitigation Strategies program, or “REMS” — to delay or prevent generic entry.

Drugs subject to a REMS restricted distribution program are difficult to obtain through market channels and not otherwise readily available, even for would-be generic manufacturers that need samples in order to perform the tests required to receive FDA approval to market their products. REMS allows (requires, in fact) brand manufacturers to restrict the distribution of certain drugs that present safety or abuse risks, creating an opportunity for branded drug manufacturers to take advantage of imprecise regulatory requirements by inappropriately limiting access by generic manufacturers.

The FTC has not (yet) brought an enforcement action, but it has opened several investigations, and filed an amicus brief in a private-party litigation. Generic drug companies have filed several antitrust claims against branded drug companies and raised concerns with the FDA.

The problem, however, is that even if these companies are using REMS to delay generics, such a practice makes for a terrible antitrust case. Not only does the existence of a regulatory scheme arguably set Trinko squarely in the way of a successful antitrust case, but the sort of refusal to deal claims at issue here (as in Trinko) are rightly difficult to win because, as the DOJ’s Section 2 Report notes, “there likely are few circumstances where forced sharing would help consumers in the long run.”

But just because there isn’t a viable antitrust case doesn’t mean there isn’t still a competition problem. In this case, though, it’s a problem of regulatory failure. Companies rationally take advantage of poorly written federal laws and regulations in order to tilt the market to their own advantage. It’s no less problematic for the market, but its solution is much more straightforward, if politically more difficult.

Thus it’s heartening to see that Senator Mike Lee (R-UT), along with three of his colleagues (Patrick Leahy (D-VT), Chuck Grassley (R-IA), and Amy Klobuchar (D-MN)), has proposed a novel but efficient way to correct these bureaucracy-generated distortions in the pharmaceutical market without resorting to the “blunt instrument” of antitrust law. As the bill notes:

While the antitrust laws may address actions by license holders who impede the prompt negotiation and development on commercially reasonable terms of a single, shared system of elements to assure safe use, a more tailored legal pathway would help ensure that license holders negotiate such agreements in good faith and in a timely manner, facilitating competition in the marketplace for drugs and biological products.

The legislative solution put forward by the Creating and Restoring Equal Access to Equivalent Samples (CREATES) Act of 2016 targets the right culprit: the poor regulatory drafting that permits possibly anticompetitive conduct to take place. Moreover, the bill refrains from creating a per se rule, instead implementing several features that should still enable brand manufacturers to legitimately restrict access to drug samples when appropriate.

In essence, Senator Lee’s bill introduces a third party (in this case, the Secretary of Health and Human Services) who is capable of determining whether an eligible generic manufacturer is able to comply with REMS restrictions — thus bypassing any bias on the part of the brand manufacturer. Where the Secretary determines that a generic firm meets the REMS requirements, the bill also creates a narrow cause of action for this narrow class of plaintiffs, allowing suits against certain brand manufacturers who — despite the prohibition on using REMS to delay generics — nevertheless misuse the process to delay competitive entry.

Background on REMS

The REMS program was introduced as part of the Food and Drug Administration Amendments Act of 2007 (FDAAA). Following the withdrawal of Vioxx, an arthritis pain reliever, from the market because of a post-approval linkage of the drug to heart attacks, the FDA was under considerable fire, and there was a serious risk that fewer and fewer net beneficial drugs would be approved. The REMS program was introduced by Congress as a mechanism to ensure that society could reap the benefits from particularly risky drugs and biologics — rather than the FDA preventing them from entering the market at all. It accomplishes this by ensuring (among other things) that brands and generics adopt appropriate safety protocols for distribution and use of drugs — particularly when a drug has the potential to cause serious side effects, or has an unusually high abuse profile.

The FDA-determined REMS protocols can range from the simple (e.g., requiring a medication guide or a package insert about potential risks) to the more burdensome (including restrictions on a drug’s sale and distribution, or what the FDA calls “Elements to Assure Safe Use” (“ETASU”)). Most relevant here, the REMS process seems to allow brands considerable leeway to determine whether generic manufacturers are compliant or able to comply with ETASUs. Given this discretion, it is no surprise that brand manufacturers may be tempted to block competition by citing “safety concerns.”

Although the FDA specifically forbids the use of REMS to block lower-cost, generic alternatives from entering the market (of course), almost immediately following the law’s enactment, certain less-scrupulous branded pharmaceutical companies began using REMS for just that purpose (also, of course).

REMS abuse

To enter into pharmaceutical markets that no longer have any underlying IP protections, manufacturers must submit to the FDA an Abbreviated New Drug Application (ANDA) for a generic, or an Abbreviated Biologic License Application (ABLA) for a biosimilar, of the brand drug. The purpose is to prove to the FDA that the competing product is as safe and effective as the branded reference product. In order to perform the testing sufficient to prove efficacy and safety, generic and biosimilar drug manufacturers must acquire a sample (many samples, in fact) of the reference product they are trying to replicate.

For the narrow class of dangerous or highly abused drugs, generic manufacturers are forced to comply with any REMS restrictions placed upon the brand manufacturer — even when the terms require the brand manufacturer to tightly control the distribution of its product.

And therein lies the problem. Because the brand manufacturer controls access to its products, it can refuse to provide the needed samples, using REMS as an excuse. In theory, it may be true in certain cases that a brand manufacturer is justified in refusing to distribute samples of its product, of course; some would-be generic manufacturers certainly may not meet the requisite standards for safety and security.

But in practice it turns out that most of the (known) examples of brands refusing to provide samples happen across the board — they preclude essentially all generic competition, not just the few firms that might have insufficient safeguards. It’s extremely difficult to justify such refusals on the basis of a generic manufacturer’s suitability when all would-be generic competitors are denied access, including well-established, high-quality manufacturers.

But, for a few brand manufacturers, at least, that seems to be how the REMS program is implemented. Thus, for example, Jon Haas, director of patient access at Turing Pharmaceuticals, referred to the practice of denying generics samples this way:

Most likely I would block that purchase… We spent a lot of money for this drug. We would like to do our best to avoid generic competition. It’s inevitable. They seem to figure out a way [to make generics], no matter what. But I’m certainly not going to make it easier for them. We’re spending millions and millions in research to find a better Daraprim, if you will.

As currently drafted, the REMS program gives branded manufacturers the ability to limit competition by stringing along negotiations for product samples for months, if not years. Although access to a few samples for testing is seemingly such a small, trivial thing, the ability to block this access allows a brand manufacturer to limit competition (at least from bioequivalent and generic drugs; obviously competition between competing branded drugs remains).

And even if a generic competitor manages to get ahold of samples, the law creates an additional wrinkle by imposing a requirement that brand and generic manufacturers enter into a single shared REMS plan for bioequivalent and generic drugs. But negotiating the particulars of the single, shared program can drag on for years. Consequently, even when a generic manufacturer has received the necessary samples, performed the requisite testing, and been approved by the FDA to sell a competing drug, it still may effectively be barred from entering the marketplace because of REMS.

The number of drugs covered by REMS is small: fewer than 100 in a universe of several thousand FDA-approved drugs. And the number of these alleged to be subject to abuse is much smaller still. Nonetheless, abuse of this regulation by certain brand manufacturers has likely limited competition and increased prices.

Antitrust is not the answer

Whether the complex, underlying regulatory scheme that allocates the relative rights of brands and generics — and that balances safety against access — gets the balance correct or not is an open question, to be sure. But given the regulatory framework we have and the perceived need for some sort of safety controls around access to samples and for shared REMS plans, the law should at least work to do what it intends, without creating an opportunity for harmful manipulation. Yet it appears that the ambiguity of the current law has allowed some brand manufacturers to exploit these safety protections to limit competition.

As noted above, some are quite keen to make this an antitrust issue. But, as also noted, antitrust is a poor fit for handling such abuses.

First, antitrust law has an uneasy relationship with other regulatory schemes. Not least because of Trinko, it is a tough case to make that brand manufacturers are violating antitrust laws when they rely upon legal obligations under a safety program that is essentially designed to limit generic entry on safety grounds. The issue is all the more properly removed from the realm of antitrust enforcement given that the problem is actually one of regulatory failure, not market failure.

Second, antitrust law doesn’t impose a duty to deal with rivals except in very limited circumstances. In Trinko, for example, the Court rejected the invitation to extend a duty to deal to situations where an existing, voluntary economic relationship wasn’t terminated. By definition this is unlikely to be the case here where the alleged refusal to deal is what prevents the generic from entering the market in the first place. The logic behind Trinko (and a host of other cases that have limited competitors’ obligations to assist their rivals) was to restrict duty to deal cases to those rare circumstances where it reliably leads to long-term competitive harm — not where it amounts to a perfectly legitimate effort to compete without giving rivals a leg-up.

But antitrust is such a powerful tool and such a flexible “catch-all” regulation, that there are always efforts to thwart reasonable limits on its use. As several of us at TOTM have written about at length in the past, former FTC Commissioner Rosch and former FTC Chairman Leibowitz were vocal proponents of using Section 5 of the FTC Act to circumvent sensible judicial limits on making out and winning antitrust claims, arguing that the limits were meant only for private plaintiffs — not (implicitly infallible) government enforcers. Although no one at the FTC has yet (publicly) suggested bringing a REMS case as a standalone Section 5 case, such a case would be consistent with the sorts of theories that animated past standalone Section 5 cases.

Again, this approach serves as an end-run around the reasonable judicial constraints that evolved as a result of judges actually examining the facts of individual cases over time, and is a misguided way of dealing with what is, after all, fundamentally a regulatory design problem.

The CREATES Act

Senator Lee’s bill, on the other hand, aims to solve the problem with a more straightforward approach by improving the existing regulatory mechanism and by adding a limited judicial remedy to incentivize compliance under the amended regulatory scheme. In summary:

  • The bill creates a cause of action for a refusal to deal only where plaintiff can prove, by a preponderance of the evidence, that certain well-defined conditions are met.
  • For samples, if a drug is not covered by a REMS, or if the generic manufacturer is specifically authorized, then the generic can sue if it doesn’t receive sufficient quantities of samples on commercially reasonable terms. This is not a per se offense subject to outsized antitrust damages. Instead, the remedy is a limited injunction ensuring the sale of samples on commercially reasonable terms, reasonable attorneys’ fees, and a monetary fine limited to revenue earned from sale of the drug during the refusal period.
  • The bill also gives a brand manufacturer an affirmative defense if it can prove by a preponderance of the evidence that, regardless of its own refusal to supply them, samples were nevertheless available elsewhere on commercially reasonable terms, or where the brand manufacturer is unable to supply the samples because it does not actually produce or market the drug.
  • In order to deal with the REMS process problems, the bill creates similar rights with similar limitations when the license holders and generics cannot come to an agreement about a shared REMS on commercially reasonable terms within 120 days of first contact by an eligible developer.
  • The bill also explicitly limits brand manufacturers’ liability for claims “arising out of the failure of an [eligible generic manufacturer] to follow adequate safeguards,” thus removing one of the (perfectly legitimate) objections to the bill pressed by brand manufacturers.

The primary remedy is limited, injunctive relief to end the delay. And brands are protected from frivolous litigation by an affirmative defense under which they need only show that the product is available for purchase on reasonable terms elsewhere. Damages are similarly limited and are awarded only if a court finds that the brand manufacturer lacked a legitimate business justification for its conduct (which, under the drug safety regime, means essentially a reasonable belief that its own REMS plan would be violated by dealing with the generic entrant). And monetary damages do not include punitive damages.

Finally, the proposed bill completely avoids the question of whether antitrust laws are applicable, leaving that possibility open to determination by courts — as is appropriate. Moreover, by establishing even more clearly the comprehensive regulatory regime governing potential generic entrants’ access to dangerous drugs, the bill would, given the holding in Trinko, probably make application of antitrust laws here considerably less likely.

Ultimately Senator Lee’s bill is a well-thought-out and targeted fix to an imperfect regulation that seems to be facilitating anticompetitive conduct by a few bad actors. It does so without trampling on the courts’ well-established antitrust jurisprudence, and without imposing excessive cost or risk on the majority of brand manufacturers that behave perfectly appropriately under the law.

Last week the International Center for Law & Economics filed comments on the FCC’s Broadband Privacy NPRM. ICLE was joined in its comments by the following scholars of law & economics:

  • Babette E. Boliek, Associate Professor of Law, Pepperdine School of Law
  • Adam Candeub, Professor of Law, Michigan State University College of Law
  • Justin (Gus) Hurwitz, Assistant Professor of Law, Nebraska College of Law
  • Daniel Lyons, Associate Professor, Boston College Law School
  • Geoffrey A. Manne, Executive Director, International Center for Law & Economics
  • Paul H. Rubin, Samuel Candler Dobbs Professor of Economics, Emory University Department of Economics

As we note in our comments:

The Commission’s NPRM would shoehorn the business models of a subset of new economy firms into a regime modeled on thirty-year-old CPNI rules designed to address fundamentally different concerns about a fundamentally different market. The Commission’s hurried and poorly supported NPRM demonstrates little understanding of the data markets it proposes to regulate and the position of ISPs within that market. And, what’s more, the resulting proposed rules diverge from analogous rules the Commission purports to emulate. Without mounting a convincing case for treating ISPs differently than the other data firms with which they do or could compete, the rules contemplate disparate regulatory treatment that would likely harm competition and innovation without evident corresponding benefit to consumers.

In particular, we focus on the FCC’s failure to justify treating ISPs differently than other competitors, and its failure to justify more stringent treatment for ISPs in general:

In short, the Commission has not made a convincing case that discrimination between ISPs and edge providers makes sense for the industry or for consumer welfare. The overwhelming body of evidence upon which other regulators have relied in addressing privacy concerns urges against a hard opt-in approach. That same evidence and analysis supports a consistent regulatory approach for all competitors, and nowhere advocates for a differential approach for ISPs when they are participating in the broader informatics and advertising markets.

With respect to the proposed opt-in regime, the NPRM ignores the weight of economic evidence on opt-in rules and fails to justify the specific rules it prescribes. Of most significance is the imposition of this opt-in requirement for the sharing of non-sensitive data.

On net opt-in regimes may tend to favor the status quo, and to maintain or grow the position of a few dominant firms. Opt-in imposes additional costs on consumers and hurts competition — and it may not offer any additional protections over opt-out. In the absence of any meaningful evidence or rigorous economic analysis to the contrary, the Commission should eschew imposing such a potentially harmful regime on broadband and data markets.

Finally, we explain that, although the NPRM purports to embrace a regulatory regime consistent with the current “federal privacy regime,” and particularly the FTC’s approach to privacy regulation, it actually does no such thing — a sentiment echoed by a host of current and former FTC staff and commissioners, including the Bureau of Consumer Protection staff, Commissioner Maureen Ohlhausen, former Chairman Jon Leibowitz, former Commissioner Josh Wright, and former BCP Director Howard Beales.

Our full comments are available here.

Earlier this week I testified before the U.S. House Subcommittee on Commerce, Manufacturing, and Trade regarding several proposed FTC reform bills.

You can find my written testimony here. That testimony was drawn from a 100-page report, authored by Berin Szoka and me, entitled “The Federal Trade Commission: Restoring Congressional Oversight of the Second National Legislature — An Analysis of Proposed Legislation.” In the report we assess 9 of the 17 proposed reform bills in great detail, and offer a host of suggested amendments or additional reform proposals that, we believe, would help make the FTC more accountable to the courts. As I discuss in my oral remarks, that judicial oversight was part of the original plan for the Commission, and an essential part of ensuring that its immense discretion is effectively directed toward protecting consumers as technology and society evolve around it.

The report is “Report 2.0” of the FTC: Technology & Reform Project, which was convened by the International Center for Law & Economics and TechFreedom with an inaugural conference in 2013. Report 1.0 lays out some background on the FTC and its institutional dynamics, identifies the areas of possible reform at the agency, and suggests the key questions/issues each of them raises.

The text of my oral remarks follows, or, if you prefer, you can watch them here:

Chairman Burgess, Ranking Member Schakowsky, and Members of the Subcommittee, thank you for the opportunity to appear before you today.

I’m Executive Director of the International Center for Law & Economics, a non-profit, non-partisan research center. I’m a former law professor, I used to work at Microsoft, and I had what a colleague once called the most illustrious FTC career ever — because, at approximately 2 weeks, it was probably the shortest.

I’m not typically one to advocate active engagement by Congress in anything (no offense). But the FTC is different.

Despite Congressional reforms, the FTC remains the closest thing we have to a second national legislature. Its jurisdiction covers nearly every company in America. Section 5, at its heart, runs just 20 words — leaving the Commission enormous discretion to make policy decisions that are essentially legislative.

The courts were supposed to keep the agency on course. But they haven’t. As former Chairman Muris has written, “the agency has… traditionally been beyond judicial control.”

So it’s up to Congress to monitor the FTC’s processes, and tweak them when the FTC goes off course, which is inevitable.

This isn’t a condemnation of the FTC’s dedicated staff. Rather, this one-way ratchet of ever-expanding discretion is simply the nature of the beast.

Yet too many people lionize the status quo. They see any effort to change the agency from the outside as an affront. It’s as if Congress was struck by a bolt of lightning in 1914 and the Perfect Platonic Agency sprang forth.

But in the real world, an agency with massive scope and discretion needs oversight — and feedback on how its legal doctrines evolve.

So why don’t the courts play that role? Companies essentially always settle with the FTC because of its exceptionally broad investigatory powers, its relatively weak standard for voting out complaints, and the fact that those decisions effectively aren’t reviewable in federal court.

Then there’s the fact that the FTC sits in judgment of its own prosecutions. So even if a company doesn’t settle and actually wins before the ALJ, FTC staff still wins 100% of the time before the full Commission.

Able though FTC staffers are, this can’t be from sheer skill alone.

Whether by design or by neglect, the FTC has become, as Chairman Muris again described it, “a largely unconstrained agency.”

Please understand: I say this out of love. To paraphrase Churchill, the FTC is the “worst form of regulatory agency — except for all the others.”

Eventually Congress had to course-correct the agency — to fix the disconnect and to apply its own pressure to refocus Section 5 doctrine.

So a heavily Democratic Congress pressured the Commission to adopt the Unfairness Policy Statement in 1980. The FTC promised to restrain itself by balancing the perceived benefits of its unfairness actions against the costs, and not acting when injury is insignificant or consumers could have reasonably avoided injury on their own. It is, inherently, an economic calculus.

But while the Commission pays lip service to the test, you’d be hard-pressed to identify how (or whether) it has implemented the test in practice. Meanwhile, the agency has essentially nullified the “materiality” requirement that it volunteered in its 1983 Deception Policy Statement.

Worst of all, Congress failed to anticipate that the FTC would resume exercising its vast discretion through what it now proudly calls its “common law of consent decrees” in data security cases.

Combined with a flurry of recommended best practices in reports that function as quasi-rulemakings, these settlements have enabled the FTC to circumvent both Congressional rulemaking reforms and meaningful oversight by the courts.

The FTC’s data security settlements aren’t an evolving common law. They’re a static statement of “reasonable” practices, repeated about 55 times over the past 14 years. At this point, it’s reasonable to assume that they apply to all circumstances — much like a rule (which is, more or less, the opposite of the common law).

Congressman Pompeo’s SHIELD Act would help curtail this practice, especially if amended to include consent orders and reports. It would also help focus the Commission on the actual elements of the Unfairness Policy Statement — which should be codified through Congressman Mullin’s SURE Act.

Significantly, only one data security case has actually come before an Article III court. The FTC trumpets Wyndham as an out-and-out win. But it wasn’t. In fact, the court agreed with Wyndham on the crucial point that prior consent orders were of little use in trying to understand the requirements of Section 5.

More recently the FTC suffered another rebuke. While it won its product design suit against Amazon, the Court rejected the Commission’s “fencing in” request to permanently hover over the company and micromanage practices that Amazon had already ended.

As the FTC grapples with such cutting-edge legal issues, it’s drifting away from the balance it promised Congress.

But Congress can’t fix these problems simply by telling the FTC to take its bedrock policy statements more seriously. Instead it must regularly reassess the process that’s allowed the FTC to avoid meaningful judicial scrutiny. The FTC requires significant course correction if its model is to move closer to a true “common law.”

[Below is an excellent essay by Devlin Hartline that was first posted at the Center for the Protection of Intellectual Property blog last week, and I’m sharing it here.]

ACKNOWLEDGING THE LIMITATIONS OF THE FTC’S “PAE” STUDY

By Devlin Hartline

The FTC’s long-awaited case study of patent assertion entities (PAEs) is expected to be released this spring. Using its subpoena power under Section 6(b), the FTC has gathered information from a handful of firms, and the study promises us a glimpse at their inner workings. But while the results may be interesting, they’ll also be too narrow to support any informed policy changes. And you don’t have to take my word for it—the FTC admits as much. In one submission to the Office of Management and Budget (OMB), which ultimately decided whether the study should move forward, the FTC acknowledges that its findings “will not be generalizable to the universe of all PAE activity.” In another submission to the OMB, the FTC recognizes that “the case study should be viewed as descriptive and probative for future studies seeking to explore the relationships between organizational form and assertion behavior.”

However, this doesn’t mean that no one will use the study to advocate for drastic changes to the patent system. Even before the study’s release, many people—including some FTC Commissioners themselves—have already jumped to conclusions when it comes to PAEs, arguing that they are a drag on innovation and competition. Yet these same people say that we need this study because there’s no good empirical data analyzing the systemic costs and benefits of PAEs. They can’t have it both ways. The uproar about PAEs is emblematic of the broader movement that advocates for the next big change to the patent system before we’ve even seen how the last one panned out. In this environment, it’s unlikely that the FTC and other critics will responsibly acknowledge that the study simply cannot give us an accurate assessment of the bigger picture.

Limitations of the FTC Study 

Many scholars have written about the study’s fundamental limitations. As statistician Fritz Scheuren points out, there are two kinds of studies: exploratory and confirmatory. An exploratory study is a starting point that asks general questions in order to generate testable hypotheses, while a confirmatory study is then used to test the validity of those hypotheses. The FTC study, with its open-ended questions to a handful of firms, is a classic exploratory study. At best, the study will generate answers that could help researchers begin to form theories and design another round of questions for further research. Scheuren notes that while the “FTC study may well be useful at generating exploratory data with respect to PAE activity,” it “is not designed to confirm supportable subject matter conclusions.”

One significant constraint with the FTC study is that the sample size is small—only twenty-five PAEs—and the control group is even smaller—a mixture of fifteen manufacturers and non-practicing entities (NPEs) in the wireless chipset industry. Scheuren reasons that there “is also the risk of non-representative sampling and potential selection bias due to the fact that the universe of PAEs is largely unknown and likely quite diverse.” And the fact that the control group comes from one narrow industry further prevents any generalization of the results. Scheuren concludes that the FTC study “may result in potentially valuable information worthy of further study,” but that it is “not designed in a way as to support public policy decisions.”

Professor Michael Risch questions the FTC’s entire approach: “If the FTC is going to the trouble of doing a study, why not get it done right the first time and a) sample a larger number of manufacturers, in b) a more diverse area of manufacturing, and c) get identical information?” He points out that the FTC won’t be well-positioned to draw conclusions because the control group is not even being asked the same questions as the PAEs. Risch concludes that “any report risks looking like so many others: a static look at an industry with no benchmark to compare it to.” Professor Kristen Osenga echoes these same sentiments and notes that “the study has been shaped in a way that will simply add fuel to the anti–‘patent troll’ fire without providing any data that would explain the best way to fix the real problems in the patent field today.”

Osenga further argues that the study is flawed since the FTC’s definition of PAEs perpetuates the myth that patent licensing firms are all the same. The reality is that many different types of businesses fall under the “PAE” umbrella, and it makes no sense to impute the actions of a small subset to the entire group when making policy recommendations. Moreover, Osenga questions the FTC’s “shortsighted viewpoint” of the potential benefits of PAEs, and she doubts how the “impact on innovation and competition” will be ascertainable given the questions being asked. Anne Layne-Farrar expresses similar doubts about the conclusions that can be drawn from the FTC study since only licensors are being surveyed. She posits that it “cannot generate a full dataset for understanding the conduct of the parties in patent license negotiation or the reasons for the failure of negotiations.”

Layne-Farrar concludes that the FTC study “can point us in fruitful directions for further inquiry and may offer context for interpreting quantitative studies of PAE litigation, but should not be used to justify any policy changes.” Consistent with the FTC’s own admissions of the study’s limitations, this is the real bottom line of what we should expect. The study will have no predictive power because it only looks at how a small sample of firms affect a few other players within the patent ecosystem. It does not quantify how that activity ultimately affects innovation and competition—the very information needed to support policy recommendations. The FTC study is not intended to produce the sort of compelling statistical data that can be extrapolated to the larger universe of firms.

FTC Commissioners Put Cart Before Horse

The FTC has a history of bias against PAEs, as demonstrated in its 2011 report that skeptically questioned the “uncertain benefits” of PAEs while assuming their “detrimental effects” in undermining innovation. That report recommended special remedy rules for PAEs, even as the FTC acknowledged the lack of objective evidence of systemic failure and the difficulty of distinguishing “patent transactions that harm innovation from those that promote it.” With its new study, the FTC concedes to the OMB that much is still not known about PAEs and that the findings will be preliminary and non-generalizable. However, this hasn’t prevented some Commissioners from putting the cart before the horse with PAEs.

In fact, the very call for the FTC to institute the PAE study started with its conclusion. In her 2013 speech suggesting the study, FTC Chairwoman Edith Ramirez recognized that “we still have only snapshots of the costs and benefits of PAE activity” and that “we will need to learn a lot more” in order “to see the full competitive picture.” While acknowledging the vast potential benefits of PAEs in rewarding invention, benefiting competition and consumers, reducing enforcement hurdles, increasing liquidity, encouraging venture capital investment, and funding R&D, she nevertheless concluded that “PAEs exploit underlying problems in the patent system to the detriment of innovation and consumers.” And despite the admitted lack of data, Ramirez stressed “the critical importance of continuing the effort on patent reform to limit the costs associated with some types of PAE activity.”

This position is duplicitous: If the costs and benefits of PAEs are still unknown, what justifies Ramirez’s rushed call for immediate action? While benefits have to be weighed against costs, it’s clear that she’s already jumped to the conclusion that the costs outweigh the benefits. In another speech a few months later, Ramirez noted that the “troubling stories” about PAEs “don’t tell us much about the competitive costs and benefits of PAE activity.” Despite this admission, Ramirez called for “a much broader response to flaws in the patent system that fuel inefficient behavior by PAEs.” And while Ramirez said that understanding “the PAE business model will inform the policy dialogue,” she stated that “it will not change the pressing need for additional progress on patent reform.”

Likewise, in an early 2014 speech, Commissioner Julie Brill ignored the study’s inherent limitations and exploratory nature. She predicted that the study “will provide a fuller and more accurate picture of PAE activity” that “will be put to good use by Congress and others who examine closely the activities of PAEs.” Remarkably, Brill stated that “the FTC and other law enforcement agencies” should not “wait on the results of the 6(b) study before undertaking enforcement actions against PAE activity that crosses the line.” Even without the study’s results, she thought that “reforms to the patent system are clearly warranted.” In Brill’s view, the study would only be useful for determining whether “additional reforms are warranted” to curb the activities of PAEs.

It appears that these Commissioners have already decided—in the absence of any reliable data on the systemic effects of PAE activity—that drastic changes to the patent system are necessary. Given their clear bias in this area, there is little hope that they will acknowledge the deep limitations of the study once it is released.

Commentators Jump the Gun

Unsurprisingly, many supporters of the study have filed comments with the FTC arguing that the study is needed to fill the huge void in empirical data on the costs and benefits associated with PAEs. Some even simultaneously argue that the costs of PAEs far outweigh the benefits, suggesting that they have already jumped to their conclusion and just want the data to back it up. Despite the study’s serious limitations, these commentators appear primed to use it to justify their foregone policy recommendations.

For example, the Consumer Electronics Association applauded “the FTC’s efforts to assess the anticompetitive harms that PAEs cause on our economy as a whole,” and it argued that the study “will illuminate the many dimensions of PAEs’ conduct in a way that no other entity is capable.” At the same time, it stated that “completion of this FTC study should not stay or halt other actions by the administrative, legislative or judicial branches to address this serious issue.” The Internet Commerce Coalition stressed the importance of the study of “PAE activity in order to shed light on its effects on competition and innovation,” and it admitted that without the information, “the debate in this area cannot be empirically based.” Nonetheless, it presupposed that the study will uncover “hidden conduct of and abuses by PAEs” and that “it will still be important to reform the law in this area.”

Engine Advocacy admitted that “there is very little broad empirical data about the structure and conduct of patent assertion entities, and their effect on the economy.” It then argued that PAE activity “harms innovators, consumers, startups and the broader economy.” The Coalition for Patent Fairness called on the study “to contribute to the understanding of policymakers and the public” concerning PAEs, which it claimed “impose enormous costs on U.S. innovators, manufacturers, service providers, and, increasingly, consumers and end-users.” And to those suggesting “the potentially beneficial role of PAEs in the patent market,” it stressed that “reform be guided by the principle that the patent system is intended to incentivize and reward innovation,” not “rent-seeking” PAEs that are “exploiting problems.”

The joint comments of Public Knowledge, Electronic Frontier Foundation, & Engine Advocacy emphasized the fact that information about PAEs “currently remains limited” and that what is “publicly known largely consists of lawsuits filed in court and anecdotal information.” Despite admitting that “broad empirical data often remains lacking,” the groups also suggested that the study “does not mean that legislative efforts should be stalled” since “the harms of PAE activity are well known and already amenable to legislative reform.” In fact, they contended not only that “a problem exists,” but that there’s even “reason to believe the scope is even larger than what has already been reported.”

Given this pervasive and unfounded bias against PAEs, there’s little hope that these and other critics will acknowledge the study’s serious limitations. Instead, it’s far more likely that they will point to the study as concrete evidence that even more sweeping changes to the patent system are in order.

Conclusion

While the FTC study may generate interesting information about a handful of firms, it won’t tell us much about how PAEs affect competition and innovation in general. The study is simply not designed to do this. It instead is a fact-finding mission, the results of which could guide future missions. Such empirical research can be valuable, but it’s very important to recognize the limited utility of the information being collected. And it’s crucial not to draw policy conclusions from it. Unfortunately, if the comments of some of the Commissioners and supporters of the study are any indication, many critics have already made up their minds about the net effects of PAEs, and they will likely use the study to perpetuate the biased anti-patent fervor that has captured so much attention in recent years.

Yesterday a federal district court in Washington state granted the FTC’s motion for summary judgment against Amazon in FTC v. Amazon — the case alleging unfair trade practices in Amazon’s design of the in-app purchases interface for apps available in its mobile app store. The headlines score the decision as a loss for Amazon, and the FTC, of course, claims victory. But the court also granted Amazon’s motion for partial summary judgment on a significant aspect of the case, and the Commission’s win may be decidedly pyrrhic.

While the district court (very wrongly, in my view) essentially followed the FTC in deciding that a well-designed user experience doesn’t count as a consumer benefit for assessing substantial harm under the FTC Act, it rejected the Commission’s request for a permanent injunction against Amazon. It also called into question the FTC’s calculation of monetary damages. These last two may be huge. 

The FTC may have “won” the case, but it’s becoming increasingly apparent why it doesn’t want to take these cases to trial. First in Wyndham, and now in Amazon, courts have begun to chip away at the FTC’s expansive Section 5 discretion, even while handing the agency nominal victories.

The Good News

The FTC largely escapes judicial oversight in cases like these because its targets almost always settle (Amazon is a rare exception). These settlements — consent orders — typically impose detailed 20-year injunctions and give the FTC ongoing oversight of the companies’ conduct for the same period. The agency has wielded the threat of these consent orders as a powerful tool to micromanage tech companies, and it currently has at least one consent order in place with Twitter, Google, Apple, Facebook and several others.

As I wrote in a WSJ op-ed on these troubling consent orders:

The FTC prefers consent orders because they extend the commission’s authority with little judicial oversight, but they are too blunt an instrument for regulating a technology company. For the next 20 years, if the FTC decides that Google’s product design or billing practices don’t provide “express, informed consent,” the FTC could declare Google in violation of the new consent decree. The FTC could then impose huge penalties—tens or even hundreds of millions of dollars—without establishing that any consumer had actually been harmed.

Yesterday’s decision makes that outcome less likely. Companies will be much less willing to succumb to the FTC’s 20-year oversight demands if they know that courts may refuse the FTC’s injunction request and accept companies’ own, independent and market-driven efforts to address consumer concerns — without any special regulatory micromanagement.

In the same vein, while the court did find that Amazon was liable for repayment of unauthorized charges made without “express, informed authorization,” it also found the FTC’s monetary damages calculation questionable and asked for further briefing on the appropriate amount. If, as seems likely, it ultimately refuses to simply accept the FTC’s damages claims, that, too, will take some of the wind out of the FTC’s sails. Other companies have settled with the FTC and agreed to 20-year consent decrees in part, presumably, because of the threat of excessive damages if they litigate. That, too, is now less likely to happen.

Collectively, these holdings should help to force the FTC to better target its complaints to cases of still-ongoing and truly harmful practices — the things the FTC Act was really meant to address, like actual fraud. Tech companies trying to navigate ever-changing competitive waters by carefully constructing their user interfaces and payment mechanisms (among other things) shouldn’t be treated the same way as fraudulent phishing scams.

The Bad News

The court’s other key holding is problematic, however. In essence, the court, like the FTC, seems to believe that regulators are better than companies’ product managers, designers and engineers at designing app-store user interfaces:

[A] clear and conspicuous disclaimer regarding in-app purchases and request for authorization on the front-end of a customer’s process could actually prove to… be more seamless than the somewhat unpredictable password prompt formulas rolled out by Amazon.

Never mind that Amazon has undoubtedly spent tremendous resources researching and designing the user experience in its app store. And never mind that — as Amazon is certainly aware — a consumer’s experience of a product is make-or-break in the cut-throat world of online commerce, advertising and search (just ask Jet).

Instead, for the court (and the FTC), the imagined mechanism of “affirmatively seeking a customer’s authorized consent to a charge” is all benefit and no cost. Whatever design decisions may have informed the way Amazon decided to seek consent are either irrelevant, or else the user-experience benefits they confer are negligible.

As I’ve written previously:

Amazon has built its entire business around the “1-click” concept — which consumers love — and implemented a host of notification and security processes hewing as much as possible to that design choice, but nevertheless taking account of the sorts of issues raised by in-app purchases. Moreover — and perhaps most significantly — it has implemented an innovative and comprehensive parental control regime (including the ability to turn off all in-app purchases) — Kindle Free Time — that arguably goes well beyond anything the FTC required in its Apple consent order.

Amazon is not abdicating its obligation to act fairly under the FTC Act and to ensure that users are protected from unauthorized charges. It’s just doing so in ways that also take account of the costs such protections may impose — particularly, in this case, on the majority of Amazon customers who didn’t and wouldn’t suffer such unauthorized charges.

Amazon began offering Kindle Free Time in 2012 as an innovative solution to a problem — children’s access to apps and in-app purchases — that affects only a small subset of Amazon’s customers. To dismiss that effort without considering that Amazon might have made a perfectly reasonable judgment that balanced consumer protection and product design disregards the cost-benefit balancing required by Section 5 of the FTC Act.

Moreover, the FTC Act imposes liability for harm only when it is not “reasonably avoidable.” Kindle Free Time is an outstanding example of an innovative mechanism that allows consumers at risk of unauthorized purchases by children to “reasonably avoid” harm. The court’s and the FTC’s disregard for it is inconsistent with the statute.

Conclusion

The court’s willingness to reinforce the FTC’s blackboard design “expertise” (such as it is) to second guess user-interface and other design decisions made by firms competing in real markets is unfortunate. But there’s a significant silver lining. By reining in the FTC’s discretion to go after these companies as if they were common fraudsters, the court has given consumers an important victory. After all, it is consumers who otherwise bear the costs (both directly and as a result of reduced risk-taking and innovation) of the FTC’s largely unchecked ability to extract excessive concessions from its enforcement targets.

Today’s Canadian Competition Bureau (CCB) Google decision marks yet another regulator joining the chorus of competition agencies around the world that have already dismissed similar complaints relating to Google’s Search or Android businesses (including the US FTC, the Korea FTC, the Taiwan FTC, and AG offices in Texas and Ohio).

A number of courts around the world have also rejected competition complaints against the company, including courts in the US, France, the UK, Germany, and Brazil.

After an extensive, three-year investigation into Google’s business practices in Canada, the CCB

did not find sufficient evidence that Google engaged in [search manipulation, preferential treatment of Google services, syndication agreements, distribution agreements, exclusion of competitors from its YouTube mobile app, or tying of mobile ads with those on PCs and tablets] for an anti-competitive purpose, and/or that the practices resulted in a substantial lessening or prevention of competition in any relevant market.

Like the US FTC, the CCB did find fault with Google’s restrictions on the use of its AdWords API — but Google had already revised those terms worldwide following the FTC investigation, and has committed to the CCB to maintain the revised terms for at least another 5 years.

Other than a negative ruling from Russia’s competition agency last year in favor of Yandex — essentially “the Russian Google,” and one of only a handful of Russian tech companies of significance (surely a coincidence…) — no regulator has found against Google on the core claims brought against it.

True, investigations in a few jurisdictions, including the EU and India, are ongoing. And a Statement of Objections in the EU’s Android competition investigation appears imminent. But at some point, regulators are going to have to take a serious look at the motivations of the entities that bring complaints before wasting more investigatory resources on their behalf.

Competitor after competitor has filed complaints against Google that amount to, essentially, a claim that Google’s superior services make it too hard to compete. But competition law doesn’t require that Google or any other large firm make life easier for competitors. Without a finding of exclusionary harm/abuse of dominance (and, often, injury to consumers), this just isn’t anticompetitive conduct — it’s competition. And the overwhelming majority of competition authorities that have examined the company have agreed.

Exactly when will regulators be a little more skeptical of competitors trying to game the antitrust laws for their own advantage?

Canada joins the chorus

The Canadian decision mirrors the reasoning that regulators around the world have employed in concluding that Google hasn’t engaged in anticompetitive conduct.

Two of the more important results in the CCB’s decision relate to preferential treatment of Google’s services (e.g., promotion of its own Map or Shopping results, instead of links to third-party aggregators of the same services) — the tired “search bias” claim that started all of this — and the distribution agreements that Google enters into with device manufacturers requiring inclusion of Google search as a default installation on Google Android phones.

On these key issues the CCB was unequivocal in its conclusions.

On search bias:

The Bureau sought evidence of the harm allegedly caused to market participants in Canada as a result of any alleged preferential treatment of Google’s services. The Bureau did not find adequate evidence to support the conclusion that this conduct has had an exclusionary effect on rivals, or that it has resulted in a substantial lessening or prevention of competition in a market.

And on search distribution agreements:

Google competes with other search engines for the business of hardware manufacturers and software developers. Other search engines can and do compete for these agreements so they appear as the default search engine…. Consumers can and do change the default search engine on their desktop and mobile devices if they prefer a different one to the pre-loaded default…. Google’s distribution agreements have not resulted in a substantial lessening or prevention of competition in Canada.

And here is the crucial point of the CCB’s insight (which, so far, everyone but Russia seems to appreciate): Despite breathless claims from rivals alleging they can’t compete in the face of their placement in Google’s search results, data barriers to entry, or default Google search on mobile devices, Google does actually face significant competition. Both the search bias and Android distribution claims were dismissed essentially because, whatever competitors may prefer Google do, its conduct doesn’t actually preclude access to competing services.

The True North strong and free [of meritless competitor complaints]

Exclusionary conduct must, well, exclude. But surfacing Google’s own “subjective” search results, even if they aren’t as high quality, doesn’t exclude competitors, according to the CCB and the other regulatory agencies that have also dismissed such claims. Similarly, consumers’ ability to switch search engines (“competition is just a click away,” remember), as well as OEMs’ ability to ship devices with different search engine defaults, ensure that search competitors can access consumers.

Former FTC Commissioner Josh Wright’s analysis of “search bias” in Google’s results applies with equal force to these complaints:

It is critical to recognize that bias alone is not evidence of competitive harm and it must be evaluated in the appropriate antitrust economic context of competition and consumers, rather [than] individual competitors and websites… [but these results] are not useful from an antitrust policy perspective because they erroneously—and contrary to economic theory and evidence—presume natural and procompetitive product differentiation in search rankings to be inherently harmful.

The competitors that bring complaints to antitrust authorities seek to make a demand of Google that is rarely made of any company: that it must provide access to its competitors on equal terms. But one can hardly imagine a valid antitrust complaint arising because McDonald’s refuses to sell a Whopper. The law on duties to deal is heavily circumscribed for good reason, as Josh Wright and I have pointed out:

The [US Supreme] Court [in Trinko] warned that the imposition of a duty to deal would threaten to “lessen the incentive for the monopolist, the rival, or both to invest in… economically beneficial facilities.”… Because imposition of a duty to deal with rivals threatens to decrease the incentive to innovate by creating new ways of producing goods at lower costs, satisfying consumer demand, or creating new markets altogether, courts and antitrust agencies have been reluctant to expand the duty.

Requiring Google to link to other powerful and sophisticated online search companies, or to provide them with placement on Google Android mobile devices, on the precise terms it does its own products would reduce the incentives of everyone to invest in their underlying businesses to begin with.

This is the real threat to competition. And kudos to the CCB for recognizing it.

The CCB’s investigation was certainly thorough, and its decision appears to be well-reasoned. Other regulators should take note before moving forward with yet more costly investigations.

Earlier this month, Federal Communications Commission (FCC) Chairman Tom Wheeler released a “fact sheet” describing his proposal to have the FCC regulate the privacy policies of broadband Internet service providers (ISPs).  Chairman Wheeler’s detailed proposal will be embodied in a Notice of Proposed Rulemaking (NPRM) that the FCC may take up as early as March 31.  The FCC instead should shelve this problematic initiative and leave broadband privacy regulation (to the extent it is warranted) to the Federal Trade Commission (FTC).

In a March 23 speech before the Free State Foundation, FTC Commissioner Maureen Ohlhausen ably summarized the negative economic implications of the NPRM, contrasting the FCC’s proposal with the FTC’s approach to privacy-related enforcement (citations omitted):

The FCC’s proposal differs significantly from the choice architecture the FTC has established under its deception authority.  Our [FTC] deception authority enforces the promises companies make to consumers.  But companies are not required under our deception authority to make such privacy promises.  This is as it should be.  As I’ve already described, unfairness authority sets a baseline by prohibiting practices the vast majority of consumers would not embrace. Mandating practices above this baseline reduces consumer welfare because it denies some consumers options that best match their preferences.  Consumer demand and competitive forces spur companies to make privacy promises.  In fact, nearly all legitimate companies currently make detailed promises about their privacy practices.  This demonstrates a market demand for, and supply of, transparency about company uses of data.  Indeed, recent research . . . shows that broadband ISPs in particular already make strong privacy promises to consumers.

In contrast to the choice framework of the FTC, the FCC’s proposal, according to the recent [Wheeler] fact sheet, seeks to mandate that broadband ISPs adopt a specific opt in / opt-out regime.  The fact sheet repeatedly insists that this is about consumer choice. But, in fact, opt in mandates unavoidably reduce consumer choice. First, one subtle way in which a privacy baseline might be set too high is if the default opt in condition does not match the average consumer preference.  If the FCC mandates opt in for a specific data collection, but a majority of consumers already prefer to share that information, the mandate unnecessarily raises costs to companies and consumers.  Second, opt in mandates prevent unanticipated beneficial uses of data.  An effective and transparent opt-in regime requires that companies know at the time of collection how they will use the collected information. Yet data, including non-sensitive data, often yields significant consumer benefits from uses that could not be known at the time of collection.  Ignoring this, the fact sheet proposes to ban all but a very few uses unless consumers opt in.  This proposed opt in regime would prohibit unforeseeable future uses of collected data, regardless of what consumers would prefer.  This approach is stricter and more limiting than the requirements that other internet companies face. Now, I agree such mandates may be appropriate for certain types of sensitive data such as credit card numbers or SSNs, but they likely will reduce consumer options if applied to non-sensitive data.

If the FCC wished to be consistent with the FTC’s approach of using prohibitions only for widely held consumer preferences, it would take a different approach and simply require opt in for specific, sensitive uses. . . . 

[Furthermore,] [h]ere, the FCC proposes, for the first time ever, to apply a statute created for telephone lines to broadband ISPs. That raises some significant statutory authority issues that the FCC may ultimately need to look to Congress to clarify. . . .

[In addition,] the current FCC proposal appears to reflect the preferences of privacy lobbyists who are frustrated with the lax privacy preferences of average American consumers.  Furthermore, the proposal doesn’t appear to have the support of the minority FCC Commissioners or Congress. 

[Also,] the FCC proposal applies to just one segment of the internet ecosystem, broadband ISPs, even though there is good evidence that ISPs are not uniquely privy to your data. . . .

[In conclusion,] [a]t its core, protecting consumer privacy ought to be about effectuating consumers’ preferences.  If privacy rules impose the preferences of the few on the many, consumers will not be better off.  Therefore, prescriptive baseline privacy mandates like the FCC’s proposal should be reserved for practices that consumers overwhelmingly disfavor.  Otherwise, consumers should remain free to exercise their privacy preferences in the marketplace, and companies should be held to the promises they make.  This approach, which is a time-tested, emergent result of the FTC’s case-by-case application of its statutory authority, offers a good template for the FCC.

Commissioner Ohlhausen’s presentation comports with my May 2015 Heritage Foundation Legal Memorandum, which explained that the FTC’s highly structured, analytic, fact-based approach, combined with its vast experience in privacy and data security investigations, makes it a far better candidate than the FCC to address competition and consumer protection problems in the area of broadband.

Regrettably, there is little reason to believe that the FCC, acting on its own, will heed Commissioner Ohlhausen’s call to focus on consumer preferences in evaluating broadband ISP privacy practices.  What’s worse, the FTC’s ability to act at all in this area is in doubt.  The FCC’s current regulation requiring broadband ISP “net neutrality,” and its proposed regulation of ISP privacy practices, are premised on the dubious reclassification of broadband as a “common carrier” service – and the FTC has no authority over common carriers.  If the D.C. Circuit fails to overturn the FCC’s broadband rule, Congress should carefully consider whether to strip the FCC of regulatory authority in this area (including, of course, privacy practices) and reassign it to the FTC.