
Next week the FCC is slated to vote on the second iteration of Chairman Wheeler’s proposed broadband privacy rules. Of course, as has become all too common, none of us outside the Commission has actually seen the proposal. But earlier this month Chairman Wheeler released a Fact Sheet that suggests some of the ways it would update the rules he initially proposed.

According to the Fact Sheet, the new proposed rules are

designed to evolve with changing technologies and encourage innovation, and are in harmony with other key privacy frameworks and principles — including those outlined by the Federal Trade Commission and the Administration’s Consumer Privacy Bill of Rights.

Unfortunately, the Chairman’s proposal appears to fall short of the mark on both counts.

As I discuss in detail in a letter filed with the Commission yesterday, despite the Chairman’s rhetoric, the rules described in the Fact Sheet fail to align with the FTC’s approach to privacy regulation embodied in its 2012 Privacy Report in at least two key ways:

  • First, the Fact Sheet significantly expands the scope of information that would be considered “sensitive” beyond that contemplated by the FTC. That, in turn, would impose onerous and unnecessary consumer consent obligations on commonplace uses of data, undermining consumer welfare, depriving consumers of information and access to new products and services, and restricting competition.
  • Second, unlike the FTC’s framework, the proposal described by the Fact Sheet ignores the crucial role of “context” in determining the appropriate level of consumer choice before affected companies may use consumer data. Instead, the Fact Sheet takes a rigid, acontextual approach that would stifle innovation and harm consumers.

The Chairman’s proposal moves far beyond the FTC’s definition of “sensitive” information requiring “opt-in” consent

The FTC’s privacy guidance is, in its design at least, appropriately flexible, aimed at balancing the immense benefits of information flows with sensible consumer protections. Thus it eschews an “inflexible list of specific practices” that would automatically trigger onerous consent obligations and “risk[] undermining companies’ incentives to innovate and develop new products and services….”

Under the FTC’s regime, depending on the context in which it is used (on which see the next section, below), the sensitivity of data delineates the difference between data uses that require “express affirmative” (opt-in) consent and those that do not (requiring only “other protections” short of opt-in consent — e.g., opt-out).

Because the distinction is so important — because opt-in consent is much more likely to stanch data flows — the FTC endeavors to provide guidance as to what data should be considered sensitive, and to cabin the scope of activities requiring opt-in consent. Thus, the FTC explains that “information about children, financial and health information, Social Security numbers, and precise geolocation data [should be treated as] sensitive.” But beyond those instances, the FTC doesn’t consider any other type of data as inherently sensitive.

By contrast, and without explanation, Chairman Wheeler’s Fact Sheet significantly expands what constitutes “sensitive” information requiring “opt-in” consent by adding “web browsing history,” “app usage history,” and “the content of communications” to the list of categories of data deemed sensitive in all cases.

By treating some of the most common and important categories of data as always “sensitive,” and by making the sensitivity of data the sole determinant for opt-in consent, the Chairman’s proposal would make it almost impossible for ISPs to make routine (to say nothing of innovative), appropriate, and productive uses of data comparable to those undertaken by virtually every major Internet company.  This goes well beyond anything contemplated by the FTC — with no evidence of any corresponding benefit to consumers and with obvious harm to competition, innovation, and the overall economy online.

And because the Chairman’s proposal would impose these inappropriate and costly restrictions only on ISPs, it would create a barrier to competition by ISPs in other platform markets, without offering a defensible consumer protection rationale to justify either the disparate treatment or the restriction on competition.

As Fred Cate and Michael Staten have explained,

“Opt-in” offers no greater privacy protection than allowing consumers to “opt-out”…, yet it imposes significantly higher costs on consumers, businesses, and the economy.

Not surprisingly, these costs fall disproportionately on the relatively poor and the less technology-literate. In the former case, opt-in requirements may deter companies from offering services at all, even to people who would make a very different trade-off between privacy and monetary price. In the latter case, because an initial decision to opt-in must be taken in relative ignorance, users without much experience to guide their decisions will face effectively higher decision-making costs than more knowledgeable users.

The Chairman’s proposal ignores the central role of context in the FTC’s privacy framework

In part for these reasons, central to the FTC’s more flexible framework is the establishment of a sort of “safe harbor” for data uses where the benefits clearly exceed the costs and consumer consent may be inferred:

Companies do not need to provide choice before collecting and using consumer data for practices that are consistent with the context of the transaction or the company’s relationship with the consumer….

Thus for many straightforward uses of data, the “context of the transaction,” not the asserted “sensitivity” of the underlying data, is the threshold question in evaluating the need for consumer choice in the FTC’s framework.

Chairman Wheeler’s Fact Sheet, by contrast, ignores this central role of context in its analysis. Instead, it focuses solely on data sensitivity, claiming that doing so is “in line with customer expectations.”

But this is inconsistent with the FTC’s approach.

In fact, the FTC’s framework explicitly rejects a pure “consumer expectations” standard:

Rather than relying solely upon the inherently subjective test of consumer expectations, the… standard focuses on more objective factors related to the consumer’s relationship with a business.

And while everyone agrees that sensitivity is a key part of pegging privacy regulation to actual consumer and corporate relationships, the FTC also recognizes that the importance of the sensitivity of the underlying data varies with the context in which it is used. Or, in the words of the White House’s 2012 Consumer Data Privacy in a Networked World Report (introducing its Consumer Privacy Bill of Rights), “[c]ontext should shape the balance and relative emphasis of particular principles” guiding the regulation of privacy.

By contrast, Chairman Wheeler’s “sensitivity-determines-consumer-expectations” framing is a transparent attempt to claim fealty to the FTC’s (and the Administration’s) privacy standards while actually implementing a privacy regime that is flatly inconsistent with them.

The FTC’s approach isn’t perfect, but that’s no excuse to double down on its failings

The FTC’s privacy guidance, and even more so its privacy enforcement practices under Section 5, are far from perfect. The FTC should be commended for its acknowledgement that consumers’ privacy preferences and companies’ uses of data will change over time, and that there are trade-offs inherent in imposing any constraints on the flow of information. But even the FTC fails to actually assess the magnitude of the costs and benefits of, and the deep complexities involved in, the trade-off, and puts an unjustified thumb on the scale in favor of limiting data use.  

But that’s no excuse for Chairman Wheeler to ignore what the FTC gets right, and to double down on its failings. Based on the Fact Sheet (and the initial NPRM), it’s a virtual certainty that the Chairman’s proposal doesn’t heed the FTC’s refreshing call for humility and flexibility regarding the application of privacy rules to ISPs (and other Internet platforms):

These are complex and rapidly evolving areas, and more work should be done to learn about the practices of all large platform providers, their technical capabilities with respect to consumer data, and their current and expected uses of such data.

The rhetoric of the Chairman’s Fact Sheet is correct: the FCC should in fact conform its approach to privacy to the framework established by the FTC. Unfortunately, the reality of the Fact Sheet simply doesn’t comport with its rhetoric.

As the FCC’s vote on the Chairman’s proposal rapidly nears, and in light of its significant defects, we can only hope that the rest of the Commission refrains from reflexively adopting the proposed regime, and works to ensure that these problematic deviations from the FTC’s framework are addressed before moving forward.

On October 6, 2016, the U.S. Federal Trade Commission (FTC) issued Patent Assertion Entity Activity: An FTC Study (PAE Study), its much-anticipated report on patent assertion entity (PAE) activity.  The PAE Study defined PAEs as follows:

Patent assertion entities (PAEs) are businesses that acquire patents from third parties and seek to generate revenue by asserting them against alleged infringers.  PAEs monetize their patents primarily through licensing negotiations with alleged infringers, infringement litigation, or both. In other words, PAEs do not rely on producing, manufacturing, or selling goods.  When negotiating, a PAE’s objective is to enter into a royalty-bearing or lump-sum license.  When litigating, to generate any revenue, a PAE must either settle with the defendant or ultimately prevail in litigation and obtain relief from the court.

The FTC was mindful of the costs that compulsory process would impose on the PAEs required to respond to the agency’s requests for information.  Accordingly, the FTC obtained information from only 22 PAEs, 18 of which it called “Litigation PAEs” (which “typically sued potential licensees and settled shortly afterward by entering into license agreements with defendants covering small portfolios,” usually yielding total royalties of under $300,000) and 4 of which it dubbed “Portfolio PAEs” (which typically negotiated multimillion-dollar licenses covering large portfolios of patents and raised their capital through institutional investors or manufacturing firms).

Furthermore, the FTC’s research was narrowly targeted, not broad-based.  The agency explained that “[o]f all the patents held by PAEs in the FTC’s study, 88% fell under the Computers & Communications or Other Electrical & Electronic technology categories, and more than 75% of the Study PAEs’ overall holdings were software-related patents.”  Consistent with the nature of this sample, the FTC concentrated primarily on a case study of PAE activity in the wireless chipset sector.  The case study revealed that PAEs were more likely to assert their patents through litigation than were wireless manufacturers, and that “30% of Portfolio PAE wireless patent licenses and nearly 90% of Litigation PAE wireless patent licenses resulted from litigation, while only 1% of Wireless Manufacturer wireless patent licenses resulted from litigation.”  But perhaps more striking than what the FTC found was what it did not uncover.  Due to data limitations, “[t]he FTC . . . [did not] attempt[] to determine if the royalties received by Study PAEs were higher or lower than those that the original assignees of the licensed patents could have earned.”  In addition, the case study did “not report how much revenue PAEs shared with others, including independent inventors, or the costs of assertion activity.”

Curiously, the PAE Study also leaped to certain conclusions regarding PAE settlements based on questionable assumptions and without considering legitimate potential incentives for such settlements.  Thus, for example, the FTC found it particularly significant that 77% of Litigation PAE settlements were for less than $300,000.  Why?  Because $300,000 was a “de facto benchmark” for nuisance litigation settlements, a conclusion based merely on one American Intellectual Property Law Association study claiming that defending a non-practicing entity patent lawsuit through the end of discovery costs between $300,000 and $2.5 million, depending on the amount in controversy.  In light of that one study, the FTC surmised “that discovery costs, and not the technological value of the patent, may set the benchmark for settlement value in Litigation PAE cases.”  Thus, according to the FTC, “the behavior of Litigation PAEs is consistent with nuisance litigation.”  As noted patent lawyer Gene Quinn has pointed out, however, the FTC ignored the eminently logical alternative possibility that many settlements for less than $300,000 merely represented reasonable valuations of the patent rights at issue.  Quinn pithily stated:

[T]he reality is the FTC doesn’t know enough about the industry to understand that $300,000 is an arbitrary line in the sand that holds no relevance in the real world. For the very same reason that they said the term “patent troll” is unhelpful (i.e., because it inappropriately discriminates against rights owners without understanding the business model and practices), so too is $300,000 equally unhelpful. Without any understanding or appreciation of the value of the core innovation subject to the license there is no way to know whether a license is being offered for nuisance value or whether it is being offered at full, fair and appropriate value to compensate the patent owner for the infringement they had to chase down in litigation.

I thought the FTC was charged with ensuring fair business practices? It seems what they are doing is radically discriminating against incremental innovations valued at less than $300,000 and actually encouraging patent owners to charge more for their licenses than they are worth so they don’t get labeled a nuisance. Talk about perverse incentives! The FTC should stick to areas where they have subject matter competence and leave these patent issues to the experts.     

In sum, the FTC found that in one particular specialized industry sector featuring a certain  category of patents (software patents), PAEs tended to sue more than manufacturers before agreeing to licensing terms – hardly a surprising finding or a sign of a problem.  (To the contrary, the existence of “substantial” PAE litigation that led to licenses might be a sign that PAEs were acting as efficient intermediaries representing the interests and effectively vindicating the rights of small patentees.)  The FTC was not, however, able to comment on the relative levels of royalties, the extent to which PAE revenues were distributed to inventors, or the costs of PAE litigation (as opposed to any other sort of litigation).  Additionally, the FTC made certain assumptions about certain PAE litigation settlements that ignored reasonable alternative explanations for the behavior that was observed.  Accordingly, the reasonable observer would conclude from this that the agency was (to say the least) in no position to make any sort of policy recommendations, given the absence of any hard evidence of PAE abuses or excessive waste from litigation.

Unfortunately, the reasonable observer would be mistaken.  The FTC recommended reforms to: (1) address discovery burden and “cost asymmetries” (the notion that PAEs are less subject to costly counterclaims because they are not producers) in PAE litigation; (2) provide the courts and defendants with more information about the plaintiffs that have filed infringement lawsuits; (3) streamline multiple cases brought against defendants on the same theories of infringement; and (4) provide sufficient notice of these infringement theories as courts continue to develop heightened pleading requirements for patent cases.

Without getting into the merits of these individual suggestions (and without in any way denigrating the hard work and dedication of the highly talented FTC staffers who drafted the PAE Study), it is sufficient to note that they bear no logical relationship to the factual findings of the report.  The recommendations, which closely echo certain elements of various “patent reform” legislative proposals that have been floated in recent years, could have been advanced before any data had been gathered – with savings to the companies that had to respond.  In short, the recommendations are classic pre-baked “solutions” to problems that have long been hypothesized.  Advancing such recommendations based on discrete information regarding a small, skewed sample of PAEs – without obtaining crucial information on the direct costs and benefits of the PAE transactions being observed, or the incentive effects of PAE activity – is at odds with the FTC’s proud tradition of empirical research.  Unfortunately, Devin Hartline of the Antonin Scalia Law School proved prescient when commenting last April on the possible problems with the PAE Study, based on what was known about it prior to its release (and based on the preliminary thoughts of noted economists and law professors):

While the FTC study may generate interesting information about a handful of firms, it won’t tell us much about how PAEs affect competition and innovation in general.  The study is simply not designed to do this.  It instead is a fact-finding mission, the results of which could guide future missions.  Such empirical research can be valuable, but it’s very important to recognize the limited utility of the information being collected.  And it’s crucial not to draw policy conclusions from it.  Unfortunately, if the comments of some of the Commissioners and supporters of the study are any indication, many critics have already made up their minds about the net effects of PAEs, and they will likely use the study to perpetuate the biased anti-patent fervor that has captured so much attention in recent years.

To the extent patent reform is warranted, it should be considered carefully in a measured fashion, with full consideration given to the costs, benefits, and potential unintended consequences of suggested changes to the patent system and to litigation procedures.  As John Malcolm and I explained in a 2015 Heritage Foundation Legal Backgrounder which explored the relative merits of individual proposed reforms:

Before deciding to take action, Congress should weigh the particular merits of individual reform proposals carefully and meticulously, taking into account their possible harmful effects as well as their intended benefits. Precipitous, unreflective action on legislation is unwarranted, and caution should be the byword, especially since the effects of 2011 legislative changes and recent Supreme Court decisions have not yet been fully absorbed. Taking time is key to avoiding the serious and costly errors that too often are the fruit of omnibus legislative efforts.

Notably, this Legal Backgrounder also noted potential beneficial aspects of PAE activity that were not reflected in the PAE Study:

[E]ven entities whose business model relies on purchasing patents and licensing them or suing those who refuse to enter into licensing agreements and infringe those patents can serve a useful—even a vital—purpose. Some infringers may be large companies that infringe the patents of smaller companies or individual inventors, banking on the fact that such a small-time inventor will be less likely to file a lawsuit against a well-financed entity. Patent aggregators, often backed by well-heeled investors, help to level the playing field and can prevent such abuses.

More important, patent aggregators facilitate an efficient division of labor between inventors and those who wish to use those inventions for the betterment of their fellow man, allowing inventors to spend their time doing what they do best: inventing. Patent aggregators can expand access to patent pools that allow third parties to deal with one vendor instead of many, provide much-needed capital to inventors, and lead to a variety of licensing and sublicensing agreements that create and reflect a valuable and vibrant marketplace for patent holders and provide the kinds of incentives that spur innovation. They can also aggregate patents for litigation purposes, purchasing patents and licensing them in bundles.

This has at least two advantages: It can reduce the transaction costs for licensing multiple patents, and it can help to outsource and centralize patent litigation for multiple patent holders, thereby decreasing the costs associated with such litigation. In the copyright space, the American Society of Composers, Authors, and Publishers (ASCAP) plays a similar role.

All of this is to say that there can be good patent assertion entities that seek licensing agreements and file claims to enforce legitimate patents and bad patent assertion entities that purchase broad and vague patents and make absurd demands to extort license payments or settlements. The proper way to address patent trolls, therefore, is by using the same means and methods that would likely work against ambulance chasers or other bad actors who exist in other areas of the law, such as medical malpractice, securities fraud, and product liability—individuals who gin up or grossly exaggerate alleged injuries and then make unreasonable demands to extort settlements up to and including filing frivolous lawsuits.

In conclusion, the FTC would be well advised to avoid putting forth patent reform recommendations based on the findings of the PAE Study.  At the very least, it should explicitly weigh the implications of other research, which explores PAE-related efficiencies and considers all the ramifications of procedural and patent law changes, before seeking to advance any “PAE reform” recommendations.

On October 6, the Heritage Foundation released a legal memorandum (authored by me) that recounts the Federal Communications Commission’s (FCC) recent sad history of ignoring the rule of law in its enforcement and regulatory actions.  The memorandum calls for a legislative reform agenda to rectify this problem by reining in the agency.  Key points culled from the memorandum are highlighted below (footnotes omitted).

1.  Background: The Rule of Law

The American concept of the rule of law is embodied in the Due Process Clause of the Fifth Amendment to the U.S. Constitution and in the constitutional principles of separation of powers, an independent judiciary, a government under law, and equality of all before the law.  As the late Friedrich Hayek explained:

[The rule of law] means the government in all its actions is bound by rules fixed and announced beforehand—rules which make it possible to see with fair certainty how the authority will use its coercive powers in given circumstances and to plan one’s individual affairs on the basis of this knowledge.

In other words, the rule of law involves a system of binding rules that have been adopted and applied by a valid government authority and that embody clarity, predictability, and equal applicability.   Practices employed by government agencies that undermine the rule of law ignore a fundamental duty that the government owes its citizens and thereby weaken America’s constitutional system.  It follows, therefore, that close scrutiny of federal administrative agencies’ activities is particularly important in helping to achieve public accountability for an agency’s failure to honor the rule of law standard.

2.  How the FCC Flouts the Rule of Law

Applying such scrutiny to the FCC reveals that it does a poor job in adhering to rule of law principles, both in its procedural practices and in various substantive actions that it has taken.

Opaque procedures that generate uncertainty regarding agency plans undermine the clarity and predictability of agency actions and thereby weaken the effectiveness of rule of law safeguards.  Process-based reforms designed to deal with these problems, to the extent that they succeed, strengthen the rule of law.  Procedural inadequacies at the FCC include inordinate delays and a lack of transparency, including the failure to promptly release the text of proposed and final rules.  The FCC itself has admitted that procedural improvements are needed, and legislative proposals have been advanced to make the Commission more transparent, efficient, and accountable.

Nevertheless, mere procedural reforms would not address the far more serious problem of FCC substantive actions that flout the rule of law.  Examples abound:

  • The FCC imposes a variety of “public interest” conditions on proposed mergers subject to its jurisdiction. Those conditions often are announced after inordinate delays, and typically have no bearing on the mergers’ actual effects.  The unpredictable nature and timing of such impositions generate a lack of certainty for businesses and thereby undermine the rule of law.
  • The FCC’s 2015 Municipal Broadband Order preempted state laws in Tennessee and North Carolina that prevented municipally owned broadband providers from providing broadband service beyond their geographic boundaries. Apart from its substantive inadequacies, this Order went beyond the FCC’s statutory authority and raised grave federalism problems (by interfering with a state’s sovereign right to oversee its municipalities), thereby ignoring the constitutional limitations placed on the exercise of governmental powers that lie at the heart of the rule of law.  The Order was struck down by the U.S. Court of Appeals for the Sixth Circuit in August 2016.
  • The FCC’s 2015 “net neutrality” rule (the Open Internet Order) subjects internet service providers (ISPs) to sweeping “reasonableness-based” FCC regulatory oversight. This “reasonableness” standard gives the FCC virtually unbounded discretion to impose sanctions on ISPs.  It does not provide, in advance, a knowable, predictable rule consistent with due process and rule of law norms.  In the dynamic and fast-changing “Internet ecosystem,” this lack of predictable guidance is a major drag on innovation.  Regrettably, in June 2016, a panel of the U.S. Court of Appeals for the District of Columbia Circuit, by a two-to-one vote, rejected a challenge to the Order brought by ISPs and their trade association.
  • The FCC’s abrupt 2014 extension of its long-standing rules restricting common ownership of local television broadcast stations to encompass Joint Sales Agreements (JSAs) likewise undermined the rule of law. JSAs, which allow one television station to sell advertising (but not programming) on another station, have long been used by stations that had no reason to believe that their actions in any way constituted illegal “ownership interests,” especially since many of them were originally approved by the FCC.  The U.S. Court of Appeals for the Third Circuit wisely vacated the television JSA rule in May 2016, stressing that the FCC had violated a statutory command by failing to carry out in a timely fashion the quadrennial review of the television ownership rules on which the JSA rule was based.
  • The FCC’s February 2016 proposed rules, designed to “open” the market for video set-top boxes, appear to fly in the face of federal laws and treaty language protecting intellectual property rights by arbitrarily denying protection to intellectual property based solely on a particular mode of information transmission. Such a denial is repugnant to rule of law principles.
  • FCC enforcement practices also show a lack of respect for rule of law principles, by seeking to obtain sanctions against behavior that has never been deemed contrary to law or regulatory edicts. Two examples illustrate this point.
    • In 2014, the FCC’s Enforcement Bureau proposed imposing a $10 million fine on TerraCom, Inc., and YourTelAmerica, Inc., two small telephone companies, for a data breach that exposed certain personally identifiable information to unauthorized access. In so doing, the FCC cited provisions of the Telecommunications Act of 1996 and accompanying regulations that had never been construed to authorize sanctions for failure to adopt “reasonable data security practices” to protect sensitive consumer information.
    • In November 2015, the FCC similarly imposed a $595,000 fine on Cox Communications for failure to prevent a data breach committed by a third-party hacker, although no statutory or regulatory language supported imposing any penalty on a firm that was itself victimized by a hack attack.

3.  Legislative Reforms to Rein in the FCC

What is to be done?  One sure way to limit an agency’s ability to flout the rule of law is to restrict the scope of its legal authority.  As a matter of first principles, Congress should therefore examine the FCC’s activities with an eye to eliminating its jurisdiction over areas in which regulation is no longer needed.  For example, residual price regulation may be unnecessary in all markets where competition is effective. Regulation is called for only in the presence of serious market failure, coupled with strong evidence that government intervention will yield a better economic outcome than will a decision not to regulate.

Congress should craft legislation designed to sharply restrict the FCC’s ability to flout the rule of law.  At a minimum, no matter how it decides to pursue broad FCC reform, the following five proposals merit special congressional attention as a means of advancing rule of law principles:

  • Eliminate the FCC’s jurisdiction over all mergers. The federal antitrust agencies are best equipped to handle merger analysis, and this source of costly delay and uncertainty regarding ad hoc restrictive conditions should be eliminated.
  • Eliminate the FCC’s jurisdiction over broadband Internet service. Given the benefits associated with an open and unregulated Internet, Congress should provide clearly and unequivocally that the FCC has no jurisdiction, direct or indirect, in this area.
  • Shift FCC regulatory authority over broadband-related consumer protection (including, for example, deceptive advertising, privacy, and data protection) and competition to the Federal Trade Commission, which has longstanding experience and expertise in the area. This jurisdictional transfer would promote clarity and reduce uncertainty, thereby strengthening the rule of law.
  • Require that before taking regulatory action, the FCC carefully scrutinize regulatory language to seek to avoid the sorts of rule of law problems that have plagued prior commission rulemakings.
  • Require that the FCC not seek fines in an enforcement action unless the alleged infraction involves a violation of the precise language of a regulation or statutory provision.

4.  Conclusion

In recent years, the FCC too often has acted in a manner that undermines the rule of law. Internal agency reforms might be somewhat helpful in rectifying this situation, but they inevitably would be limited in scope and inherently malleable as FCC personnel changes. Accordingly, Congress should weigh major statutory reforms to rein in the FCC—reforms that will advance the rule of law and promote American economic well-being.

This week, the International Center for Law & Economics filed comments  on the proposed revision to the joint U.S. Federal Trade Commission (FTC) – U.S. Department of Justice (DOJ) Antitrust-IP Licensing Guidelines. Overall, the guidelines present a commendable framework for the IP-Antitrust intersection, in particular as they broadly recognize the value of IP and licensing in spurring both innovation and commercialization.

Although our assessment of the proposed guidelines is generally positive,  we do go on to offer some constructive criticism. In particular, we believe, first, that the proposed guidelines should more strongly recognize that a refusal to license does not deserve special scrutiny; and, second, that traditional antitrust analysis is largely inappropriate for the examination of innovation or R&D markets.

On refusals to license,

Many of the product innovation cases that have come before the courts rely upon what amounts to an implicit essential facilities argument. The theories that drive such cases, although not explicitly relying upon the essential facilities doctrine, encourage claims based on variants of arguments about interoperability and access to intellectual property (or products protected by intellectual property). But, the problem with such arguments is that they assume, incorrectly, that there is no opportunity for meaningful competition with a strong incumbent in the face of innovation, or that the absence of competitors in these markets indicates inefficiency … Thanks to the very elements of IP that help them to obtain market dominance, firms in New Economy technology markets are also vulnerable to smaller, more nimble new entrants that can quickly enter and supplant incumbents by leveraging their own technological innovation.

Further, since a right to exclude is a fundamental component of IP rights, a refusal to license IP should continue to be generally considered as outside the scope of antitrust inquiries.

And, with respect to conducting antitrust analysis of R&D or innovation “markets,” we note first that “it is the effects on consumer welfare against which antitrust analysis and remedies are measured” before going on to note that the nature of R&D makes its effects on consumer welfare very difficult to measure. Thus, we recommend that the agencies continue to focus on actual goods and services markets:

[C]ompetition among research and development departments is not necessarily a reliable driver of innovation … R&D “markets” are inevitably driven by a desire to innovate with no way of knowing exactly what form or route such an effort will take. R&D is an inherently speculative endeavor, and standard antitrust analysis applied to R&D will be inherently flawed because “[a] challenge for any standard applied to innovation is that antitrust analysis is likely to occur after the innovation, but ex post outcomes reveal little about whether the innovation was a good decision ex ante, when the decision was made.”

Public comments on the proposed revision to the joint U.S. Federal Trade Commission (FTC) – U.S. Department of Justice (DOJ) Antitrust-IP Licensing Guidelines have, not surprisingly, focused primarily on fine points of antitrust analysis carried out by those two federal agencies (see, for example, the thoughtful recommendations by the Global Antitrust Institute, here).  In a September 23 submission to the FTC and the DOJ, however, U.S. International Trade Commissioner F. Scott Kieff focused on a broader theme – that patent-antitrust assessments should keep in mind the indirect effects on commercialization that stem from IP (and, in particular, patents).  Kieff argues that antitrust enforcers have employed a public law “rules-based” approach that balances the “incentive to innovate” created when patents prevent copying against the goals of competition.  In contrast, Kieff characterizes the commercialization approach as rooted in the property rights nature of patents and the use of private contracting to bring together complementary assets and facilitate coordination.  As Kieff explains (in italics, footnote citations deleted):

A commercialization approach to IP views IP more in the tradition of private law, rather than public law. It does so by placing greater emphasis on viewing IP as property rights, which in turn is accomplished by greater reliance on interactions among private parties over or around those property rights, including via contracts. Centered on the relationships among private parties, this approach to IP emphasizes a different target and a different mechanism by which IP can operate. Rather than target particular individuals who are likely to respond to IP as incentives to create or invent in particular, this approach targets a broad, diverse set of market actors in general; and it does so indirectly. This broad set of indirectly targeted actors encompasses the creator or inventor of the underlying IP asset as well as all those complementary users of a creation or an invention who can help bring it to market, such as investors (including venture capitalists), entrepreneurs, managers, marketers, developers, laborers, and owners of other key assets, tangible and intangible, including other creations or inventions. Another key difference in this approach to IP lies in the mechanism by which these private actors interact over and around IP assets. This approach sees IP rights as tools for facilitating coordination among these diverse private actors, in furtherance of their own private interests in commercializing the creation or invention.

This commercialization approach sees property rights in IP serving a role akin to beacons in the dark, drawing to themselves all of those potential complementary users of the IP-protected-asset to interact with the IP owner and each other. This helps them each explore through the bargaining process the possibility of striking contracts with each other.

Several payoffs can flow from using this commercialization approach. Focusing on such a beacon-and-bargain effect can relieve the governmental side of the IP system of the need to amass the detailed information required to reasonably tailor a direct targeted incentive, such as each actor’s relative interests and contributions, needs, skills, or the like. Not only is amassing all of that information hard for the government to do, but large, established market actors may be better able than smaller market entrants to wield the political influence needed to get the government to act, increasing risk of concerns about political economy, public choice, and fairness. Instead, when governmental bodies closely adhere to a commercialization approach, each private party can bring its own expertise and other assets to the negotiating table while knowing—without necessarily having to reveal to other parties or the government—enough about its own level of interest and capability when it decides whether to strike a deal or not.            

Such successful coordination may help bring new business models, products, and services to market, thereby decreasing anticompetitive concentration of market power. It also can allow IP owners and their contracting parties to appropriate the returns to any of the rival inputs they invested towards developing and commercializing creations or inventions—labor, lab space, capital, and the like. At the same time, the government can avoid having to then go back to evaluate and trace the actual relative contributions that each participant brought to a creation’s or an invention’s successful commercialization—including, again, the cost of obtaining and using that information and the associated risks of political influence—by enforcing the terms of the contracts these parties strike with each other to allocate any value resulting from the creation’s or invention’s commercialization. In addition, significant economic theory and empirical evidence suggests this can all happen while the quality-adjusted prices paid by many end users actually decline and public access is high. In keeping with this commercialization approach, patents can be important antimonopoly devices, helping a smaller “David” come to market and compete against a larger “Goliath.”

A commercialization approach thereby mitigates many of the challenges raised by the tension that is a focus of the other intellectual approaches to IP, as well as by the responses these other approaches have offered to that tension, including some – but not all – types of AT regulation and enforcement. Many of the alternatives to IP that are often suggested by other approaches to IP, such as rewards, tax credits, or detailed rate regulation of royalties by AT enforcers can face significant challenges in facilitating the private sector coordination benefits envisioned by the commercialization approach to IP. While such approaches often are motivated by concerns about rising prices paid by consumers and direct benefits paid to creators and inventors, they may not account for the important cases in which IP rights are associated with declines in quality-adjusted prices paid by consumers and other forms of commercial benefits accrued to the entire IP production team as well as to consumers and third parties, which are emphasized in a commercialization approach. In addition, a commercialization approach can embrace many of the practical checks on the market power of an IP right that are often suggested by other approaches to IP, such as AT review, government takings, and compulsory licensing. At the same time this approach can show the importance of maintaining self-limiting principles within each such check to maintain commercialization benefits and mitigate concerns about dynamic efficiency, public choice, fairness, and the like.

To be sure, a focus on commercialization does not ignore creators or inventors or creations or inventions themselves. For example, a system successful in commercializing inventions can have the collateral benefit of providing positive incentives to those who do invent through the possibility of sharing in the many rewards associated with successful commercialization. Nor does a focus on commercialization guarantee that IP rights cause more help than harm. Significant theoretical and empirical questions remain open about benefits and costs of each approach to IP. And significant room to operate can remain for AT enforcers pursuing their important public mission, including at the IP-AT interface.

Commissioner Kieff’s evaluation is in harmony with other recent scholarly work, including Professor Dan Spulber’s explanation that the actual nature of long-term private contracting arrangements among patent licensors and licensees avoids alleged competitive “imperfections,” such as harmful “patent hold-ups,” “patent thickets,” and “royalty stacking” (see my discussion here).  More generally, Commissioner Kieff’s latest pronouncement is part of a broader and growing theoretical and empirical literature that demonstrates close associations between strong patent systems and economic growth and innovation (see, for example, here).

There is a major lesson here for U.S. (and foreign) antitrust enforcement agencies.  As I have previously pointed out (see, for example, here), in recent years, antitrust enforcers here and abroad have taken positions that tend to weaken patent rights.  Those positions typically are justified by the existence of “patent policy deficiencies” such as those that Professor Spulber’s paper debunks, as well as an alleged epidemic of low quality “probabilistic patents” (see, for example, here) – justifications that ignore the substantial economic benefits patents confer on society through contracting and commercialization.  It is high time for antitrust to accommodate the insights drawn from this new learning.  Specifically, government enforcers should change their approach and begin incorporating private law/contracting/commercialization considerations into patent-antitrust analysis, in order to advance the core goals of antitrust – the promotion of consumer welfare and efficiency.  Better yet, if the FTC and DOJ truly want to maximize the net welfare benefits of antitrust, they should undertake a more general “policy reboot” and adopt a “decision-theoretic” error cost approach to enforcement policy, rooted in cost-benefit analysis (see here) and consistent with the general thrust of Roberts Court antitrust jurisprudence (see here).

The Global Antitrust Institute (GAI) at George Mason University’s Antonin Scalia Law School released today a set of comments on the joint U.S. Department of Justice (DOJ) – Federal Trade Commission (FTC) August 12 Proposed Update to their 1995 Antitrust Guidelines for the Licensing of Intellectual Property (Proposed Update).  As has been the case with previous GAI filings (see here, for example), today’s GAI Comments are thoughtful and on the mark.

For those of you who are pressed for time, the latest GAI comments make these major recommendations (summary in italics):

Standard Essential Patents (SEPs):  The GAI Comments commended the DOJ and the FTC for preserving the principle that the antitrust framework is sufficient to address potential competition issues involving all IPRs—including both SEPs and non-SEPs.  In doing so, the DOJ and the FTC correctly rejected the invitation to adopt a special brand of antitrust analysis for SEPs in which effects-based analysis was replaced with unique presumptions and burdens of proof. 

o   The GAI Comments noted that, as FTC Chairwoman Edith Ramirez has explained, “the same key enforcement principles [found in the 1995 IP Guidelines] also guide our analysis when standard essential patents are involved.”

o   This is true because SEP holders, like other IP holders, do not necessarily possess market power in the antitrust sense, and conduct by SEP holders, including breach of a voluntary assurance to license their SEPs on fair, reasonable, and nondiscriminatory (FRAND) terms, does not necessarily result in harm to the competitive process or to consumers.

o   Again, as Chairwoman Ramirez has stated, “it is important to recognize that a contractual dispute over royalty terms, whether the rate or the base used, does not in itself raise antitrust concerns.”

Refusals to License:  The GAI Comments expressed concern that the statements regarding refusals to license in Sections 2.1 and 3 of the Proposed Update seem to depart from the general enforcement approach set forth in the 2007 DOJ-FTC IP Report in which those two agencies stated that “[a]ntitrust liability for mere unilateral, unconditional refusals to license patents will not play a meaningful part in the interface between patent rights and antitrust protections.”  The GAI recommended that the DOJ and the FTC incorporate this approach into the final version of their updated IP Guidelines.

“Unreasonable Conduct”:  The GAI Comments recommended that Section 2.2 of the Proposed Update be revised to replace the phrase “unreasonable conduct” with a clear statement that the agencies will only condemn licensing restraints when anticompetitive effects outweigh procompetitive benefits.

R&D Markets:  The GAI Comments urged the DOJ and the FTC to reconsider the inclusion (or, at the very least, substantially limit the use) of research and development (R&D) markets because: (1) the process of innovation is often highly speculative and decentralized, making it impossible to identify all potential market participants; (2) the optimal relationship between R&D and innovation is unknown; (3) the market structure most conducive to innovation is unknown; (4) the capacity to innovate is hard to monopolize given that the components of modern R&D—research scientists, engineers, software developers, laboratories, computer centers, etc.—are continuously available on the market; and (5) anticompetitive conduct can be challenged under the actual potential competition theory or at a later time.

While the GAI Comments are entirely on point, even if their recommendations are all adopted, much more needs to be done.  The Proposed Update, while relatively sound, should be viewed in the larger context of the Obama Administration’s unfortunate use of antitrust policy to weaken patent rights (see my article here, for example).  In addition to strengthening the revised Guidelines, as suggested by the GAI, the DOJ and the FTC should work with other component agencies of the next Administration – including the Patent Office and the White House – to signal enhanced respect for IP rights in general.  In short, a general turnaround in IP policy is called for, in order to spur American innovation, which has been all too lacking in recent years.

Section 5(a)(2) of the Federal Trade Commission (FTC) Act authorizes the FTC to “prevent persons, partnerships, or corporations, except . . . common carriers subject to the Acts to regulate commerce . . . from using unfair methods of competition in or affecting commerce and unfair or deceptive acts or practices in or affecting commerce.”  On August 29, in FTC v. AT&T, the Ninth Circuit issued a decision that exempts non-common carrier data services from FTC jurisdiction merely because they are offered by a company that has common carrier status.  This case involved an FTC allegation that AT&T had “throttled” data (slowed down Internet service) for “unlimited mobile data” customers without adequate consent or disclosures, in violation of Section 5 of the FTC Act.  The FTC had claimed that although AT&T’s mobile wireless voice services were a common carrier service, the company’s mobile wireless data services were not and, thus, were subject to FTC oversight.  Reversing a federal district court’s refusal to grant AT&T’s motion to dismiss, the Ninth Circuit concluded that “when Congress used the term ‘common carrier’ in the FTC Act, [there is no indication] it could only have meant ‘common carrier to the extent engaged in common carrier activity.’”  The Ninth Circuit therefore determined that “a literal reading of the words Congress selected simply does not comport with [the FTC’s] activity-based approach.”  The FTC’s pending case against AT&T in the Northern District of California (which is within the Ninth Circuit) regarding alleged unfair and deceptive advertising of satellite services by AT&T subsidiary DIRECTV (see here) could be affected by this decision.

The Ninth Circuit’s AT&T holding threatens to further extend the FCC’s jurisdictional reach at the expense of the FTC.  It comes on the heels of the divided D.C. Circuit’s benighted and ill-reasoned decision (see here) upholding the FCC’s “Open Internet Order,” including its decision to reclassify Internet broadband service as a common carrier service.  That decision subjects broadband service to heavy-handed and costly FCC “consumer protection” regulation, including in the area of privacy.  The FCC’s overly intrusive approach stands in marked contrast to the economic efficiency considerations (albeit not always perfectly applied) that underlie the FTC’s consumer protection mode of analysis.  As I explained in a May 2015 Heritage Foundation Legal Memorandum, the FTC’s highly structured, analytic, fact-based methodology, combined with its vast experience in privacy and data security investigations, makes it a far better candidate than the FCC to address competition and consumer protection problems in the area of broadband.

I argued in this space in March 2016 that, should the D.C. Circuit uphold the FCC’s Open Internet Order, Congress should carefully consider whether to strip the FCC of regulatory authority in this area (including, of course, privacy practices) and reassign it to the FTC.  The D.C. Circuit’s decision upholding that Order, combined with the Ninth Circuit’s latest ruling, makes the case for potential action by the next Congress even more urgent.

While it is at it, the next Congress should also weigh whether to repeal the FTC’s common carrier exemption, as well as all special exemptions for specified categories of institutions, such as banks, savings and loans, and federal credit unions (see here).  In so doing, Congress might also do away with the Consumer Financial Protection Bureau, an unaccountable bureaucracy whose consumer protection regulatory responsibilities should cease (see my February 2016 Heritage Legal Memorandum here).

Finally, as Heritage Foundation scholars have urged, Congress should look into enacting additional regulatory reform legislation, such as requiring congressional approval of new major regulations issued by agencies (including financial services regulators) and subjecting “independent” agencies (including the FCC) to executive branch regulatory review.

That’s enough for now.  Stay tuned.

In recent years much ink has been spilled on the problem of online privacy breaches, involving the unauthorized use of personal information transmitted over the Internet.  Internet privacy concerns are warranted.  According to a 2016 National Telecommunications and Information Administration survey of Internet-using households, 19 percent of such households (representing nearly 19 million households) reported that they had been affected by an online security breach, identity theft, or similar malicious activity during the 12 months prior to the July 2015 survey.  Security breaches appear to be more common among the most intensive Internet-using households – 31 percent of those using at least five different types of online devices suffered such breaches.  Security breach statistics, of course, do not directly measure the consumer welfare losses attributable to the unauthorized use of personal data that consumers supply to Internet service providers and to the websites which they visit.

What is the correct overall approach government should take in dealing with Internet privacy problems?  In addressing this question, it is important to focus substantial attention on the effects of online privacy regulation on economic welfare.  In particular, policies should aim at addressing Internet privacy problems in a manner that does not unduly harm the private sector or deny opportunities to consumers who are not being harmed.  The U.S. Federal Trade Commission (FTC), the federal government’s primary consumer protection agency, has been the principal federal regulator of online privacy practices.  Very recently, however, the U.S. Federal Communications Commission (FCC) has asserted the authority to regulate the privacy practices of broadband Internet service providers, and is proposing an extremely burdensome approach to such regulation that would, if implemented, have harmful economic consequences.

In March 2016, FTC Commissioner Maureen Ohlhausen succinctly summarized the FTC’s general approach to online privacy-related enforcement under Section 5 of the FTC Act, which proscribes unfair or deceptive acts or practices:

[U]nfairness establishes a baseline prohibition on practices that the overwhelming majority of consumers would never knowingly approve. Above that baseline, consumers remain free to find providers that match their preferences, and our deception authority governs those arrangements. . . .  The FTC’s case-by-case enforcement of our unfairness authority shapes our baseline privacy practices.  Like the common law, this incremental approach has proven both relatively predictable and adaptable as new technologies and business models emerge.

In November 2015, Professor (and former FTC Commissioner) Joshua Wright argued that the FTC’s approach is insufficiently attuned to economic analysis, in particular to the “tradeoffs between the value to consumers and society of the free flow and exchange of data and the creation of new products and services on the one hand, against the value lost by consumers from any associated reduction in privacy.”  Nevertheless, on balance, FTC enforcement in this area generally is restrained and somewhat attentive to cost-benefit considerations.  (This undoubtedly reflects the fact (see my Heritage Legal Memorandum, here) that the statutory definition of “unfairness” in Section 5(n) of the FTC Act embodies cost-benefit analysis, and that the FTC’s Policy Statement on Deception requires detriment to consumers acting reasonably in the circumstances.)  In other words, federal enforcement policy with respect to online privacy, although it could be improved, is in generally good shape.

Or it was in good shape.  Unfortunately, on April 1, 2016, the FCC decided to inject itself into “privacy space” by issuing a Notice of Proposed Rulemaking entitled “Protecting the Privacy of Customers of Broadband and Other Telecommunications Services.”  This “Privacy NPRM” sets forth detailed rules that, if adopted, would impose onerous privacy obligations on “Broadband Internet Access Service” (BIAS) Providers, the firms that provide the cables, wires, and telecommunications equipment through which Internet traffic flows – primarily cable (Comcast, for example) and telephone (Verizon, for example) companies.  The Privacy NPRM rests on the FCC’s earlier reclassification of BIAS provision as a “common carrier” service, which totally precludes the FTC from regulating BIAS Providers’ privacy practices (since the FTC is barred by law from regulating common carriers, under 15 U.S. Code § 45(a)(2)).  Put simply, the NPRM would require BIAS Providers “to obtain express consent in advance of practically every use of a customer[’s] data,” without regard to the effects of such a requirement on economic welfare.  All other purveyors of Internet services, however – in particular, the large numbers of “edge providers” that generate Internet content and services (Google, Amazon, and Facebook, for example) – would be exempt from the new FCC regulatory requirements.  In short, the Privacy NPRM would establish a two-tier privacy regulatory system, with BIAS Providers subject to tight FCC privacy rules, while all other Internet service firms are subject to more nuanced, case-by-case, effects-based evaluation of their privacy practices by the FTC.  This disparate regulatory approach is peculiar (if not wholly illogical), since edge providers in general have greater access than BIAS Providers to consumers’ non-public information, and thus may appear to pose a greater threat to consumers’ interest in privacy.

The FCC’s proposal to regulate BIAS Providers’ privacy practices represents bad law and horrible economic policy.  First, it undermines the rule of law by extending the FCC’s authority beyond its congressional mandate.  It does this by basing its regulation of a huge universe of information exchanges on Section 222 of the Telecommunications Act of 1996, a narrow provision aimed at a very limited type of customer-related data obtained in connection with old-style voice telephony transmissions.  This is egregious regulatory overreach.  Second, if implemented, it will harm consumers, producers, and the overall economy by imposing a set of sweeping opt-in consent requirements on BIAS Providers, without regard to private sector burdens or actual consumer welfare (see here); by reducing BIAS Provider revenues and thereby dampening investment that is vital to the continued growth of and innovation in Internet-related industries (see here); by reducing the ability of BIAS Providers to exert welfare-enhancing competitive pressure on Internet edge providers (see here); and by raising consumer prices for Internet services and denying consumers discount programs they desire (see here).

What’s worse, the FCC’s proposed involvement in online privacy oversight comes at a time of increased Internet privacy regulation by foreign countries, much of it highly intrusive and lacking in economic sophistication.  A particularly noteworthy effort to clarify cross-national legal standards is the Privacy Shield, a 2016 United States – European Union agreement that establishes regulatory online privacy protection norms, backed by FTC enforcement, that U.S. companies transmitting data into Europe may choose to accept on a voluntary basis.  (If they do not accede to the Shield, they may be subject to uncertain and heavy-handed European sanctions.)  The Privacy NPRM, if implemented, will create an additional concern for BIAS Providers, since they will have to evaluate the implications of new FCC regulation (rather than simply rely on FTC oversight) in deciding whether to opt in to the Shield’s standards and obligations.

In sum, the FCC’s Privacy NPRM would, if implemented, harm consumers and producers, slow innovation, and offend the rule of law.  This prompts four recommendations.

  • The FCC should withdraw the NPRM and leave it to the FTC to oversee all online privacy practices, under its Section 5 unfairness and deception authority. The adoption of the Privacy Shield, which designates the FTC as the responsible American privacy oversight agency, further strengthens the case against FCC regulation in this area. 
  • In overseeing online privacy practices, the FTC should employ a very light touch that stresses economic analysis and cost-benefit considerations. Moreover, it should avoid requiring that rigid privacy policy conditions be kept in place for long periods of time through consent decree conditions, in order to allow changing market conditions to shape and improve business privacy policies. 
  • In addition, the FTC should borrow a page from former FTC Commissioner Joshua Wright by implementing an “economic approach” to privacy. Under such an approach:

    o FTC economists would help make the Commission a privacy “thought leader” by developing a rigorous academic research agenda on the economics of privacy, featuring the economic evaluation of industry sectors and practices;
    o the FTC would bear the burden of proof of showing that violations of a company’s privacy policy are material to consumer decision-making;
    o FTC economists would report independently to the FTC about proposed privacy-related enforcement initiatives; and
    o the FTC would publish the views of its Bureau of Economics in all privacy-related consent decrees that are placed on the public record.

  • The FTC should encourage the European Commission and other foreign regulators to take into account the economics of privacy in developing their privacy regulatory policies. In so doing, it should emphasize that innovation is harmed, the beneficial development of the Internet is slowed, and consumer welfare and rights are undermined through highly prescriptive regulation in this area (well-intentioned though it may be).  Relatedly, the FTC and other U.S. Government negotiators should argue against adoption of a “one-size-fits-all” global privacy regulation framework.   Such a global framework could harmfully freeze into place over-regulatory policies and preclude beneficial experimentation in alternative forms of “lighter-touch” regulation and enforcement. 

While no panacea, these recommendations would help deter (or, at least, constrain) the economically harmful government micromanagement of businesses’ privacy practices, in the United States and abroad.

In the wake of the recent decision upholding the FCC’s Open Internet Order (OIO), separation of powers issues should be at the forefront of everyone’s mind. In reaching its decision, the DC Circuit relied upon Chevron to justify its extreme deference to the FCC. The court held, for instance, that

Our job is to ensure that an agency has acted “within the limits of [Congress’s] delegation” of authority… and that its action is not “arbitrary, capricious, an abuse of discretion, or otherwise not in accordance with law.”… Critically, we do not “inquire as to whether the agency’s decision is wise as a policy matter; indeed, we are forbidden from substituting our judgment for that of the agency.”… Nor do we inquire whether “some or many economists would disapprove of the [agency’s] approach” because “we do not sit as a panel of referees on a professional economics journal, but as a panel of generalist judges obliged to defer to a reasonable judgment by an agency acting pursuant to congressionally delegated authority.”

The DC Circuit’s decision takes a broad view of Chevron deference and, in so doing, ignores or dismisses some of the limits placed upon the doctrine by cases like Michigan v. EPA and UARG v. EPA (though Judge Williams does bring up UARG in dissent).

Whatever one thinks of the validity of the FCC’s approach to regulating the Internet, there is no question that it has, at best, a weak statutory foothold. Without prejudging the merits of the OIO, or the question of deference to agencies that find “[regulatory] elephants in [statutory] mouseholes,”  such broad claims of authority, based on such limited statutory language, should give one pause. That the court upheld the FCC’s interpretation of the Act without expressing reservations, suggesting any limits, or admitting of any concrete basis for challenging the agency’s authority beyond circular references to “abuse of discretion” is deeply troubling.

Separation of powers is a fundamental feature of our democracy, and one that has undoubtedly contributed to the longevity of our system of self-governance. Not least among the important features of separation of powers is the ability of courts to review the lawfulness of legislation and executive action.

The founders presciently realized the dangers of allowing one part of the government to centralize power in itself. In Federalist 47, James Madison observed that

The accumulation of all powers, legislative, executive, and judiciary, in the same hands, whether of one, a few, or many, and whether hereditary, self-appointed, or elective, may justly be pronounced the very definition of tyranny. Were the federal Constitution, therefore, really chargeable with the accumulation of power, or with a mixture of powers, having a dangerous tendency to such an accumulation, no further arguments would be necessary to inspire a universal reprobation of the system. (emphasis added)

The modern administrative apparatus has become the sort of governmental body that the founders feared and that we have somehow grown to accept. The FCC is not alone in this: any member of the alphabet soup that constitutes our administrative state, whether “independent” or otherwise, is typically vested with great, essentially unreviewable authority over the economy and our daily lives.

As Justice Thomas so aptly put it in his must-read concurrence in Michigan v. EPA:

Perhaps there is some unique historical justification for deferring to federal agencies, but these cases reveal how paltry an effort we have made to understand it or to confine ourselves to its boundaries. Although we hold today that EPA exceeded even the extremely permissive limits on agency power set by our precedents, we should be alarmed that it felt sufficiently emboldened by those precedents to make the bid for deference that it did here. As in other areas of our jurisprudence concerning administrative agencies, we seem to be straying further and further from the Constitution without so much as pausing to ask why. We should stop to consider that document before blithely giving the force of law to any other agency “interpretations” of federal statutes.

Administrative discretion is fantastic — until it isn’t. If your party is the one in power, unlimited discretion gives your side the ability to run down a wish list, checking off controversial items that could never make it past a deliberative body like Congress. That same discretion, however, becomes a nightmare under extreme deference as political opponents, newly in power, roll back preferred policies. In the end, regulation tends toward the extremes, on both sides, and ultimately consumers and companies pay the price in the form of excessive regulatory burdens and extreme uncertainty.

In theory, it is (or should be) left to the courts to rein in agency overreach. Unfortunately, courts have been relatively unwilling to push back on the administrative state, leaving the task up to Congress. And Congress, too, has, over the years, found too much it likes in agency power to seriously take on the structural problems that give agencies effectively free rein. At least, until recently.

In March of this year, Representative Ratcliffe (R-TX) proposed HR 4768: the Separation of Powers Restoration Act (“SOPRA”). Arguably, this is the first real effort to fix the underlying problem since the 1995 “Comprehensive Regulatory Reform Act” (although, it should be noted, SOPRA is far more targeted than was the CRRA). Under SOPRA, 5 U.S.C. § 706 — the enacted portion of the APA that deals with judicial review of agency actions — would be amended to read as follows (the most important new language being the requirement that courts decide legal questions de novo):

(a) To the extent necessary to decision and when presented, the reviewing court shall determine the meaning or applicability of the terms of an agency action and decide de novo all relevant questions of law, including the interpretation of constitutional and statutory provisions, and rules made by agencies. Notwithstanding any other provision of law, this subsection shall apply in any action for judicial review of agency action authorized under any provision of law. No law may exempt any such civil action from the application of this section except by specific reference to this section.

These changes to the scope of review would operate as a much-needed check on the unlimited discretion that agencies currently enjoy. They give courts the ability to review “de novo all relevant questions of law,” which includes agencies’ interpretations of their own rules.

The status quo has created a negative feedback cycle. The Chevron doctrine, as it has played out, gives outsized incentives to both federal agencies and courts to essentially disregard Congress’s intended meaning for particular statutes. Today an agency can write rules and make decisions safe in the knowledge that Chevron will likely insulate it from any truly serious probing by a district court with regard to how well the agency’s action actually matches up with congressional intent or with even rudimentary cost-benefit analysis.

Defenders of the administrative state may balk at changing this state of affairs, of course. But defending an institution that is almost entirely immune from judicial and legal review seems to be a particularly hard row to hoe.

Public Knowledge, for instance, claims that

Judicial deference to agency decision-making is critical in instances where Congress’ intent is unclear because it balances each branch of government’s appropriate role and acknowledges the realities of the modern regulatory state.

To quote Justice Scalia, an unfortunate champion of the Chevron doctrine, this is “pure applesauce.”

The very core of the problem that SOPRA addresses is that the administrative state is not a proper branch of government — it’s a shadow system of quasi-legislation and quasi-legal review. Congress can be chastened by popular vote. Judges who abuse discretion can be overturned (or impeached). The administrative agencies, on the other hand, are insulated through doctrines like Chevron and Auer, and their personnel are subject, more or less, only to the political whims of the executive branch.

Even agencies directly under the control of the executive branch  — let alone independent agencies — become petrified caricatures of their original design as layers of bureaucratic rule and custom accrue over years, eventually turning the organization into an entity that serves, more or less, to perpetuate its own existence.

Other supporters of the status quo actually identify the unreviewable see-saw of agency discretion as a feature, not a bug:

Even people who agree with the anti-government premises of the sponsors [of SOPRA] should recognize that a change in the APA standard of review is an inapt tool for advancing that agenda. It is shortsighted, because it ignores the fact that, over time, political administrations change. Sometimes the administration in office will generally be in favor of deregulation, and in these circumstances a more intrusive standard of judicial review would tend to undercut that administration’s policies just as surely as it may tend to undercut a more progressive administration’s policies when the latter holds power. The APA applies equally to affirmative regulation and to deregulation.

But presidential elections — far from justifying this extreme administrative deference — actually make the case for trimming the sails of the administrative state. Presidential campaigns have come to turn, in important part, on how candidates promise to wield the immense regulatory power vested in the executive branch.

Thus, for example, as part of his presidential bid, Jeb Bush indicated he would use the EPA to roll back every policy that Obama had put into place. One of Donald Trump’s allies suggested that Trump “should turn off [CNN’s] FCC license” in order to punish the news agency. And VP hopeful Elizabeth Warren has suggested using the FDIC to limit the growth of financial institutions, and using the FCC and FTC to tilt the markets to make it easier for small companies to get an advantage over the “big guys.”

Far from being neutral, technocratic administrators of complex social and economic matters, administrative agencies have become one more political weapon of majority parties as they make the case for how their candidates will use all the power at their disposal — and more — to work their will.

As Justice Thomas, again, noted in Michigan v. EPA:

In reality…, agencies “interpreting” ambiguous statutes typically are not engaged in acts of interpretation at all. Instead, as Chevron itself acknowledged, they are engaged in the “formulation of policy.” Statutory ambiguity thus becomes an implicit delegation of rulemaking authority, and that authority is used not to find the best meaning of the text, but to formulate legally binding rules to fill in gaps based on policy judgments made by the agency rather than Congress.

And this is just the thing: SOPRA would bring far greater predictability and longevity to our legal system by imposing a system of accountability on the agencies. Currently, commissions often believe they can act with impunity (until the next election at least), and even the intended constraints of the APA frequently won’t do much to tether their whims to statute or law if they’re intent on deviating. Having a known constraint (or, at least, a reliable process by which judicial constraint may be imposed) on their behavior will make them think twice about exactly how legally and economically sound proposed rules and other actions are.

The administrative state isn’t going away, even if SOPRA were passed; it will continue to be the source of the majority of the rules under which our economy operates. We have long believed that a benefit of our judicial system is its consistency and relative lack of politicization. If this is a benefit for interpreting laws when agencies aren’t involved, it should also be a benefit when they are involved. Particularly as more and more law emanates from agencies rather than Congress, the oversight of largely neutral judicial arbiters is an essential check on the administrative apparatus’ “accumulation of all powers.”

The interest of judges tends to include a respect for the development of precedent that yields consistent and transparent rules for all future litigants and, more broadly, for economic actors and consumers making decisions in the shadow of the law. This is markedly distinct from agencies, which, more often than not, promote the particular, shifting, and often-narrow political sentiments of the day.

Whether a Republican- or a Democrat-appointed district judge reviews an agency action, that judge will be bound (more or less) by the precedent that came before, regardless of the judge’s individual political preferences. Contrast this with the FCC’s decision to reclassify broadband as a Title II service, for example, where previously it had been committed to the idea that broadband was an information service, subject to an entirely different — and far less onerous — regulatory regime.  Of course, the next FCC Chairman may feel differently, and nothing would stop another regulatory shift back to the pre-OIO status quo. Perhaps more troublingly, the enormous discretion afforded by courts under current standards of review would permit the agency to endlessly tweak its rules — forbearing from some regulations but not others, un-forbearing, re-interpreting, etc., with precious few judicial standards available to bring certainty to the rules or to ensure their fealty to the statute or the sound economics that is supposed to undergird administrative decisionmaking.

SOPRA, or a bill like it, would require the Commission to actually be accountable for its regulations, forcing it to undergo at least rudimentary economic analysis to justify its actions. This form of accountability can only be to the good.

The genius of our system is its (potential) respect for the rule of law. This is an issue that both sides of the aisle should be able to get behind: minority status is always just one election cycle away. We should all hope to see SOPRA — or some bill like it — gain traction, rooted in long-overdue reflection on just how comfortable we are as a polity with a bureaucratic system increasingly driven by unaccountable discretion.

Thanks to Geoff for the introduction. I look forward to posting a few things over the summer.

I’d like to begin by discussing Geoff’s post on the pending legislative proposals designed to combat strategic abuse of drug safety regulations to prevent generic competition. Specifically, I’d like to address the economic incentive structure that is in effect in this highly regulated market.

Like many others, I first noticed the abuse of drug safety regulations to prevent competition when Turing Pharmaceuticals—then led by now infamous CEO Martin Shkreli—acquired the manufacturing rights for the anti-parasitic drug Daraprim, and raised the price of the drug by over 5,000%. The result was a drug that cost $750 per tablet. Daraprim (pyrimethamine) is used to combat malaria and toxoplasma gondii infections in immune-compromised patients, especially those with HIV. The World Health Organization includes Daraprim on its “List of Essential Medicines” as a medicine important to basic health systems. After the huge price hike, the drug was effectively out of reach for many insurance plans and uninsured patients who needed it for the six to eight week course of treatment for toxoplasma gondii infections.
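For a sense of the arithmetic (a back-of-the-envelope check, using the widely reported pre-hike price of roughly $13.50 per tablet, a figure not stated above):

$$\frac{\$750 - \$13.50}{\$13.50} \approx 54.6,$$

that is, an increase of roughly 5,500%, consistent with the “over 5,000%” figure.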

It’s not unusual for drugs to sell at huge multiples above their manufacturing cost. Indeed, a primary purpose of patent law is to allow drug companies to earn sufficient profits to engage in the expensive and risky business of developing new drugs. But Daraprim was first sold in 1953 and thus has been off patent for decades. With no intellectual property protection, Daraprim should, theoretically, now be available from generic drug manufacturers for only a little above cost. Indeed, this is what we see in the rest of the world. Daraprim is available all over the world for very cheap prices. The per tablet price is 3 rupees (US$0.04) in India, R$0.07 (US$0.02) in Brazil, US$0.18 in Australia, and US$0.66 in the UK.

So what gives in the U.S.? Or rather, what does not give? What in our system of drug distribution has gotten stuck and is preventing generic competition from swooping in to compete down the high price of off-patent drugs like Daraprim? The answer is not market failure, but rather regulatory failure, as Geoff noted in his post. While generics would love to enter a market where a drug is currently selling for high profits, they cannot do so without getting FDA approval for their generic version of the drug at issue. To get approval, a generic simply has to file an Abbreviated New Drug Application (“ANDA”) that shows that its drug is equivalent to the branded drug with which it wants to compete. There’s no need for the generic to repeat the safety and efficacy tests that the brand manufacturer originally conducted. To test for equivalence, the generic needs samples of the brand drug. Without those samples, the generic cannot meet its burden of showing equivalence. This is where the strategic use of regulation can come into play.

Geoff’s post explains the potential abuse of Risk Evaluation and Mitigation Strategies (“REMS”). REMS are put in place to require certain safety steps (like testing a woman for pregnancy before prescribing a drug that can cause birth defects) or to restrict the distribution channels for dangerous or addictive drugs. As Geoff points out, there is evidence that a few brand name manufacturers have engaged in bad-faith refusals to provide samples using the excuse of REMS or restricted distribution programs to (1) deny requests for samples, (2) prevent generic manufacturers from buying samples from resellers, and (3) deny generics whose drugs have won approval access to the REMS system that is required for generics to distribute their drugs. Once the FDA has certified that a generic manufacturer can safely handle the drug at issue, there is no legitimate basis for the owners of brand name drugs to deny samples to the generic maker. Expressed worries about liability from entering joint REMS programs with generics also ring hollow, for the most part, and would be ameliorated by the pending legislation.

It’s important to note that this pricing situation is unique to drugs because of the regulatory framework surrounding drug manufacture and distribution. If a manufacturer of, say, an off-patent vacuum cleaner wants to prevent competitors from copying its vacuum cleaner design, it is unlikely to be successful. Even if the original manufacturer refuses to sell any vacuum cleaners to a competitor, and instructs its retailers not to sell either, this will be very difficult to monitor and enforce. Moreover, because of an unrestricted resale market, a competitor would inevitably be able to obtain samples of the vacuum cleaner it wishes to copy. Only patent law can successfully protect against the copying of a product sold to the general public, and when the patent expires, so too will the ability to prevent copying.

Drugs are different. The only way a consumer can resell prescription drugs is by breaking the law. Pills bought from an illegal secondary market would be useless to generics for purposes of FDA approval anyway, because the chain of custody would not exist to prove that the samples are the real thing. This means generics need to get samples from the authorized manufacturer or distribution company. When a drug is subject to a REMS-required restricted distribution program, it is even more difficult, if not impossible, for a generic maker to get samples of the drugs for which it wants to make generic versions. Restricted distribution programs, which are used for dangerous or addictive drugs, by design very tightly control the chain of distribution so that the drugs go only to patients with proper prescriptions from authorized doctors.

A troubling trend has arisen recently in which drug owners put their branded drugs into restricted distribution programs not because of any FDA REMS requirement, but instead as a method to prevent generics from obtaining samples and making generic versions of the drugs. This is the strategy that Turing used before it raised prices over 5,000% on Daraprim. And Turing isn’t the only company to use this strategy. It is being emulated by others, although perhaps not so conspicuously. For instance, in 2015 Valeant Pharmaceuticals (which, with the help of the hedge fund Pershing Square, had earlier mounted an unsuccessful hostile bid for Allergan) acquired the rights to two off-patent, life-saving heart drugs, placed them in restricted distribution programs, and raised their prices by 212% and 525% respectively. Others have followed suit.

A key component of the strategy to profit from hiking prices on off-patent drugs while avoiding competition from generics is to select drugs that do not currently have generic competitors. Sometimes this is because a drug has recently come off patent, and sometimes it is because the drug is for a small patient population, and thus generics haven’t bothered to enter the market given that brand name manufacturers generally drop their prices to close to cost after the drug comes off patent. But with the strategic control of samples and refusals to allow generics to enter REMS programs, the (often new) owners of the brand name drugs seek to prevent the generic competition that we count on to make products cheap and plentiful once their patent protection expires.

Most brand name drug makers do not engage in withholding samples from generics and abusing restricted distribution and REMS programs. But the few that do cost patients and insurers dearly for important medicines that should be much cheaper once they go off patent. More troubling still is the recent strategy of taking drugs that have been off patent and cheap for years, and abusing the regulatory regime to raise prices and block competition. This growing trend of abusing restricted distribution and REMS to facilitate rent extraction from drug purchasers needs to be corrected.

Two bills addressing this issue are pending in Congress. Both bills (1) require drug companies to provide samples to generics after the FDA has certified the generic, (2) require drug companies to enter into shared REMS programs with generics, (3) allow generics to set up their own REMS compliant systems, and (4) exempt drug companies from liability for sharing products and REMS-compliant systems with generic companies in accordance with the steps set out in the bills. When it comes to remedies, however, the Senate version is significantly better. The penalties provided in the House bill are both vague and overly broad. The bill provides for treble damages and costs against the drug company “of the kind described in section 4(a) of the Clayton Act.” Not only is the application of the Clayton Act unclear in the context of the heavily regulated market for drugs (see Trinko), but treble damages may over-deter reasonably restrictive behavior by drug companies when it comes to distributing dangerous drugs.

The remedies in the Senate version are very well crafted to deter rent-seeking behavior while not overly deterring reasonable behavior. The remedial scheme is particularly good because it punishes most heavily those companies that attempt to make exorbitant profits on drugs by denying generic entry. The Senate version provides, as a remedy for unreasonable delay, that the plaintiff shall be awarded attorneys’ fees, costs, and the defending drug company’s profits on the drug at issue during the time of the unreasonable delay. This means that a brand name drug company that sells an old drug for a low price and delays sharing only because of honest concern about the safety standards of a particular generic company will not face terribly high damages if it is found unreasonable. On the other hand, a company that sends the price of an off-patent drug soaring and then attempts to block generic entry will know that it can lose all of its rent-seeking profits, plus the cost of the victorious generic company’s attorneys’ fees. This vastly reduces the incentive for the company owning the brand name drug to raise prices and keep competitors out. It likewise greatly increases the incentive of a generic company to enter the market and, if it is unreasonably blocked, to file a civil action the result of which would be to transfer the excess profits to the generic. This provides a rather elegant fix to the regulatory gaming in this area that has become an increasing problem. The balancing of interests and incentives in the Senate bill should leave many congresspersons feeling comfortable supporting the bill.
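A stylized way to see the incentive effect (the notation here is mine, not the bills’): let $\Pi$ denote the brand owner’s profits on the drug during a period of unreasonable delay, $F$ the generic’s attorneys’ fees, and $C$ its costs. Under the Senate remedy, the brand owner’s net payoff from delaying is

$$\Pi - (\Pi + F + C) = -(F + C) < 0,$$

so unreasonable delay is a losing proposition no matter how high the mark-up, while a firm that delays in good faith over safety concerns, and sells at ordinary prices, risks only a comparatively modest sum.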