
Earlier this week I testified before the U.S. House Subcommittee on Commerce, Manufacturing, and Trade regarding several proposed FTC reform bills.

You can find my written testimony here. That testimony was drawn from a 100-page report, authored by Berin Szoka and me, entitled “The Federal Trade Commission: Restoring Congressional Oversight of the Second National Legislature — An Analysis of Proposed Legislation.” In the report we assess 9 of the 17 proposed reform bills in great detail, and offer a host of suggested amendments or additional reform proposals that, we believe, would help make the FTC more accountable to the courts. As I discuss in my oral remarks, that judicial oversight was part of the original plan for the Commission, and an essential part of ensuring that its immense discretion is effectively directed toward protecting consumers as technology and society evolve around it.

The report is “Report 2.0” of the FTC: Technology & Reform Project, which was convened by the International Center for Law & Economics and TechFreedom with an inaugural conference in 2013. Report 1.0 lays out some background on the FTC and its institutional dynamics, identifies the areas of possible reform at the agency, and suggests the key questions/issues each of them raises.

The text of my oral remarks follows, or, if you prefer, you can watch them here:

Chairman Burgess, Ranking Member Schakowsky, and Members of the Subcommittee, thank you for the opportunity to appear before you today.

I’m Executive Director of the International Center for Law & Economics, a non-profit, non-partisan research center. I’m a former law professor, I used to work at Microsoft, and I had what a colleague once called the most illustrious FTC career ever — because, at approximately 2 weeks, it was probably the shortest.

I’m not typically one to advocate active engagement by Congress in anything (no offense). But the FTC is different.

Despite Congressional reforms, the FTC remains the closest thing we have to a second national legislature. Its jurisdiction covers nearly every company in America. Section 5, at its heart, runs just 20 words — leaving the Commission enormous discretion to make policy decisions that are essentially legislative.

The courts were supposed to keep the agency on course. But they haven’t. As Former Chairman Muris has written, “the agency has… traditionally been beyond judicial control.”

So it’s up to Congress to monitor the FTC’s processes, and tweak them when the FTC goes off course, which is inevitable.

This isn’t a condemnation of the FTC’s dedicated staff. Rather, this one-way ratchet of ever-expanding discretion is simply the nature of the beast.

Yet too many people lionize the status quo. They see any effort to change the agency from the outside as an affront. It’s as if Congress were struck by a bolt of lightning in 1914 and the Perfect Platonic Agency sprang forth.

But in the real world, an agency with massive scope and discretion needs oversight — and feedback on how its legal doctrines evolve.

So why don’t the courts play that role? Companies essentially always settle with the FTC because of its exceptionally broad investigatory powers, its relatively weak standard for voting out complaints, and the fact that those decisions effectively aren’t reviewable in federal court.

Then there’s the fact that the FTC sits in judgment of its own prosecutions. So even if a company doesn’t settle and actually wins before the ALJ, FTC staff still wins 100% of the time before the full Commission.

Able though FTC staffers are, this can’t be from sheer skill alone.

Whether by design or by neglect, the FTC has become, as Chairman Muris again described it, “a largely unconstrained agency.”

Please understand: I say this out of love. To paraphrase Churchill, the FTC is the “worst form of regulatory agency — except for all the others.”

Eventually Congress had to course-correct the agency — to fix the disconnect and to apply its own pressure to refocus Section 5 doctrine.

So a heavily Democratic Congress pressured the Commission to adopt the Unfairness Policy Statement in 1980. The FTC promised to restrain itself by balancing the perceived benefits of its unfairness actions against the costs, and not acting when injury is insignificant or consumers could have reasonably avoided injury on their own. It is, inherently, an economic calculus.
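That calculus can be written down compactly. Here is a rough formalization of the Unfairness Policy Statement’s three-part test (the notation is mine, offered only as a sketch; the Statement itself is prose):

```latex
% A rough formalization of the unfairness calculus (my own notation).
% I(a) = consumer injury from practice a; B(a) = countervailing benefits
% to consumers or competition; A(a) = the injury is reasonably avoidable
% by consumers themselves. The Commission may act against a only if:
\[
I(a) \text{ is substantial}
\;\;\wedge\;\;
I(a) > B(a)
\;\;\wedge\;\;
\neg A(a)
\]
```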

But while the Commission pays lip service to the test, you’d be hard-pressed to identify how (or whether) the test is actually implemented in practice. Meanwhile, the agency has essentially nullified the “materiality” requirement that it volunteered in its 1983 Deception Policy Statement.

Worst of all, Congress failed to anticipate that the FTC would resume exercising its vast discretion through what it now proudly calls its “common law of consent decrees” in data security cases.

Combined with a flurry of recommended best practices in reports that function as quasi-rulemakings, these settlements have enabled the FTC to circumvent both Congressional rulemaking reforms and meaningful oversight by the courts.

The FTC’s data security settlements aren’t an evolving common law. They’re a static statement of “reasonable” practices, repeated about 55 times over the past 14 years. At this point, it’s reasonable to assume that they apply to all circumstances — much like a rule (which is, more or less, the opposite of the common law).

Congressman Pompeo’s SHIELD Act would help curtail this practice, especially if amended to include consent orders and reports. It would also help focus the Commission on the actual elements of the Unfairness Policy Statement — which should be codified through Congressman Mullin’s SURE Act.

Significantly, only one data security case has actually come before an Article III court. The FTC trumpets Wyndham as an out-and-out win. But it wasn’t. In fact, the court agreed with Wyndham on the crucial point that prior consent orders were of little use in trying to understand the requirements of Section 5.

More recently the FTC suffered another rebuke. While it won its product design suit against Amazon, the court rejected the Commission’s “fencing in” request to permanently hover over the company and micromanage practices that Amazon had already ended.

As the FTC grapples with such cutting-edge legal issues, it’s drifting away from the balance it promised Congress.

But Congress can’t fix these problems simply by telling the FTC to take its bedrock policy statements more seriously. Instead it must regularly reassess the process that’s allowed the FTC to avoid meaningful judicial scrutiny. The FTC requires significant course correction if its model is to move closer to a true “common law.”

[Below is an excellent essay by Devlin Hartline that was first posted at the Center for the Protection of Intellectual Property blog last week, and I’m sharing it here.]

ACKNOWLEDGING THE LIMITATIONS OF THE FTC’S “PAE” STUDY

By Devlin Hartline

The FTC’s long-awaited case study of patent assertion entities (PAEs) is expected to be released this spring. Using its subpoena power under Section 6(b), the FTC gathered information from a handful of firms, and the study promises us a glimpse at their inner workings. But while the results may be interesting, they’ll also be too narrow to support any informed policy changes. And you don’t have to take my word for it—the FTC admits as much. In one submission to the Office of Management and Budget (OMB), which ultimately decided whether the study should move forward, the FTC acknowledges that its findings “will not be generalizable to the universe of all PAE activity.” In another submission to the OMB, the FTC recognizes that “the case study should be viewed as descriptive and probative for future studies seeking to explore the relationships between organizational form and assertion behavior.”

However, this doesn’t mean that no one will use the study to advocate for drastic changes to the patent system. Even before the study’s release, many people—including some FTC Commissioners themselves—have already jumped to conclusions when it comes to PAEs, arguing that they are a drag on innovation and competition. Yet these same people say that we need this study because there’s no good empirical data analyzing the systemic costs and benefits of PAEs. They can’t have it both ways. The uproar about PAEs is emblematic of the broader movement that advocates for the next big change to the patent system before we’ve even seen how the last one panned out. In this environment, it’s unlikely that the FTC and other critics will responsibly acknowledge that the study simply cannot give us an accurate assessment of the bigger picture.

Limitations of the FTC Study 

Many scholars have written about the study’s fundamental limitations. As statistician Fritz Scheuren points out, there are two kinds of studies: exploratory and confirmatory. An exploratory study is a starting point that asks general questions in order to generate testable hypotheses, while a confirmatory study is then used to test the validity of those hypotheses. The FTC study, with its open-ended questions to a handful of firms, is a classic exploratory study. At best, the study will generate answers that could help researchers begin to form theories and design another round of questions for further research. Scheuren notes that while the “FTC study may well be useful at generating exploratory data with respect to PAE activity,” it “is not designed to confirm supportable subject matter conclusions.”

One significant constraint with the FTC study is that the sample size is small—only twenty-five PAEs—and the control group is even smaller—a mixture of fifteen manufacturers and non-practicing entities (NPEs) in the wireless chipset industry. Scheuren reasons that there “is also the risk of non-representative sampling and potential selection bias due to the fact that the universe of PAEs is largely unknown and likely quite diverse.” And the fact that the control group comes from one narrow industry further prevents any generalization of the results. Scheuren concludes that the FTC study “may result in potentially valuable information worthy of further study,” but that it is “not designed in a way as to support public policy decisions.”

Professor Michael Risch questions the FTC’s entire approach: “If the FTC is going to the trouble of doing a study, why not get it done right the first time and a) sample a larger number of manufacturers, in b) a more diverse area of manufacturing, and c) get identical information?” He points out that the FTC won’t be well-positioned to draw conclusions because the control group is not even being asked the same questions as the PAEs. Risch concludes that “any report risks looking like so many others: a static look at an industry with no benchmark to compare it to.” Professor Kristen Osenga echoes these same sentiments and notes that “the study has been shaped in a way that will simply add fuel to the anti–‘patent troll’ fire without providing any data that would explain the best way to fix the real problems in the patent field today.”

Osenga further argues that the study is flawed since the FTC’s definition of PAEs perpetuates the myth that patent licensing firms are all the same. The reality is that many different types of businesses fall under the “PAE” umbrella, and it makes no sense to impute the actions of a small subset to the entire group when making policy recommendations. Moreover, Osenga questions the FTC’s “shortsighted viewpoint” of the potential benefits of PAEs, and she doubts how the “impact on innovation and competition” will be ascertainable given the questions being asked. Anne Layne-Farrar expresses similar doubts about the conclusions that can be drawn from the FTC study since only licensors are being surveyed. She posits that it “cannot generate a full dataset for understanding the conduct of the parties in patent license negotiation or the reasons for the failure of negotiations.”

Layne-Farrar concludes that the FTC study “can point us in fruitful directions for further inquiry and may offer context for interpreting quantitative studies of PAE litigation, but should not be used to justify any policy changes.” Consistent with the FTC’s own admissions of the study’s limitations, this is the real bottom line of what we should expect. The study will have no predictive power because it only looks at how a small sample of firms affects a few other players within the patent ecosystem. It does not quantify how that activity ultimately affects innovation and competition—the very information needed to support policy recommendations. The FTC study is not intended to produce the sort of compelling statistical data that can be extrapolated to the larger universe of firms.

FTC Commissioners Put Cart Before Horse

The FTC has a history of bias against PAEs, as demonstrated in its 2011 report that skeptically questioned the “uncertain benefits” of PAEs while assuming their “detrimental effects” in undermining innovation. That report recommended special remedy rules for PAEs, even as the FTC acknowledged the lack of objective evidence of systemic failure and the difficulty of distinguishing “patent transactions that harm innovation from those that promote it.” With its new study, the FTC concedes to the OMB that much is still not known about PAEs and that the findings will be preliminary and non-generalizable. However, this hasn’t prevented some Commissioners from putting the cart before the horse with PAEs.

In fact, the very call for the FTC to institute the PAE study started with its conclusion. In her 2013 speech suggesting the study, FTC Chairwoman Edith Ramirez recognized that “we still have only snapshots of the costs and benefits of PAE activity” and that “we will need to learn a lot more” in order “to see the full competitive picture.” While acknowledging the vast potential benefits of PAEs in rewarding invention, benefiting competition and consumers, reducing enforcement hurdles, increasing liquidity, encouraging venture capital investment, and funding R&D, she nevertheless concluded that “PAEs exploit underlying problems in the patent system to the detriment of innovation and consumers.” And despite the admitted lack of data, Ramirez stressed “the critical importance of continuing the effort on patent reform to limit the costs associated with some types of PAE activity.”

This position is duplicitous: If the costs and benefits of PAEs are still unknown, what justifies Ramirez’s rushed call for immediate action? While benefits have to be weighed against costs, it’s clear that she’s already jumped to the conclusion that the costs outweigh the benefits. In another speech a few months later, Ramirez noted that the “troubling stories” about PAEs “don’t tell us much about the competitive costs and benefits of PAE activity.” Despite this admission, Ramirez called for “a much broader response to flaws in the patent system that fuel inefficient behavior by PAEs.” And while Ramirez said that understanding “the PAE business model will inform the policy dialogue,” she stated that “it will not change the pressing need for additional progress on patent reform.”

Likewise, in an early 2014 speech, Commissioner Julie Brill ignored the study’s inherent limitations and exploratory nature. She predicted that the study “will provide a fuller and more accurate picture of PAE activity” that “will be put to good use by Congress and others who examine closely the activities of PAEs.” Remarkably, Brill stated that “the FTC and other law enforcement agencies” should not “wait on the results of the 6(b) study before undertaking enforcement actions against PAE activity that crosses the line.” Even without the study’s results, she thought that “reforms to the patent system are clearly warranted.” In Brill’s view, the study would only be useful for determining whether “additional reforms are warranted” to curb the activities of PAEs.

It appears that these Commissioners have already decided—in the absence of any reliable data on the systemic effects of PAE activity—that drastic changes to the patent system are necessary. Given their clear bias in this area, there is little hope that they will acknowledge the deep limitations of the study once it is released.

Commentators Jump the Gun

Unsurprisingly, many supporters of the study have filed comments with the FTC arguing that the study is needed to fill the huge void in empirical data on the costs and benefits associated with PAEs. Some even simultaneously argue that the costs of PAEs far outweigh the benefits, suggesting that they have already jumped to their conclusion and just want the data to back it up. Despite the study’s serious limitations, these commentators appear primed to use it to justify their foregone policy recommendations.

For example, the Consumer Electronics Association applauded “the FTC’s efforts to assess the anticompetitive harms that PAEs cause on our economy as a whole,” and it argued that the study “will illuminate the many dimensions of PAEs’ conduct in a way that no other entity is capable.” At the same time, it stated that “completion of this FTC study should not stay or halt other actions by the administrative, legislative or judicial branches to address this serious issue.” The Internet Commerce Coalition stressed the importance of the study of “PAE activity in order to shed light on its effects on competition and innovation,” and it admitted that without the information, “the debate in this area cannot be empirically based.” Nonetheless, it presupposed that the study will uncover “hidden conduct of and abuses by PAEs” and that “it will still be important to reform the law in this area.”

Engine Advocacy admitted that “there is very little broad empirical data about the structure and conduct of patent assertion entities, and their effect on the economy.” It then argued that PAE activity “harms innovators, consumers, startups and the broader economy.” The Coalition for Patent Fairness called on the study “to contribute to the understanding of policymakers and the public” concerning PAEs, which it claimed “impose enormous costs on U.S. innovators, manufacturers, service providers, and, increasingly, consumers and end-users.” And to those suggesting “the potentially beneficial role of PAEs in the patent market,” it stressed that “reform be guided by the principle that the patent system is intended to incentivize and reward innovation,” not “rent-seeking” PAEs that are “exploiting problems.”

The joint comments of Public Knowledge, Electronic Frontier Foundation, & Engine Advocacy emphasized the fact that information about PAEs “currently remains limited” and that what is “publicly known largely consists of lawsuits filed in court and anecdotal information.” Despite admitting that “broad empirical data often remains lacking,” the groups also suggested that the study “does not mean that legislative efforts should be stalled” since “the harms of PAE activity are well known and already amenable to legislative reform.” In fact, they contended not only that “a problem exists,” but that there’s even “reason to believe the scope is even larger than what has already been reported.”

Given this pervasive and unfounded bias against PAEs, there’s little hope that these and other critics will acknowledge the study’s serious limitations. Instead, it’s far more likely that they will point to the study as concrete evidence that even more sweeping changes to the patent system are in order.

Conclusion

While the FTC study may generate interesting information about a handful of firms, it won’t tell us much about how PAEs affect competition and innovation in general. The study is simply not designed to do this. It instead is a fact-finding mission, the results of which could guide future missions. Such empirical research can be valuable, but it’s very important to recognize the limited utility of the information being collected. And it’s crucial not to draw policy conclusions from it. Unfortunately, if the comments of some of the Commissioners and supporters of the study are any indication, many critics have already made up their minds about the net effects of PAEs, and they will likely use the study to perpetuate the biased anti-patent fervor that has captured so much attention in recent years.


Yesterday a federal district court in Washington state granted the FTC’s motion for summary judgment against Amazon in FTC v. Amazon — the case alleging unfair trade practices in Amazon’s design of the in-app purchases interface for apps available in its mobile app store. The headlines score the decision as a loss for Amazon, and the FTC, of course, claims victory. But the court also granted Amazon’s motion for partial summary judgment on a significant aspect of the case, and the Commission’s win may be decidedly pyrrhic.

While the district court (very wrongly, in my view) essentially followed the FTC in deciding that a well-designed user experience doesn’t count as a consumer benefit for assessing substantial harm under the FTC Act, it rejected the Commission’s request for a permanent injunction against Amazon. It also called into question the FTC’s calculation of monetary damages. These last two may be huge. 

The FTC may have “won” the case, but it’s becoming increasingly apparent why it doesn’t want to take these cases to trial. First in Wyndham, and now in Amazon, courts have begun to chip away at the FTC’s expansive Section 5 discretion, even while handing the agency nominal victories.

The Good News

The FTC largely escapes judicial oversight in cases like these because its targets almost always settle (Amazon is a rare exception). These settlements — consent orders — typically impose detailed 20-year injunctions and give the FTC ongoing oversight of the companies’ conduct for the same period. The agency has wielded the threat of these consent orders as a powerful tool to micromanage tech companies, and it currently has consent orders in place with Twitter, Google, Apple, Facebook, and several other companies.

As I wrote in a WSJ op-ed on these troubling consent orders:

The FTC prefers consent orders because they extend the commission’s authority with little judicial oversight, but they are too blunt an instrument for regulating a technology company. For the next 20 years, if the FTC decides that Google’s product design or billing practices don’t provide “express, informed consent,” the FTC could declare Google in violation of the new consent decree. The FTC could then impose huge penalties—tens or even hundreds of millions of dollars—without establishing that any consumer had actually been harmed.

Yesterday’s decision makes that outcome less likely. Companies will be much less willing to succumb to the FTC’s 20-year oversight demands if they know that courts may refuse the FTC’s injunction request and accept companies’ own, independent and market-driven efforts to address consumer concerns — without any special regulatory micromanagement.

In the same vein, while the court did find that Amazon was liable for repayment of unauthorized charges made without “express, informed authorization,” it also found the FTC’s monetary damages calculation questionable and asked for further briefing on the appropriate amount. If, as seems likely, it ultimately refuses to simply accept the FTC’s damages claims, that, too, will take some of the wind out of the FTC’s sails. Other companies have settled with the FTC and agreed to 20-year consent decrees in part, presumably, because of the threat of excessive damages if they litigate. That, too, is now less likely to happen.

Collectively, these holdings should help to force the FTC to better target its complaints to cases of ongoing and truly harmful practices — the things the FTC Act was really meant to address, like actual fraud. Tech companies trying to navigate ever-changing competitive waters by carefully constructing their user interfaces and payment mechanisms (among other things) shouldn’t be treated the same way as fraudulent phishing scams.

The Bad News

The court’s other key holding is problematic, however. In essence, the court, like the FTC, seems to believe that regulators are better than companies’ product managers, designers and engineers at designing app-store user interfaces:

[A] clear and conspicuous disclaimer regarding in-app purchases and request for authorization on the front-end of a customer’s process could actually prove to… be more seamless than the somewhat unpredictable password prompt formulas rolled out by Amazon.

Never mind that Amazon has undoubtedly spent tremendous resources researching and designing the user experience in its app store. And never mind that — as Amazon is certainly aware — a consumer’s experience of a product is make-or-break in the cut-throat world of online commerce, advertising and search (just ask Jet).

Instead, for the court (and the FTC), the imagined mechanism of “affirmatively seeking a customer’s authorized consent to a charge” is all benefit and no cost. Whatever design decisions may have informed the way Amazon decided to seek consent are either irrelevant, or else the user-experience benefits they confer are negligible.

As I’ve written previously:

Amazon has built its entire business around the “1-click” concept — which consumers love — and implemented a host of notification and security processes hewing as much as possible to that design choice, but nevertheless taking account of the sorts of issues raised by in-app purchases. Moreover — and perhaps most significantly — it has implemented an innovative and comprehensive parental control regime (including the ability to turn off all in-app purchases) — Kindle Free Time — that arguably goes well beyond anything the FTC required in its Apple consent order.

Amazon is not abdicating its obligation to act fairly under the FTC Act and to ensure that users are protected from unauthorized charges. It’s just doing so in ways that also take account of the costs such protections may impose — particularly, in this case, on the majority of Amazon customers who didn’t and wouldn’t suffer such unauthorized charges.

Amazon began offering Kindle Free Time in 2012 as an innovative solution to a problem — children’s access to apps and in-app purchases — that affects only a small subset of Amazon’s customers. To dismiss that effort without considering that Amazon might have made a perfectly reasonable judgment that balanced consumer protection and product design disregards the cost-benefit balancing required by Section 5 of the FTC Act.

Moreover, the FTC Act imposes liability only for harms that are not “reasonably avoidable.” Kindle Free Time is an outstanding example of an innovative mechanism that allows consumers at risk of unauthorized purchases by children to “reasonably avoid” harm. The court’s and the FTC’s disregard for it is inconsistent with the statute.

Conclusion

The court’s willingness to endorse the FTC’s blackboard design “expertise” (such as it is) as a basis for second-guessing user-interface and other design decisions made by firms competing in real markets is unfortunate. But there’s a significant silver lining. By reining in the FTC’s discretion to go after these companies as if they were common fraudsters, the court has given consumers an important victory. After all, it is consumers who otherwise bear the costs (both directly and as a result of reduced risk-taking and innovation) of the FTC’s largely unchecked ability to extract excessive concessions from its enforcement targets.

The FCC doesn’t have authority over the edge and doesn’t want authority over the edge. Well, that is, until it finds itself with no choice but to regulate the edge as a result of its own policies. As the FCC begins to explore its new authority to regulate privacy under the Open Internet Order (“OIO”), for instance, it will run up against policy conflicts and inconsistencies that will make it increasingly hard to justify forbearance from regulating edge providers.

Take for example the recently announced NPRM titled “Expanding Consumers’ Video Navigation Choices” — a proposal that seeks to force cable companies to provide video programming to third-party set-top box manufacturers. Under the proposed rules, MVPDs would be required to expose three data streams to competitors: (1) listing information about what is available to particular customers; (2) the rights associated with accessing such content; and (3) the actual video content. As Geoff Manne has aptly noted, this seems to be much more of an effort to eliminate the “nightmare” of “too many remote controls” than it is to actually expand consumer choice in a market that is essentially drowning in consumer choice. But of course even so innocuous a goal—which is probably more about picking on cable companies because… “eww cable companies”—suggests some very important questions.
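To make the proposed mechanics concrete, here is a minimal sketch of those three information flows modeled as simple data records. The field names are my own illustration; the NPRM mandates the three streams but, to my knowledge, does not prescribe any particular schema:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical schema for illustration only; the NPRM specifies three
# information flows, not field names or formats.

@dataclass
class ListingInfo:
    """Stream 1: what programming is available to a particular customer."""
    channel: str
    title: str
    start_time: str

@dataclass
class EntitlementInfo:
    """Stream 2: the rights associated with accessing the content."""
    program_id: str
    recordable: bool
    expiration: str

@dataclass
class ContentStream:
    """Stream 3: the actual video content."""
    program_id: str
    uri: str

@dataclass
class MVPDFeed:
    """What an MVPD would expose to a certified third-party device maker."""
    listings: List[ListingInfo] = field(default_factory=list)
    entitlements: List[EntitlementInfo] = field(default_factory=list)
    streams: List[ContentStream] = field(default_factory=list)
```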

First, the market for video on cable systems is governed by a highly interdependent web of contracts that assures a wide variety of parties that their bargained-for rights are respected. Among other things, channels negotiate for particular placements and channel numbers in a cable system’s lineup, IP rights holders bargain for content to be made available only at certain times and at certain locations, and advertisers pay for their ads to be inserted into channel streams and broadcasts.

Moreover, to a large extent, the content industry develops its content based on a stable regime of bargained-for contractual terms with cable distribution networks (among others). Disrupting the ability of cable companies to control access to their video streams will undoubtedly alter the underlying assumptions upon which IP companies rely when planning and investing in content development. And, of course, the physical networks and their related equipment have been engineered around the current cable-access regimes. Some non-trivial amount of re-engineering will have to take place to make cable networks compatible with a more “open” set-top box market.

The FCC nods to these concerns in its NPRM, when it notes that its “goal is to preserve the contractual arrangements between programmers and MVPDs, while creating additional opportunities for programmers[.]” But this aspiration is not clearly given effect in the NPRM, and, as noted, some contractual arrangements are simply inconsistent with the NPRM’s approach.

Second, the FCC proposes to bind third-party manufacturers to the public interest privacy commitments in §§ 629, 551 and 338(i) of the Communications Act (“Act”) through a self-certification process. MVPDs would be required to pass the three data streams to third-party providers only once such a certification is received. To the extent that these sections, enforced via self-certification, do not sufficiently curtail third-parties’ undesirable behavior, the FCC appears to believe that “the strictest state regulatory regime[s]” and the “European Union privacy regulations” will serve as the necessary regulatory gap fillers.

This seems hard to believe, however, particularly given the recently announced privacy and cybersecurity NPRM, through which the FCC will adopt rules detailing the agency’s new authority (under the OIO) to regulate privacy at the ISP level. Largely, these rules will grow out of §§ 222 and 201 of the Act, which the FCC in Terracom interpreted together to be a general grant of privacy and cybersecurity authority.

I’m apprehensive about the asserted scope of the FCC’s power over privacy — let alone cybersecurity — under §§ 222 and 201. In truth, the FCC makes an admirable showing of its reasoning in Terracom; it does a far better job than the FTC in similar enforcement actions. But there remains a problem. The FTC’s authority is fundamentally cabined by the limitations contained within the FTC Act (even if it frequently chooses to ignore them, they are there and are theoretically a protection against overreach).

But the FCC’s enforcement decisions are restrained (if at all) by a vague “public interest” mandate, and a claim that it will enforce these privacy principles on a case-by-case basis. Thus, the FCC’s proposed regime is inherently one based on vast agency discretion. As in many other contexts, enforcers with wide discretion and a tremendous power to penalize exert a chilling effect on innovation and openness, as well as a frightening power over a tremendous swath of the economy. For the FCC to claim anything like an unbounded UDAP authority for itself has got to be outside of the archaic grant of authority from § 201, and is certainly a long stretch for the language of § 706 (a provision of the Act that the FCC used as one of the fundamental justifications for the OIO) — leading very possibly to a bout of Chevron problems under precedent such as King v. Burwell and UARG v. EPA.

And there is a real risk here of, if not hypocrisy, then… deep conflict in the way the FCC will strike out on the set-top box and privacy NPRMs. The Commission has already noted in its NPRM that it will not be able to bind third-party providers of set-top boxes under the same privacy requirements that apply to current MVPD providers. Self-certification will go a certain length, but even there, agitation from privacy absolutists may well sway the FCC to consider more stringent requirements. For instance, §§ 551 and 338 of the Act — which the FCC focuses on in the set-top box NPRM — are really only about disclosing intended uses of consumer data. And disclosures can come in many forms, including burying them in long terms of service that customers frequently do not read. Such “weak” guarantees of consumer privacy will likely become a frequent source of complaint (and FCC filings) for privacy absolutists.

Further, many of the new set-top box entrants are going to be current providers of OTT video or devices that redistribute OTT video. And many of these providers make a huge share of their revenue from data mining and selling access to customer data. Which means one of two things: Either the FCC is going to just allow us to live in a world of double standards where these self-certifying entities are permitted significantly more leeway in their uses of consumer data than MVPD providers or, alternatively, the FCC is going to discover that it does in fact need to “do something.” If only there were a creative way to extend the new privacy authority under Title II to these providers of set-top boxes…. Oh! There is: bring edge providers into the regulation fold under the OIO.

It’s interesting that Wheeler’s announcement of the FCC’s privacy NPRM explicitly noted that the rules would not be extended to edge providers. That Wheeler felt the need to be explicit about this suggests that he believes that the FCC has the authority to extend the privacy regulations to edge providers, but that it will merely forbear (for now) from doing so.

If edge providers are swept into the scope of Title II they would be subject to the brand new privacy rules the FCC is proposing. Thus, despite itself (or perhaps not), the FCC may find itself in possession of a much larger authority over some edge providers than any of the pro-Title II folks would have dared admit was possible. And the hook (this time) could be the privacy concerns embedded in the FCC’s ill-advised attempt to “open” the set-top box market.

This is a complicated set of issues, and it’s contingent on a number of moving parts. This week, Chairman Wheeler will be facing an appropriations hearing where I hope he will be asked to unpack his thinking regarding the true extent to which the OIO may in fact be extended to the edge.

Thanks to the Truth on the Market bloggers for having me. I’m a long-time fan of the blog, and excited to be contributing.

The Third Circuit will soon review the appeal of generic drug manufacturer Mylan Pharmaceuticals in the latest case involving “product hopping” in the pharmaceutical industry — Mylan Pharmaceuticals v. Warner Chilcott.

Product hopping occurs when brand pharmaceutical companies shift their marketing efforts from an older version of a drug to a new, substitute drug in order to stave off competition from cheaper generics. This strategy is the predictable business response to the incentives created by the arduous FDA approval process, patent law, and state automatic substitution laws. It costs brand companies an average of $2.6 billion to bring a new drug to market, but only 20 percent of marketed brand drugs ever earn enough to recoup these costs. Moreover, once their patent exclusivity period is over, brand companies face the likely loss of 80-90 percent of their sales to generic versions of the drug under state substitution laws that allow or require pharmacists to automatically substitute a generic-equivalent drug when a patient presents a prescription for a brand drug. Because generics are automatically substituted for brand prescriptions, generic companies typically spend very little on advertising, instead choosing to free ride on the marketing efforts of brand companies. Rather than hand over a large chunk of their sales to generic competitors, brand companies often decide to shift their marketing efforts from an existing drug to a new drug with no generic substitutes.
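Some quick back-of-the-envelope arithmetic, using the figures above plus a purely hypothetical pre-expiry revenue number, shows how steep the post-expiry cliff is:

```python
# Illustrative arithmetic based on the figures cited above. The $500M
# annual revenue figure is hypothetical, chosen only for demonstration.
avg_dev_cost = 2.6e9     # average cost to bring a new drug to market
recoup_share = 0.20      # share of marketed brand drugs that recoup that cost
annual_revenue = 500e6   # hypothetical pre-expiry brand-drug revenue

print(f"Only {recoup_share:.0%} of marketed brand drugs ever recoup "
      f"the ${avg_dev_cost / 1e9:.1f}B average development cost.")

for generic_capture in (0.80, 0.90):
    retained = annual_revenue * (1 - generic_capture)
    print(f"At {generic_capture:.0%} generic capture, the brand retains "
          f"${retained / 1e6:.0f}M of ${annual_revenue / 1e6:.0f}M per year.")
```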

Generic company Mylan is appealing U.S. District Judge Paul S. Diamond’s April decision to grant defendant and brand company Warner Chilcott’s summary judgment motion. Mylan and other generic manufacturers contend that Defendants engaged in a strategy to impede generic competition for branded Doryx (an acne medication) by executing several product redesigns and ceasing promotion of prior formulations. Although the plaintiffs generally changed their products to keep up with the brand-drug redesigns, they contend that these redesigns were intended to circumvent automatic substitution laws, at least for the periods of time before the generic companies could introduce a substitute to new brand drug formulations. The plaintiffs argue that product redesigns that prevent generic manufacturers from benefitting from automatic substitution laws violate Section 2 of the Sherman Act.

Product redesign is not per se anticompetitive. Retiring an older branded version of a drug does not block generics from competing; they are still able to launch and market their own products. Product redesign only makes competition tougher because generics can no longer free ride on automatic substitution laws; instead they must either engage in their own marketing efforts or redesign their product to match the brand drug’s changes. Moreover, product redesign does not affect a primary source of generics’ customers—beneficiaries that are channeled to cheaper generic drugs by drug plans and pharmacy benefit managers.

The Supreme Court has repeatedly concluded that “the antitrust laws…were enacted for the protection of competition not competitors” and that even monopolists have no duty to help a competitor. The district court in Mylan generally agreed with this reasoning, concluding that the brand company Defendants did not exclude Mylan and other generics from competition: “Throughout this period, doctors remained free to prescribe generic Doryx; pharmacists remained free to substitute generics when medically appropriate; and patients remained free to ask their doctors and pharmacists for generic versions of the drug.” Instead, the court argued that Mylan was a “victim of its own business strategy”—a strategy that relied on free riding off brand companies’ marketing efforts rather than on spending its own money on marketing. The court reasoned that automatic substitution laws provide a regulatory “bonus” and denying Mylan the opportunity to take advantage of that bonus is not anticompetitive.

Product redesign should give rise to antitrust claims only if it is combined with some other wrongful conduct, or if the new product is clearly a “sham” innovation. Indeed, Senior Judge Douglas Ginsburg and then-FTC Commissioner Joshua D. Wright recently came out against imposing competition law sanctions on product redesigns that are not sham innovations. If lawmakers are concerned that product redesigns will reduce generic usage and the cost savings they create, they could follow the lead of several states that have broadened automatic substitution laws to allow the substitution of generics that are therapeutically equivalent but not identical in other ways, such as dosage form or drug strength.

Mylan is now asking the Third Circuit to reexamine the case. If the Third Circuit reverses the lower court’s decision, it would imply that brand drug companies have a duty to continue selling superseded drugs in order to allow generic competitors to take advantage of automatic substitution laws. If the Third Circuit upholds the district court’s ruling on summary judgment, it will likely create a circuit split between the Second and Third Circuits. In July 2015, the Second Circuit upheld an injunction in NY v. Actavis that required a brand company to continue manufacturing and selling an obsolete drug until after generic competitors had an opportunity to launch their generic versions and capture a significant portion of the market through automatic substitution laws. I’ve previously written about the duty created in this case.

Regardless of whether the Third Circuit’s decision causes a split, the Supreme Court should take up the issue of product redesign in pharmaceuticals to provide guidance to brand manufacturers that currently operate in a world of uncertainty and under the constant threat of litigation for decisions they make when introducing new products.

On October 7, 2015, the Senate Judiciary Committee held a hearing on the “Standard Merger and Acquisition Reviews Through Equal Rules” (SMARTER) Act of 2015.  As former Antitrust Modernization Commission Chair (and former Acting Assistant Attorney General for Antitrust) Deborah Garza explained in her testimony, “[t]he premise of the SMARTER Act is simple:  A merger should not be treated differently depending on which antitrust enforcement agency – DOJ or the FTC – happens to review it.  Regulatory outcomes should not be determined by a flip of the merger agency coin.”

Ms. Garza is clearly correct.  Both the U.S. Justice Department (DOJ) and the U.S. Federal Trade Commission (FTC) enforce the federal antitrust merger review provision, Section 7 of the Clayton Act, and employ a common set of substantive guidelines (last revised in 2010) to evaluate merger proposals.  Neutral “rule of law” principles indicate that private parties should expect to have their proposed mergers subject to the same methods of assessment and an identical standard of judicial review, regardless of which agency reviews a particular transaction.  (The two agencies decide by mutual agreement which agency will review any given merger proposal.)

Unfortunately, however, that is not the case today.  The FTC’s independent ability to challenge mergers administratively, combined with the difference in statutory injunctive standards that apply to FTC and DOJ merger reviews, means that a particular merger application may face more formidable hurdles if reviewed by the FTC, rather than DOJ.  The SMARTER Act would commendably eliminate these two differences by subjecting the FTC to current DOJ standards.  The SMARTER Act would not deal with a third difference – the fact that DOJ merger consent decrees, but not FTC merger consent decrees, must be filed with a federal court for “public interest” review.  This commentary briefly addresses those three issues.  The first and second ones present significant “rule of law” problems, in that they involve differences in statutory language applied to the same conduct.  The third issue, the question of judicial review of settlements, is of a different nature, but nevertheless raises substantial policy concerns.

1. FTC Administrative Authority

The first rule of law problem stems from the broader statutory authority the FTC possesses to challenge mergers.  In merger cases, while DOJ typically consolidates actions for a preliminary and permanent injunction in district court, the FTC merely seeks a preliminary injunction (which is easier to obtain than a permanent injunction) and “holds in its back pocket” the ability to challenge a merger in an FTC administrative proceeding – a power DOJ does not possess.  In short, the FTC subjects proposed mergers to a different and more onerous method of assessment than DOJ.  In Ms. Garza’s words (footnotes deleted):

“Despite the FTC’s legal ability to seek permanent relief from the district court, it prefers to seek a preliminary injunction only, to preserve the status quo while it proceeds with its administrative litigation.

This approach has great strategic significance. First, the standard for obtaining a preliminary injunction in government merger challenges is lower than the standard for obtaining a permanent injunction. That is, it is easier to get a preliminary injunction.

Second, as a practical matter, the grant of a preliminary injunction is typically sufficient to end the matter. In nearly every case, the parties will abandon their transaction rather than incur the heavy cost and uncertainty of trying to hold the merger together through further proceedings—which is why merging parties typically seek to consolidate proceedings for preliminary and permanent relief under Rule 65(a)(2). Time is of the essence. As one witness testified before the [Antitrust Modernization Commission], “it is a rare seller whose business can withstand the destabilizing effect of a year or more of uncertainty” after the issuance of a preliminary injunction.

Third, even if the court denies the FTC its preliminary injunction and the parties close their merger, the FTC can still continue to pursue an administrative challenge with an eye to undoing or restructuring the transaction. This is the “heads I win, tails you lose” aspect of the situation today. It is very difficult for the parties to get to the point of a full hearing in court given the effect of time on transactions, even with the FTC’s expedited administrative procedures adopted in about 2008. . . . 

[Moreover,] [while] [u]nder its new procedures, parties can move to dismiss an administrative proceeding if the FTC has lost a motion for preliminary injunction and the FTC will consider whether to proceed on a case-by-case basis[,] . . . th[is] [FTC] policy could just as easily change again, unless Congress speaks.”

Typically time is of the essence in proposed mergers, so substantial delays occasioned by extended reviews of those transactions may prevent many transactions from being consummated, even if they eventually would have passed antitrust muster.  Ms. Garza’s testimony, plus testimony by former Deputy Assistant Attorney General for Antitrust Abbott (Tad) Lipsky, documents cases of substantial delay in FTC administrative reviews of merger proposals.  (As Mr. Lipsky explained, “[a]ntitrust practitioners have long perceived that the possibility of continued administrative litigation by the FTC following a court decision constitutes a significant disincentive for parties to invest resources in transaction planning and execution.”)  Congress should weigh these delay-specific costs, as well as the direct costs of any additional burdens occasioned by FTC administrative procedures, in deciding whether to require the FTC (like DOJ) to rely solely on federal court proceedings.

2. Differences Between FTC and DOJ Injunctive Standards

The second rule of law problem arises from the lighter burden the FTC must satisfy to obtain injunctive relief in federal court.  Under Section 13(b) of the FTC Act, an injunction shall be granted the FTC “[u]pon a proper showing that, weighing the equities and considering the Commission’s likelihood of success, such action would be in the public interest.”  The D.C. Circuit (in FTC v. H.J. Heinz Co. and in FTC v. Whole Foods Market, Inc.) has stated that, to meet this burden, the FTC need merely have raised questions “so serious, substantial, difficult and doubtful as to make them fair ground for further investigation.”  By contrast, as Ms. Garza’s testimony points out, “under Section 15 of the Clayton Act, courts generally apply a traditional equities test requiring DOJ to show a reasonable likelihood of success on the merits—not merely that there is ‘fair ground for further investigation.’”  In a similar vein, Mr. Lipsky’s testimony stated that “[t]he cumulative effect of several recent contested merger decisions has been to allow the FTC to argue that it needn’t show likelihood of success in order to win a preliminary injunction; specifically these decisions suggest that the Commission need only show ‘serious, substantial, difficult and doubtful’ questions regarding the merits.”  Although some commentators have contended that, in reality, the two standards generally will be interpreted in a similar fashion (“whatever theoretical difference might exist between the FTC and DOJ standards has no practical significance”), there is no doubt that the language of the two standards is different – and basic principles of statutory construction indicate that differences in statutory language should be given meaning and not ignored.  Accordingly, merging parties face the real prospect that they might fare worse under federal court review of an FTC challenge to their merger proposal than they would have fared had DOJ challenged the same transaction.  Such an outcome, even if it is rare, would be at odds with neutral application of the rule of law.

3. The Tunney Act

Finally, helpful as it is, the SMARTER Act does not entirely eliminate the disparate treatment of proposed mergers by DOJ and the FTC.  The Tunney Act, 15 U.S.C. § 16, enacted in 1974, which applies to DOJ but not to the FTC, requires that DOJ submit all proposed consent judgments under the antitrust laws (including Section 7 of the Clayton Act) to a federal district court for 60 days of public comment before they may be entered.

a.  Economic Costs (and Potential Benefits) of the Tunney Act

The Tunney Act potentially interjects uncertainty into the nature of the “deal” struck between merging parties and DOJ in merger cases.  It does this by subjecting proposed DOJ merger settlements (as well as DOJ’s non-merger civil antitrust settlements) to a 60-day public review period, requiring federal judges to determine whether a proposed settlement is “in the public interest” before entering it, and instructing the court to consider the impact of the entry of judgment “upon competition and upon the public generally.”  Leading antitrust practitioners have noted that this uncertainty “could affect shareholders, customers, or even employees. Moreover, the merged company must devote some measure of resources to dealing with the Tunney Act review—resources that instead could be devoted to further integration of the two companies or generation of any planned efficiencies or synergies.”  More specifically:

“[W]hile Tunney Act proceedings are pending, a merged company may have to consider how its post-close actions and integration could be perceived by the court, and may feel the need to compete somewhat less aggressively, lest its more muscular competitive actions be taken by the court, amici, or the public at large to be the actions of a merged company exercising enhanced market power. Such a distortion in conduct probably was not contemplated by the Tunney Act’s drafters, but merger partners will need to be cognizant of how their post-close actions may be perceived during Tunney Act review. . . .”

Although the Tunney Act has been justified on traditional “public interest” grounds, even one of its scholarly supporters (a DOJ antitrust attorney), in praising its purported benefits, has acknowledged its potential for abuse:

“Properly interpreted and applied, the Tunney Act serves a number of related, useful functions. The disclosure provisions and judicial approval requirement for decrees can help identify, and more importantly deter, “influence peddling” and other abuses. The notice-and-comment procedures force the DOJ to explain its rationale for the settlement and provide its answers to objections, thus providing transparency. They also provide a mechanism for third-party input, and, thus, a way to identify and correct potentially unnoticed problems in a decree. Finally, the court’s public interest review not only helps ensure that the decree benefits the public, it also allows the court to protect itself against ambiguous provisions and enforcement problems and against an objectionable or pointless employment of judicial power. Improperly applied, the Tunney Act does more harm than good. When a district court takes it upon itself to investigate allegations not contained in a complaint, or attempts to “re-settle” a case to provide what it views as stronger, better relief, or permits lengthy, unfocused proceedings, the Act is turned from a useful check to an unpredictable, costly burden.”

The justifications presented by the author are open to serious question.  Whether “influence peddling” can be detected merely from the filing of proposed decree terms is doubtful – corrupt deals to settle a matter presumably would be done “behind the scenes” in a manner not available to public scrutiny.  The economic expertise and detailed factual knowledge that informs a DOJ merger settlement cannot be fully absorbed by a judge (who may fall prey to his or her personal predilections as to what constitutes good policy) during a brief review period.  “Transparency” that facilitates “third-party input” can too easily be manipulated by rent-seeking competitors who will “trump up” justifications for blocking an efficient merger.  Moreover, third parties who are opposed to mergers in general may also be expected to file objections to efficient arrangements.  In short, the “sunshine” justification for Tunney Act filings is more likely to cloud the evaluation of DOJ policy calls than to provide clarity.

b.  Constitutional Issues Raised by the Tunney Act

In addition to potential economic inefficiencies, the judicial review feature of the Tunney Act raises serious separation of powers issues, as emphasized by the DOJ Office of Legal Counsel (OLC, which advises the Attorney General and the President on questions of constitutional interpretation) in a 1989 opinion regarding qui tam provisions of the False Claims Act:

“There are very serious doubts as to the constitutionality . . . of the Tunney Act:  it intrudes into the Executive power and requires the courts to decide upon the public interest – that is, to exercise a policy discretion normally reserved to the political branches.  Three Justices of the Supreme Court questioned the constitutionality of the Tunney Act in Maryland v. United States, 460 U.S. 1001 (1983) (Rehnquist, J., joined by Burger, C.J., and White, J., dissenting).”

Notably, this DOJ critique of the Tunney Act was written before the 2004 amendments to that statute that specifically empower courts to consider the impact of proposed settlements “upon competition and upon the public generally” – language that significantly trenches upon Executive Branch prerogatives.  Admittedly, the Tunney Act has withstood judicial scrutiny – no court has ruled it unconstitutional.   Moreover, a federal judge can only accept or reject a Tunney Act settlement, not rewrite it, somewhat ameliorating its affront to the separation of powers.  In short, even though it may not be subject to serious constitutional challenge in the courts, the Tunney Act is problematic as a matter of sound constitutional policy.

c.  Congressional Reexamination of the Tunney Act

These economic and constitutional policy concerns suggest that Congress may wish to carefully reexamine the merits of the Tunney Act.  Any such reexamination, however, should be independent of, and not delay expedited consideration of, the SMARTER Act.  The Tunney Act, although of undoubted significance, is only a tangential aspect of the divergent legal standards that apply to FTC and DOJ merger reviews.  It is beyond the scope of current legislative proposals but it merits being taken up at an appropriate time – perhaps in the next Congress.  When Congress turns to the Tunney Act, it may wish to consider four options:  (1) repealing the Act in its entirety; (2) retaining the Act as is; (3) partially repealing it only with respect to merger reviews; or (4) applying it in full force to the FTC.  A detailed evaluation of those options is beyond the scope of this commentary.

Conclusion

In sum, in order to eliminate inconsistencies between FTC and DOJ standards for reviewing proposed mergers, Congress should give serious consideration to enacting the SMARTER Act, which would both eliminate FTC administrative review of merger proposals and subject the FTC to the same injunctive standard as the DOJ in judicial review of those proposals.  Moreover, if the SMARTER Act is enacted, Congress should also consider going further and amending the Tunney Act to make it apply to FTC as well as DOJ merger settlements – or, alternatively, to have it apply to no merger settlements at all (a result that would better respect the constitutional separation of powers and reduce a potential source of economic inefficiency).

Applying antitrust law to combat “hold-up” attempts (involving demands for “anticompetitively excessive” royalties) or injunctive actions brought by standard essential patent (SEP) owners is inherently problematic, as multiple scholars have explained (see here and here, for example).  Disputes regarding compensation to SEP holders are better handled in patent infringement and breach of contract lawsuits; adding antitrust to the mix imposes unnecessary costs, may discourage participation in standard setting, and may harm innovation.  What’s more, as FTC Commissioner Maureen Ohlhausen and former FTC Commissioner Joshua Wright have pointed out (citing research), empirical evidence suggests there is no systematic problem with hold-up.  Indeed, to the contrary, a recent empirical study by professors from Stanford, Berkeley, and the University of the Andes, accepted for publication in the Journal of Competition Law and Economics, finds that SEP-reliant industries have the fastest quality-adjusted price declines in the U.S. economy – a result totally at odds with theories of SEP-related competitive harm.  Thus, a cost-benefit approach that seeks to maximize the welfare benefits of antitrust enforcement strongly militates against continuing to pursue “SEP abuse” cases.  If enforcers are truly concerned about maximizing consumer welfare, they should instead focus on more traditional investigations that ferret out conduct far more likely to be welfare-inimical.

But are the leaders at the U.S. Department of Justice Antitrust Division (DOJ) and the Federal Trade Commission (FTC) paying any attention?  The most recent public reports are not encouraging.

In a very recent filing with the U.S. International Trade Commission (ITC), FTC Chairwoman Edith Ramirez stated that “the danger that bargaining conducted in the shadow of an [ITC] exclusion order will lead to patent hold-up is real.”  (Comparable to injunctions, ITC exclusion orders preclude the importation of items that infringe U.S. patents.  They are the only effective remedy the ITC can give for patent infringement, since the ITC cannot assess damages or royalties.)  She thus argued that, before issuing an exclusion order, the ITC should require an SEP holder to show that the infringer is unwilling or unable to enter into a patent license on “fair, reasonable, and non-discriminatory” (FRAND) terms – a new and major burden on the vindication of patent rights.  In justifying this burden, Chairwoman Ramirez pointed to Motorola’s allegedly excessive SEP royalty demands from Microsoft – $6-$8 per gaming console, as opposed to a federal district court finding that pennies per console was the appropriate amount.  She also cited LSI Semiconductor’s demand for royalties that exceeded the selling price of Realtek’s standard-compliant product, whereas a federal district court found the appropriate royalty to be only 0.19% of the product’s selling price.  But these two examples do not support Chairwoman Ramirez’s point – quite the contrary.  The fact that high initial royalty requests subsequently are slashed by patent courts shows that the patent litigation system is working, not that antitrust enforcement is needed, or that a special burden of proof must be placed on SEP holders.  Moreover, wide gaps between opening positions are to be expected as part of the normal back-and-forth of negotiation.  Indeed, if anything, the extremely modest judicial royalty assessments in these cases raise the concern that SEP holders are being undercompensated, not overcompensated.

A recent speech by DOJ Assistant Attorney General for Antitrust (AAG) William J. Baer, delivered at the International Bar Association’s Competition Conference, suffers from the same sort of misunderstanding as Chairwoman Ramirez’s ITC filing.  Stating that “[h]old up concerns are real”, AAG Baer cited the two examples described by Chairwoman Ramirez.  He also mentioned the fact that Innovatio requested a royalty rate of over $16 per smart tablet for its SEP portfolio, but was awarded a rate of less than 10 cents per unit by the court.  While admitting that the implementers “proved victorious in court” in those cases, he asserted that “not every implementer has the wherewithal to litigate”, that “[s]ometimes implementers accede to licensors’ demands, fearing exclusion and costly litigation”, that “consumers can be harmed and innovation incentives are distorted”, and that therefore “[a] future of exciting new products built atop existing technology may be . . . deferred”.  These theoretical concerns are belied by the lack of empirical support for hold-up, and are contradicted by the recent finding, previously noted, that SEP-reliant industries have the fastest quality-adjusted price declines in the U.S. economy.  (In addition, the implementers of patented technology tend to be large corporations; AAG Baer’s assertion that some may not have “the wherewithal to litigate” is a bare proposition unsupported by empirical evidence or more nuanced analysis.)  In short, DOJ, like the FTC, is advancing an argument that undermines, rather than bolsters, the case for applying antitrust to SEP holders’ efforts to defend their patent rights.
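
The magnitudes involved are worth making explicit.  Using only the figures quoted above (and treating them as illustrative, since opening demands and final awards are not always strictly comparable), the gap between demanded and adjudicated rates can be computed directly:

\[
\text{Innovatio: } \frac{\text{rate demanded}}{\text{rate awarded}} > \frac{\$16.00}{\$0.10} = 160
\]
\[
\text{LSI/Realtek: } \frac{\text{rate demanded}}{\text{rate awarded}} > \frac{100\%\ \text{of selling price}}{0.19\%\ \text{of selling price}} \approx 526
\]

That courts routinely compress royalty demands by two orders of magnitude is, on the argument advanced here, evidence that adjudication – not antitrust – is doing the work of disciplining royalty requests.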

Ideally, the FTC and DOJ should reevaluate their recent obsession with allegedly abusive unilateral SEP behavior and refocus their attention on truly serious competitive problems.  (Chairwoman Ramirez and AAG Baer are both outstanding and highly experienced lawyers who are well-versed in policy analysis; one would hope that they would be open to reconsidering current FTC and DOJ policy toward SEPs, in light of hard evidence.)  Doing so would benefit consumer welfare and innovation – which are, after all, the goals that those important agencies are committed to promoting.

On August 24, the Third Circuit issued its much anticipated decision in FTC v. Wyndham Worldwide Corp., holding that the U.S. Federal Trade Commission (FTC) has authority to challenge cybersecurity practices under its statutory “unfairness” authority.  This case brings into focus both legal questions regarding the scope of the FTC’s cybersecurity authority and policy questions regarding the manner in which that authority should be exercised.

1.     Wyndham: An Overview

Rather than “reinventing the wheel,” let me begin by quoting at length from Gus Hurwitz’s excellent summary of the relevant considerations in this case:

In 2012, the FTC sued Wyndham Worldwide, the parent company and franchisor of the Wyndham brand of hotels, arguing that its allegedly lax data security practices allowed hackers to repeatedly break into its franchisees’ computer systems. The FTC argued that these breaches resulted in harm to consumers totaling over $10 million in fraudulent activity. The FTC brought its case under Section 5 of the FTC Act, which declares “unfair and deceptive acts and practices” to be illegal. The FTC’s basic arguments are that it was, first, deceptive for Wyndham – which had a privacy policy indicating how it handled customer data – to assure consumers that the company took industry-standard security measures to protect customer data; and second, independent of any affirmative assurances that customer data was safe, it was unfair for Wyndham to handle customer data in an insecure way.

This case arose in the broader context of the FTC’s efforts to establish a general law of data security. Over the past two decades, the FTC has begun aggressively pursuing data security claims against companies that suffer data breaches. Almost all of these cases have settled out of court, subject to consent agreements with the FTC. The Commission points to these agreements, along with other public documents that it views as guidance, as creating a “common law of data security.” Responding to a request from the Third Circuit for supplemental briefing on this question, the FTC asserted in no uncertain terms its view that “the FTC has acted under its procedures to establish that unreasonable data security practices that harm consumers are indeed unfair within the meaning of Section 5.”

Shortly after the FTC’s case was filed, Wyndham asked the District Court judge to dismiss the case, arguing that the FTC didn’t have authority under Section 5 to take action against a firm that had suffered a criminal theft of its data. The judge denied this motion. But, recognizing the importance and uncertainty of part of the issue – the scope of the FTC’s “unfairness” authority – she allowed Wyndham to immediately appeal that part of her decision. The Third Circuit agreed to hear the appeal, framing the question as whether the FTC has authority to regulate cybersecurity under its Section 5 “unfairness” authority, and, if so, whether the FTC’s application of that authority satisfied Constitutional Due Process requirements. Oral arguments were heard last March, and the court’s opinion was issued on Monday [August 24]. . . . 

In its opinion, the Court of Appeals rejects Wyndham’s arguments that its data security practices cannot be unfair. As such, the case will be allowed to proceed to determine whether Wyndham’s security practices were in fact “unfair” under Section 5. . . .

Recall the setting in which this case arose: the FTC has spent more than a decade trying to create a general law of data security. The reason this case was – and still is – important is because Wyndham was challenging the FTC’s general law of data security.

But the court, in the second part of its opinion, accepts Wyndham’s arguments that the FTC has not developed such a law. This is central to the court’s opinion, because different standards apply to interpretations of laws that courts have developed as opposed to those that agencies have developed. The court outlines these standards, explaining that “a higher standard of fair notice applies [in the context of agency rules] than in the typical civil statutory interpretation case because agencies engage in interpretation differently than courts.”

The court goes on to find that Wyndham had sufficient notice of the requirements of Section 5 under the standard that applies to judicial interpretations of statutes. And it expressly notes that, should the district court decide that the higher standard applies – that is, if the court agrees to apply the general law of data security that the FTC has tried to develop in recent years – the court will need to reevaluate whether the FTC’s rules pass Constitutional muster. That review would be subject to the tougher standard applied to agency interpretations of statutes.

Stressing the Third Circuit’s statement that the FTC had failed to explain how it had “informed the public that it needs to look at [FTC] complaints and consent decrees for guidance[,]” Gus concludes that the Third Circuit’s opinion indicates that the FTC “has lost its war to create a general law of data security” based merely on its prior actions.  According to Gus:

The takeaway, it seems, is that the FTC does have the power to take action against bad security practices, but if it wants to do so in a way that shapes industry norms and legal standards – if it wants to develop a general law of data security – a patchwork of consent decrees and informal statements is insufficient to the task. Rather, it must either pursue its cases to a decision on the merits or develop legally binding rules through . . . rulemaking procedures.

2.     Wyndham’s Implications for the Scope of the FTC’s Legal Authority

I highly respect Gus’s trenchant legal and policy analysis of Wyndham.  I believe, however, that it may somewhat understate the strength of the FTC’s legal position going forward.  The Third Circuit also explained (citations omitted):

Wyndham is only entitled to notice of the meaning of the statute and not to the agency’s interpretation of the statute. . . . 

[Furthermore,] Wyndham is entitled to a relatively low level of statutory notice for several reasons. Subsection 45(a) [of the FTC Act, which states “unfair acts or practices” are illegal] does not implicate any constitutional rights here. . . .  It is a civil rather than criminal statute. . . .  And statutes regulating economic activity receive a “less strict” test because their “subject matter is often more narrow, and because businesses, which face economic demands to plan behavior carefully, can be expected to consult relevant legislation in advance of action.” . . . .  In this context, the relevant legal rule is not “so vague as to be ‘no rule or standard at all.’” . . . .  Subsection 45(n) [of the FTC Act, as a prerequisite to a finding of unfairness,] asks whether “the act or practice causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition.” While far from precise, this standard informs parties that the relevant inquiry here is a cost-benefit analysis, . . . that considers a number of relevant factors, including the probability and expected size of reasonably unavoidable harms to consumers given a certain level of cybersecurity and the costs to consumers that would arise from investment in stronger cybersecurity. We acknowledge there will be borderline cases where it is unclear if a particular company’s conduct falls below the requisite legal threshold. But under a due process analysis a company is not entitled to such precision as would eliminate all close calls. . . .  Fair notice is satisfied here as long as the company can reasonably foresee that a court could construe its conduct as falling within the meaning of the statute. . . . 

[In addition, in 2007, the FTC issued a guidebook on business data security, which] could certainly have helped Wyndham determine in advance that its conduct might not survive the [§ 45(n)] cost-benefit analysis.  Before the [cybersecurity] attacks [on Wyndham’s network], the FTC also filed complaints and entered into consent decrees in administrative cases raising unfairness claims based on inadequate corporate cybersecurity. . . .  That the FTC Commissioners – who must vote on whether to issue a complaint . . . – believe that alleged cybersecurity practices fail the cost-benefit analysis of § 45(n) certainly helps companies with similar practices apprehend the possibility that their cybersecurity could fail as well.

In my view, a fair reading of this Third Circuit language is that:  (1) courts should read key provisions of the FTC Act to encompass cybersecurity practices that the FTC finds are not cost-beneficial; and (2) the FTC’s history of guidance and consent decrees regarding cybersecurity gives companies sufficient notice of the sorts of cybersecurity practices the FTC may challenge.  Based on that reading, I conclude that even if a court adopts a very exacting standard for reviewing the FTC’s interpretation of its own statute, the FTC is likely to succeed in future case-specific cybersecurity challenges, assuming that it builds a solid factual record grounded in cost-benefit analysis.  Whether other Circuits would agree with the Third Circuit’s analysis is, of course, open to debate (I myself suspect that they probably would).
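
To make the court’s description of the § 45(n) inquiry concrete, it can be sketched – in my own stylized notation, loosely analogous to the Hand formula from negligence law – as an expected-cost comparison:

\[
p(s) \cdot L \;>\; B(s)
\]

where \(p(s)\) is the probability of a breach given the firm’s chosen level of cybersecurity \(s\), \(L\) is the magnitude of the reasonably unavoidable harm to consumers from a breach, and \(B(s)\) is the burden – including costs ultimately passed through to consumers – of investing in security stronger than \(s\).  On this reading, a practice is a candidate for unfairness only when expected unavoidable consumer harm exceeds the countervailing costs of avoiding it; the borderline cases the court acknowledges are those where the two sides of the inequality are close.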

3.     Sound Policy in Light of Wyndham

Apart from our slightly different “takes” on the legal implications of the Third Circuit’s Wyndham decision, I fully agree with Gus that, as a policy matter, the FTC’s “patchwork of consent decrees and informal statements is insufficient to the task” of building a general law of cybersecurity.  In a 2014 Heritage Foundation Legal Memorandum on the FTC and cybersecurity, I stated:

The FTC’s regulation of business systems by decree threatens to stifle innovation by companies related to data security and to impose costs that will be passed on in part to consumers. Missing from the consent decree calculus is the question of whether the benefits in diminished data security breaches justify those costs—a question that should be at the heart of unfairness analysis. There are no indications that the FTC has even asked this question in fashioning data security consents, let alone made case-specific cost-benefit analyses. This is troubling.

Equally troubling is the fact that the FTC apparently expects businesses to divine from a large number of ad hoc, fact-specific consent decrees with varying provisions what they must do vis-à-vis data security to avoid possible FTC targeting. The uncertainty engendered by sole reliance on complicated consent decrees for guidance (in the absence of formal agency guidelines or litigated court decisions) imposes additional burdens on business planners. . . .

[D]ata security investigations that are not tailored to the size and capacity of the firm may impose competitive disadvantages on smaller rivals in industries in which data protection issues are paramount.

Moreover, it may be in the interest of very large firms to support costlier and more intrusive FTC data security initiatives, knowing that they can better afford the adoption of prohibitively costly data security protocols than their smaller competitors can. This is an example of a “raising rivals’ costs” strategy, which reduces competition by crippling or eliminating rivals.

Given these and related concerns (including the failure of existing FTC reports to give appropriate guidance), I concluded, among other recommendations, that:

[T]he FTC should issue data security guidelines that clarify its enforcement policy regarding data security breaches pursuant to Section 5 of the Federal Trade Commission Act. Such guidelines should be framed solely as limiting principles that tie the FTC’s hands to avoid enforcement excesses. They should studiously avoid dictating to industry the data security principles that firms should adopt. . . .

[T]he FTC should [also] employ a strict cost-benefit analysis before pursuing any new regulatory initiatives, legislative recommendations, or investigations related to other areas of data protection, such as data brokerage or the uses of big data.

In sum, the Third Circuit’s Wyndham decision, while interesting, in no way alters the fact that the FTC’s existing cybersecurity enforcement program is inadequate and unsound.  Whether through guidelines or formal FTC rules (which carry their own costs, including the risk of establishing inflexible standards that ignore future changes in business conditions and technology), the FTC should provide additional guidance to the private sector, rooted in sound cost-benefit analysis.  The FTC should also be ever mindful of the costs it imposes on the economy (including potential burdens on business innovation) whenever it considers bringing enforcement actions in this area.

4.     Conclusion

The debate over the appropriate scope of federal regulation of business cybersecurity programs will continue to rage, as serious data breaches receive public attention and the FTC considers new initiatives.  Let us hope that, as we move forward, federal regulators will fully take into account costs as well as benefits – including, in particular, the risk that federal overregulation will undermine innovation, harm businesses, and weaken the economy.

As the organizer of this retrospective on Josh Wright’s tenure as FTC Commissioner, I have the (self-conferred) honor of closing out the symposium.

When Josh was confirmed I wrote that:

The FTC will benefit enormously from Josh’s expertise and his error cost approach to antitrust and consumer protection law will be a tremendous asset to the Commission — particularly as it delves further into the regulation of data and privacy. His work is rigorous, empirically grounded, and ever-mindful of the complexities of both business and regulation…. The Commissioners and staff at the FTC will surely… profit from his time there.

Whether others at the Commission have really learned from Josh is an open question, but there’s no doubt that Josh offered an enormous amount from which they could learn. As Tim Muris said, Josh “did not disappoint, having one of the most important and memorable tenures of any non-Chair” at the agency.

Within a month of his arrival at the Commission, in fact, Josh “laid down the cost-benefit-analysis gauntlet” in a little-noticed concurring statement regarding a proposed amendment to the Hart-Scott-Rodino Rules. The technical details of the proposed rule don’t matter for these purposes, but, as Josh noted in his statement, the situation intended to be avoided by the rule had never arisen:

The proposed rulemaking appears to be a solution in search of a problem. The Federal Register notice states that the proposed rules are necessary to prevent the FTC and DOJ from “expend[ing] scarce resources on hypothetical transactions.” Yet, I have not to date been presented with evidence that any of the over 68,000 transactions notified under the HSR rules have required Commission resources to be allocated to a truly hypothetical transaction.

What Josh asked for in his statement was not that the rule be scrapped, but simply that, before adopting the rule, the FTC weigh its costs and benefits.

As I noted at the time:

[I]t is the Commission’s responsibility to ensure that the rules it enacts will actually be beneficial (it is a consumer protection agency, after all). The staff, presumably, did a perfectly fine job writing the rule they were asked to write. Josh’s point is simply that it isn’t clear the rule should be adopted because it isn’t clear that the benefits of doing so would outweigh the costs.

As essentially everyone who has contributed to this symposium has noted, Josh was singularly focused on the rigorous application of the deceptively simple concept that the FTC should ensure that the benefits of any rule or enforcement action it adopts outweigh the costs. The rest, as they say, is commentary.
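
That deceptively simple concept can be stated a bit more formally.  In stylized decision-theoretic terms – the notation here is mine, not Josh’s, and is offered only as a sketch – the error-cost framework asks the agency to choose the rule or enforcement posture \(r\) that minimizes the sum of expected error costs and administrative costs:

\[
\min_{r}\;\; \Pr(\text{false positive} \mid r)\cdot C_{FP} \;+\; \Pr(\text{false negative} \mid r)\cdot C_{FN} \;+\; C_{A}(r)
\]

where false positives condemn or deter procompetitive conduct, false negatives permit anticompetitive conduct, and \(C_{A}(r)\) captures the administrative and compliance costs of the rule itself.  On this view, a rule can fail the test even though it would catch some genuinely harmful conduct, if its error and administration costs are large enough.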

For Josh, this basic principle should permeate every aspect of the agency and the way it thinks about everything it does. Only an entirely new mindset can ensure that outcomes, from the most significant enforcement actions to the most trivial rule amendments, actually serve consumers.

While the FTC has a strong tradition of incorporating economic analysis into its antitrust decision-making, its record in using economics in other areas is decidedly mixed, as Berin points out. And even in competition policy, where the Commission frequently uses economics, it is not clear that it entirely understands economics. The approach that others have lauded Josh for is powerful, but it is also subtle.

Inherent limitations on anyone's knowledge about the future of technology, business, and social norms counsel skepticism as regulators attempt to predict whether any given business conduct will, on net, improve or harm consumer welfare. In fact, a host of factors suggests that even the best-intentioned regulators tend toward overconfidence and the erroneous condemnation of novel conduct that benefits consumers in ways that are difficult for regulators to understand. Coase's famous admonition in a 1972 paper has been quoted here before (frequently), but bears quoting again:

If an economist finds something – a business practice of one sort or another – that he does not understand, he looks for a monopoly explanation. And as in this field we are very ignorant, the number of ununderstandable practices tends to be very large, and the reliance on a monopoly explanation, frequent.

Simply “knowing” economics, and knowing that it is important to antitrust enforcement, aren’t enough. Reliance on economic formulae and theoretical models alone — to say nothing of “evidence-based” analysis that doesn’t or can’t differentiate between probative and prejudicial facts — doesn’t resolve the key limitations on regulatory decisionmaking that threaten consumer welfare, particularly when it comes to the modern, innovative economy.

As Josh and I have written:

[O]ur theoretical knowledge cannot yet confidently predict the direction of the impact of additional product market competition on innovation, much less the magnitude. Additionally, the multi-dimensional nature of competition implies that the magnitude of these impacts will be important as innovation and other forms of competition will frequently be inversely correlated as they relate to consumer welfare. Thus, weighing the magnitudes of opposing effects will be essential to most policy decisions relating to innovation. Again, at this stage, economic theory does not provide a reliable basis for predicting the conditions under which welfare gains associated with greater product market competition resulting from some regulatory intervention will outweigh losses associated with reduced innovation.

* * *

In sum, the theoretical and empirical literature reveals an undeniably complex interaction between product market competition, patent rules, innovation, and consumer welfare. While these complexities are well understood, in our view, their implications for the debate about the appropriate scale and form of regulation of innovation are not.

Along the most important dimensions, while our knowledge has expanded since 1972, the problem has not disappeared — and it may only have magnified. As Tim Muris noted in 2005,

[A] visitor from Mars who reads only the mathematical IO literature could mistakenly conclude that the U.S. economy is rife with monopoly power…. [Meanwhile, Section 2’s] history has mostly been one of mistaken enforcement.

It may not sound like much, but what is needed, what Josh brought to the agency, and what turns out to be absolutely essential to getting it right, is unflagging awareness of and attention to the institutional, political and microeconomic relationships that shape regulatory institutions and regulatory outcomes.

Regulators must do their best to constantly grapple with uncertainty, problems of operationalizing useful theory, and, perhaps most important, the social losses associated with error costs. It is not (just) technicians that the FTC needs; it’s regulators imbued with the “Economic Way of Thinking.” In short, what is needed, and what Josh brought to the Commission, is humility — the belief that, as Coase also wrote, sometimes the best answer is to “do nothing at all.”

The technocratic model of regulation is inconsistent with the regulatory humility required in the face of fast-changing, unexpected — and immeasurably valuable — technological advance. As Virginia Postrel warns in The Future and Its Enemies:

Technocrats are “for the future,” but only if someone is in charge of making it turn out according to plan. They greet every new idea with a “yes, but,” followed by legislation, regulation, and litigation…. By design, technocrats pick winners, establish standards, and impose a single set of values on the future.

For Josh, the first JD/Econ PhD appointed to the FTC,

economics provides a framework to organize the way I think about issues beyond analyzing the competitive effects in a particular case, including, for example, rulemaking, the various policy issues facing the Commission, and how I weigh evidence relative to the burdens of proof and production. Almost all the decisions I make as a Commissioner are made through the lens of economics and marginal analysis because that is the way I have been taught to think.

A representative example will serve to illuminate the distinction between merely using economics and evidence and understanding them — and their limitations.

In his Nielsen/Arbitron dissent, Josh wrote:

The Commission thus challenges the proposed transaction based upon what must be acknowledged as a novel theory—that is, that the merger will substantially lessen competition in a market that does not today exist.

[W]e… do not know how the market will evolve, what other potential competitors might exist, and whether and to what extent these competitors might impose competitive constraints upon the parties.

Josh’s straightforward statement of the basis for restraint stands in marked contrast to the majority’s decision to impose antitrust-based limits on economic activity that has not yet even been contemplated. That approach is directly at odds with a sensible, evidence-based approach to enforcement, and its economic problems are considerable, as Josh also notes:

[I]t is an exceedingly difficult task to predict the competitive effects of a transaction where there is insufficient evidence to reliably answer the[] basic questions upon which proper merger analysis is based.

When the Commission’s antitrust analysis comes unmoored from such fact-based inquiry, tethered tightly to robust economic theory, there is a more significant risk that non-economic considerations, intuition, and policy preferences influence the outcome of cases.

Compare in this regard Josh’s words about Nielsen with Deborah Feinstein’s defense of the majority from such charges:

The Commission based its decision not on crystal-ball gazing about what might happen, but on evidence from the merging firms about what they were doing and from customers about their expectations of those development plans. From this fact-based analysis, the Commission concluded that each company could be considered a likely future entrant, and that the elimination of the future offering of one would likely result in a lessening of competition.

Instead of requiring rigorous economic analysis of the facts, couched in an acute awareness of our necessary ignorance about the future, for Feinstein the FTC fulfilled its obligation in Nielsen by considering the “facts” alone (not economic evidence, mind you, but customer statements and expressions of intent by the parties) and then, at best, casually applying to them the simplistic, outdated structural presumption – the conclusion that increased concentration would lead inexorably to anticompetitive harm. Her implicit claim is that all the Commission needed to know about the future was what the parties thought about what they were doing and what (hardly disinterested) customers thought they were doing. This shouldn't be nearly enough.

Worst of all, Nielsen was “decided” with a consent order. As Josh wrote, strongly reflecting the essential awareness of the broader institutional environment that he brought to the Commission:

[w]here the Commission has endorsed by way of consent a willingness to challenge transactions where it might not be able to meet its burden of proving harm to competition, and which therefore at best are competitively innocuous, the Commission’s actions may alter private parties’ behavior in a manner that does not enhance consumer welfare.

Obviously in this regard his successful effort to get the Commission to adopt a UMC enforcement policy statement is a most welcome development.

In short, Josh is to be applauded not because he brought economics to the Commission, but because he brought the economic way of thinking. Such a thing is entirely too rare in the modern administrative state. Josh’s tenure at the FTC was relatively short, but he used every moment of it to assiduously advance his singular, and essential, mission. And, to paraphrase the last line of the movie The Right Stuff (it helps to have the rousing film score playing in the background as you read this): “for a brief moment, [Josh Wright] became the greatest [regulator] anyone had ever seen.”

I would like to extend my thanks to everyone who participated in this symposium. The contributions here will stand as a fitting and lasting tribute to Josh and his legacy at the Commission. And, of course, I’d also like to thank Josh for a tenure at the FTC very much worth honoring.

Imagine


by Michael Baye, Bert Elwert Professor of Business at the Kelley School of Business, Indiana University, and former Director of the Bureau of Economics, FTC

Imagine a world where competition and consumer protection authorities base their final decisions on scientific evidence of potential harm. Imagine a world where well-intentioned policymakers do not use “possibility theorems” to rationalize decisions that are, in reality, based on idiosyncratic biases or beliefs. Imagine a world where “harm” is measured using a scientific yardstick that accounts for the economic benefits and costs of attempting to remedy potentially harmful business practices.

Many economists—conservatives and liberals alike—have the luxury of pondering this world in the safe confines of ivory towers; they publish in journals read by a like-minded audience that also relies on the scientific method.

Congratulations and thanks, Josh, for superbly articulating these messages in the more relevant—but more hostile—world outside of the ivory tower.

To those of you who might disagree with a few (or all) of Josh’s decisions, I challenge you to examine honestly whether your views on a particular matter are based on objective (scientific) evidence, or on your personal, subjective beliefs. Evidence-based policymaking can be discomforting: It sometimes induces those with philosophical biases in favor of intervention to make laissez-faire decisions, and it sometimes induces people with a bias for non-intervention to make decisions to intervene.