Last week, FCC General Counsel Jonathan Sallet pulled back the curtain on the FCC staff’s analysis behind its decision to block Comcast’s acquisition of Time Warner Cable. As the FCC staff sets out on its reported Rainbow Tour to reassure regulated companies that it’s not “hostile to the industries it regulates,” Sallet’s remarks suggest it will have an uphill climb. Unfortunately, the staff’s analysis appears to have been unduly speculative, disconnected from critical market realities, and decidedly biased — not characteristics in a regulator that tend to offer much reassurance.

Merger analysis is inherently speculative, but, as courts have repeatedly had occasion to find, the FCC has a penchant for stretching speculation beyond the breaking point, adopting theories of harm that are vaguely possible, even if unlikely and inconsistent with past practice, and poorly supported by empirical evidence. The FCC’s approach here seems to fit this description.

The FCC’s fundamental theory of anticompetitive harm

To begin with, as he must, Sallet acknowledged that there was no direct competitive overlap in the areas served by Comcast and Time Warner Cable, and no consumer would have seen the number of providers available to her changed by the deal.

But the FCC staff viewed this critical fact as “not outcome determinative.” Instead, Sallet explained that the staff’s opposition was based primarily on a concern that the deal might enable Comcast to harm “nascent” OVD competitors in order to protect its video (MVPD) business:

Simply put, the core concern came down to whether the merged firm would have an increased incentive and ability to safeguard its integrated Pay TV business model and video revenues by limiting the ability of OVDs to compete effectively, especially through the use of new business models.

The justification for the concern boiled down to an assumption that the addition of TWC’s subscriber base would be sufficient to render an otherwise too-costly anticompetitive campaign against OVDs worthwhile:

Without the merger, a company taking action against OVDs for the benefit of the Pay TV system as a whole would incur costs but gain additional sales – or protect existing sales — only within its footprint. But the combined entity, having a larger footprint, would internalize more of the external “benefits” provided to other industry members.

The FCC theorized that, by acquiring a larger footprint, Comcast would gain enough bargaining power and leverage, as well as the means to profit from an exclusionary strategy, leading it to employ a range of harmful tactics — such as impairing the quality/speed of OVD streams, imposing data caps, limiting OVD access to TV-connected devices, imposing higher interconnection fees, and saddling OVDs with higher programming costs. It’s difficult to see how such conduct would be permitted under the FCC’s Open Internet Order/Title II regime, but, nevertheless, the staff apparently believed that Comcast would possess a powerful “toolkit” with which to harm OVDs post-transaction.

Comcast’s share of the MVPD market wouldn’t have changed enough to justify the FCC’s purported fears

First, the analysis turned on what Comcast could and would do if it were larger. But Comcast was already the largest ISP and MVPD (now second largest MVPD, post AT&T/DIRECTV) in the nation, and presumably it has approximately the same incentives and ability to disadvantage OVDs today.

In fact, there’s no reason to believe that the growth of Comcast’s MVPD business would cause any material change in its incentives with respect to OVDs. Whatever nefarious incentives the merger allegedly would have created by increasing Comcast’s share of the MVPD market (which is where the purported benefits in the FCC staff’s anticompetitive story would be realized), those incentives would be proportional to the size of the increase in Comcast’s national MVPD market share — which, here, would be about eight percentage points: from 22% to under 30% of the national market.

It’s difficult to believe that Comcast would gain the wherewithal to engage in this costly strategy by adding such a relatively small fraction of the MVPD market (which would still leave other MVPDs serving fully 70% of the market to reap the purported benefits instead of Comcast), but wouldn’t have it at its current size – and there’s no evidence that it has ever employed such strategies with its current market share.

It bears highlighting that the D.C. Circuit has already twice rejected FCC efforts to impose a 30% market cap on MVPDs, based on the Commission’s inability to demonstrate that a greater-than-30% share would create competitive problems, especially given the highly dynamic nature of the MVPD market. In vacating the FCC’s most recent effort to do so in 2009, the D.C. Circuit was resolute in its condemnation of the agency, noting:

In sum, the Commission has failed to demonstrate that allowing a cable operator to serve more than 30% of all [MVPD] subscribers would threaten to reduce either competition or diversity in programming.

The extent of competition and the amount of available programming (including original programming distributed by OVDs themselves) has increased substantially since 2009; this makes the FCC’s competitive claims even less sustainable today.

It’s damning enough to the FCC’s case that there is no marketplace evidence of such conduct or its anticompetitive effects in today’s market. But it’s truly impossible to square the FCC’s assertions about Comcast’s anticompetitive incentives with the fact that, over the past decade, Comcast has made massive investments in broadband, steadily increased broadband speeds, and freely licensed its programming, among other things that have served to enhance OVDs’ long-term viability and growth. Chalk it up to the threat of regulatory intervention or corporate incompetence if you can’t believe that competition alone could be responsible for this largesse, but, whatever the reason, the FCC staff’s fears appear completely unfounded in a marketplace not significantly different than the landscape that would have existed post-merger.

OVDs aren’t vulnerable, and don’t need the FCC’s “help”

After describing the “new entrants” in the market — such unfamiliar and powerless players as Dish, Sony, HBO, and CBS — Sallet claimed that the staff was principally animated by the understanding that

Entrants are particularly vulnerable when competition is nascent. Thus, staff was particularly concerned that this transaction could damage competition in the video distribution industry.

Sallet’s description of OVDs makes them sound like struggling entrepreneurs working in garages. But, in fact, OVDs have radically reshaped the media business and wield enormous clout in the marketplace.

Netflix, for example, describes itself as “the world’s leading Internet television network with over 65 million members in over 50 countries.” New services like Sony Vue and Sling TV are affiliated with giant, well-established media conglomerates. And whatever new offerings emerge from the FCC-approved AT&T/DIRECTV merger will be as well-positioned as any in the market.

In fact, we already know that the concerns of the FCC are off-base because they are of a piece with the misguided assumptions that underlie the Chairman’s recent NPRM to rewrite the MVPD rules to “protect” just these sorts of companies. But the OVDs themselves — the ones with real money and their competitive futures on the line — don’t see the world the way the FCC does, and they’ve resolutely rejected the Chairman’s proposal. Notably, the proposed rules would “protect” these services from exactly the sort of conduct that Sallet claims would have been a consequence of the Comcast-TWC merger.

If they don’t want or need broad protection from such “harms” in the form of revised industry-wide rules, there is surely no justification for the FCC to throttle a merger based on speculation that the same conduct could conceivably arise in the future.

The realities of the broadband market post-merger wouldn’t have supported the FCC’s argument, either

While a larger Comcast might be in a position to realize more of the benefits from the exclusionary strategy Sallet described, it would also incur more of the costs — likely in direct proportion to the increased size of its subscriber base.

Think of it this way: To the extent that an MVPD can possibly constrain an OVD’s scope of distribution for programming, doing so also necessarily makes the MVPD’s own broadband offering less attractive, forcing it to incur a cost that would increase in proportion to the size of the distributor’s broadband market. In this case, as noted, Comcast would have gained MVPD subscribers — but it would have also gained broadband subscribers. In a world where cable is consistently losing video subscribers (as Sallet acknowledged), and where broadband offers higher margins and faster growth, it makes no economic sense that Comcast would have valued the trade-off the way the FCC claims it would have.

Moreover, in light of the existing conditions imposed on Comcast under the Comcast/NBCU merger order from 2011 (which last for a few more years) and the restrictions adopted in the Open Internet Order, Comcast’s ability to engage in the sort of exclusionary conduct described by Sallet would be severely limited, if not non-existent. Nor, of course, is there any guarantee that former or would-be OVD subscribers would choose to subscribe to, or pay more for, any MVPD in lieu of OVDs. Meanwhile, many of the relevant substitutes in the MVPD market (like AT&T and Verizon FiOS) also offer broadband services – thereby increasing the costs that would be incurred in the broadband market even more, as many subscribers would shift not only their MVPD, but also their broadband service, in response to Comcast degrading OVDs.

And speaking of the Open Internet Order — wasn’t that supposed to prevent ISPs like Comcast from acting on their alleged incentives to impede the quality of, or access to, edge providers like OVDs? Why is merger enforcement necessary to accomplish the same thing once Title II and the rest of the Open Internet Order are in place? And if the argument is that the Open Internet Order might be defeated, aside from the completely speculative nature of such a claim, why wouldn’t a merger condition that imposed the same constraints on Comcast – as was done in the Comcast/NBCU merger order by imposing the former net neutrality rules on Comcast – be perfectly sufficient?

While the FCC staff analysis accepted as true (again, contrary to current marketplace evidence) that a bigger Comcast would have more incentive to harm OVDs post-merger, it rejected arguments that there could be countervailing benefits to OVDs and others from this same increase in scale. Thus, things like incremental broadband investments and speed increases, a larger Wi-Fi network, and greater business services market competition – things that Comcast is already doing and would have done on a greater and more-accelerated scale in the acquired territories post-transaction – were deemed insufficient to outweigh the expected costs of the staff’s entirely speculative anticompetitive theory.

In reality, however, not only OVDs, but consumers – and especially TWC subscribers – would have benefited from the merger through access to Comcast’s faster broadband speeds, its new investments, and its superior video offerings on the X1 platform, among other things. Many low-income families would have benefited from expansion of Comcast’s Internet Essentials program, and many businesses would have benefited from the addition of a more effective competitor to the incumbent providers that currently dominate the business services market. Yet these and other verifiable benefits were given short shrift in the agency’s analysis because they “were viewed by staff as incapable of outweighing the potential harms.”

The assumptions underlying the FCC staff’s analysis of the broadband market are arbitrary and unsupportable

Sallet’s claim that the combined firm would have 60% of all high-speed broadband subscribers in the U.S. necessarily assumes a national broadband market measured at 25 Mbps or higher, which is a red herring.

The FCC has not explained why 25 Mbps is a meaningful benchmark for antitrust analysis. The FCC itself endorsed a 10 Mbps baseline for its Connect America Fund last December, noting that over 70% of current broadband users subscribe to speeds less than 25 Mbps, even in areas where faster speeds are available. And streaming online video, the most oft-cited reason for needing high bandwidth, doesn’t require 25 Mbps: Netflix says that 5 Mbps is the most that’s required for an HD stream, and the same goes for Amazon (3.5 Mbps) and Hulu (1.5 Mbps).

What’s more, by choosing an arbitrary, faster speed to define the scope of the broadband market (in an effort to assert the non-competitiveness of the market, and thereby justify its broadband regulations), the agency has – without proper analysis or grounding, in my view – unjustifiably shrunk the size of the relevant market. But, as it happens, doing so also shrinks the size of the increase in “national market share” that the merger would have brought about.

Recall that the staff’s theory was premised on the idea that the merger would give Comcast control over enough of the broadband market that it could unilaterally impose costs on OVDs sufficient to impair their ability to reach or sustain minimum viable scale. But Comcast would have added only one percent of this invented “market” as a result of the merger. It strains credulity to assert that there could be any transaction-specific harm from an increase in market share equivalent to a rounding error.

In any case, basing its rejection of the merger on a manufactured 25 Mbps relevant market creates perverse incentives and will likely do far more to harm OVDs than realization of even the staff’s worst fears about the merger ever could have.

The FCC says it wants higher speeds, and it wants firms to invest in faster broadband. But here Comcast did just that, and then was punished for it. Rather than acknowledging Comcast’s ongoing broadband investments as strong indication that the FCC staff’s analysis might be on the wrong track, the FCC leadership simply sidestepped that inconvenient truth by redefining the market.

The lesson is that if you make your product too good, you’ll end up with an impermissibly high share of the market you create and be punished for it. This can’t possibly promote the public interest.

Furthermore, the staff’s analysis of competitive effects even in this ersatz market isn’t likely supportable. As noted, most subscribers access OVDs on connections that deliver content at speeds well below the invented 25 Mbps benchmark, and they pay the same prices for OVD subscriptions as subscribers who receive their content at 25 Mbps. Confronted with the choice to consume content at 25 Mbps or 10 Mbps (or less), the majority of consumers voluntarily opt for slower speeds — and they purchase service from Netflix and other OVDs in droves, nonetheless.

The upshot? Contrary to the implications on which the staff’s analysis rests, if Comcast were to somehow “degrade” OVD content on the 25 Mbps networks so that it was delivered with characteristics of video content delivered over a 10-Mbps network, real-world, observed consumer preferences suggest it wouldn’t harm OVDs’ access to consumers at all. This is especially true given that OVDs often have a global focus and reach (again, Netflix has 65 million subscribers in over 50 countries), making any claims that Comcast could successfully foreclose them from the relevant market even more suspect.

At the same time, while the staff apparently viewed the broadband alternatives as “limited,” the reality is that Comcast, like other broadband providers, is surrounded by capable competitors, including, among others, AT&T, Verizon, CenturyLink, Google Fiber, many advanced VDSL and fiber-based Internet service providers, and high-speed mobile wireless providers. The FCC understated the complex impact of this robust, dynamic, and ever-increasing competition, and its analysis entirely ignored rapidly growing mobile wireless broadband competition.

Finally, as noted, Sallet claimed that the staff determined that merger conditions would be insufficient to remedy its concerns, without any further explanation. Yet the Commission identified similar concerns about OVDs in both the Comcast/NBCUniversal and AT&T/DIRECTV transactions, and adopted remedies to address those concerns. We know the agency is capable of drafting behavioral conditions, and we know they have teeth, as demonstrated by prior FCC enforcement actions. It’s hard to understand why similar, adequate conditions could not have been fashioned for this transaction.

In the end, while I appreciate Sallet’s attempt to explain the FCC’s decision to reject the Comcast/TWC merger, based on the foregoing I’m not sure that Comcast could have made any argument or showing that would have dissuaded the FCC from challenging the merger. Comcast presented a strong economic analysis answering the staff’s concerns discussed above, all to no avail. It’s difficult to escape the conclusion that this was a politically driven result, and not one rigorously based on the facts or marketplace reality.

The costs imposed by government regulation are huge and growing.  The Heritage Foundation produces detailed annual reports cataloguing the rising burden of the American regulatory state, and the Competitive Enterprise Institute recently estimated that regulations impose a $1.88 trillion annual tax on the U.S. economy.  Yet the need to rein in the regulatory behemoth has attracted relatively little attention in the early stages of the 2016 U.S. presidential campaign.  That may be changing, however.

On September 23, former Florida Governor Jeb Bush authored a short Wall Street Journal op-ed that set forth his ideas for curbing the “regulation tax.”  Governor Bush’s op-ed focuses on a host of particulars – including, for example, repealing specific onerous Environmental Protection Agency rules, repealing significant parts of the Dodd-Frank Act, repealing and replacing Obamacare, putting federal agencies on a “regulatory budget” (requiring a dollar of regulatory savings for each dollar of regulatory costs proposed), curbing frivolous regulatory litigation, streamlining regulatory approval processes, and placing greater emphasis on private and state-driven solutions.  Logical extensions of these initiatives, such as supplemental executive orders putting more “teeth” into routine regulatory review and support for the REINS Act (which would require congressional approval of “major” regulations), readily suggest themselves.

Regulatory reform initiatives have a long history.  A particularly notable example is the Reagan Administration’s 1981 efforts to curb excessive regulation through the Task Force on Regulatory Relief, which was linked to systematic White House review (through the Office of Management and Budget) of significant proposed regulations – a process that continues to this day (albeit imperfectly, to say the least).  It is to be hoped that all other presidential candidates will also think about and prepare their own regulatory reform proposals.  This should not be deemed a partisan issue.  President Carter, after all, promoted regulatory reform and ushered in welfare-enhancing transportation deregulation, and President Clinton touted deregulation accomplished during the first term of his presidency.

In short, done properly, reducing regulatory burdens should “supercharge” U.S. economic growth and enhance efficiency, without harming consumers or the environment – indeed, consumers and the environment should benefit long-term from smarter, streamlined, cost-beneficial regulation.

The Ninth Circuit made waves recently with its decision in Lenz v. Universal Music Corp., in which it decided that a plaintiff in a copyright infringement case must first take potential fair use considerations into account before filing a takedown notice under the DMCA. Lenz, represented by the EFF, claimed that Universal had not formed a good faith belief that an infringement had occurred as required by § 512(c)(3)(A)(v). Consequently, Lenz sought damages under § 512(f), alleging that Universal made material misrepresentations in issuing a takedown notice without first considering a fair use defense.

In reaching its holding, the Ninth Circuit decided that fair use should not be considered an affirmative defense–which is to say that it is not properly considered after an allegation, but must be considered when determining whether a prima facie claim exists. The court starts from the text of the Copyright Act itself. According to 17 U.S.C. § 107:

Notwithstanding the provisions of sections 106 and 106A, the fair use of a copyrighted work … is not an infringement of copyright.

In support of its contention, the Ninth Circuit goes on to cite a case in the Eleventh Circuit as well as legislative material suggesting that Congress intended that fair use no longer be considered as an affirmative defense. Thus, in the Ninth Circuit’s view, fair use at best qualifies as a sort of quasi-defense, and most likely constitutes an element of an infringement claim. After all, if fair use is literally non-infringing, then establishing infringement requires ruling out fair use, as well.

Or so says the Ninth Circuit. But it takes little more than a Google search — let alone the legal research one should expect of federal judges and their clerks — to realize that the court is woefully, and utterly, incorrect.

Is Fair Use an Affirmative Defense?

The Supreme Court has been perfectly clear that fair use is in fact an affirmative defense. In Campbell v. Acuff-Rose, the Supreme Court had occasion to consider the nature of fair use under § 107 in the context of determining whether 2 Live Crew’s parody of Roy Orbison’s “Pretty Woman” was a permissible use. In considering the fourth fair use factor, “the effect of the use upon the potential market for or value of the copyrighted work,” the Court held that “[s]ince fair use is an affirmative defense, its proponent would have difficulty carrying the burden of demonstrating fair use without favorable evidence about relevant markets.”

Further, in reaching this opinion the Court relied on its earlier precedent in Harper & Row, where, in discussing the “purpose of the use” prong of § 107, the Court said that “[t]he drafters [of § 107] resisted pressures from special interest groups to create presumptive categories of fair use, but structured the provision as an affirmative defense requiring a case-by-case analysis.”  Not surprisingly, other courts are inclined to follow the Supreme Court. Thus the Eleventh Circuit, the Southern District of New York, and the Central District of California (here and here), to name but a few, all explicitly refer to fair use as an affirmative defense. Oh, and the Ninth Circuit did too, at least until Lenz.

The Ninth Circuit Dissembles

As part of its appeal, Universal relied on the settled notion that fair use is an affirmative defense in building its case. Perhaps because this understanding of fair use is so well established, Universal did not cite extensive authority explaining why this is so. And so (apparently unable to perform its own legal research), the Ninth Circuit dismissed § 107 as an affirmative defense out of hand, claiming that

Universal’s sole textual argument is that fair use is not “authorized by the law” because it is an affirmative defense that excuses otherwise infringing conduct … Supreme Court precedent squarely supports the conclusion that fair use does not fall into the latter camp: “[A]nyone who . . . makes a fair use of the work is not an infringer of the copyright with respect to such use.” Sony Corp. of Am. v. Universal City Studios, Inc., 464 U.S. 417, 433 (1984).

It bears noting that the Court in Sony Corp. did not discuss whether or not fair use is an affirmative defense, whereas the Acuff-Rose (decided 10 years after Sony Corp.) and Harper & Row decisions do.

To shore up its argument, the Ninth Circuit then goes on to cite the Eleventh Circuit for the notion that the 1976 Act fundamentally changed the nature of fair use, moving it away from its affirmative defense roots. Quoting Bateman v. Mnemonics, Inc., the court claims that

Although the traditional approach is to view “fair use” as an affirmative defense, . . . it is better viewed as a right granted by the Copyright Act of 1976. Originally, as a judicial doctrine without any statutory basis, fair use was an infringement that was excused—this is presumably why it was treated as a defense. As a statutory doctrine, however, fair use is not an infringement. Thus, since the passage of the 1976 Act, fair use should no longer be considered an infringement to be excused; instead, it is logical to view fair use as a right. Regardless of how fair use is viewed, it is clear that the burden of proving fair use is always on the putative infringer.

But wait — didn’t I list the Eleventh Circuit as one of the (many) courts that have held fair use to be an affirmative defense? Why yes I did. It turns out that, as Devlin Hartline pointed out last week, the Ninth Circuit actually ripped the Eleventh Circuit text completely out of context. The full Bateman quote (from a footnote, it should be noted) is as follows:

Fair use traditionally has been treated as an affirmative defense to a charge of copyright infringement …. In viewing fair use as an excused infringement, the court must, in addressing this mixed question of law and fact, determine whether the use made of the original components of a copyrighted work is “fair” under 17 U.S.C. § 107 … Although the traditional approach is to view “fair use” as an affirmative defense, this writer, speaking only for himself, is of the opinion that it is better viewed as a right granted by the Copyright Act of 1976. Originally, as a judicial doctrine without any statutory basis, fair use was an infringement that was excused—this is presumably why it was treated as a defense. As a statutory doctrine, however, fair use is not an infringement. Thus, since the passage of the 1976 Act, fair use should no longer be considered an infringement to be excused; instead, it is logical to view fair use as a right. Regardless of how fair use is viewed, it is clear that the burden of proving fair use is always on the putative infringer. (internal citations omitted, but emphasis added)

Better yet, in a subsequent opinion the Eleventh Circuit further clarified the position that the view of fair use as an affirmative defense is binding Supreme Court precedent, notwithstanding any judge’s personal preferences to the contrary.

But that’s not the worst of it. Not only did the court shamelessly misquote the Eleventh Circuit in stretching to find a justification for its preferred position, but it also ignored its own precedent to the contrary. In Dr. Seuss Enterprises, L.P. v. Penguin Books USA, Inc., the Ninth Circuit held that

Since fair use is an affirmative defense, [the Defendant-Appellants] must bring forward favorable evidence about relevant markets. Given their failure to submit evidence on this point … we conclude that “it is impossible to deal with [fair use] except by recognizing that a silent record on an important factor bearing on fair use disentitle[s] the proponent of the defense[.]

Further, even if the Lenz court is correct that § 107 “unambiguously contemplates fair use as a use authorized by the law” — despite Supreme Court precedent — the authority the Ninth Circuit attempts to rely upon would still require defendants to raise a fair use defense after a prima facie claim was made, as “the burden of proving fair use is always on the putative infringer.”  

It Also Violates a Common Sense Reading of the DMCA

As with all other affirmative defenses, a plaintiff must first make out a prima facie case before the defense can be raised. So how do we make sense of the language in § 107 that determines fair use to not be infringement? In essence, it appears to be a case of inartful drafting.  Particularly in light of the stated aims of the DMCA — a law that was enacted after the Supreme Court established that fair use was an affirmative defense — the nature of fair use as an affirmative defense that can only be properly raised by an accused infringer is as close to black letter law as it gets.

The DMCA was enacted to strike a balance between the interests of rightsholders in protecting their property, and the interests of society in having an efficient mechanism for distributing content. Currently, rightsholders send out tens of millions of takedown notices every year to deal with the flood of piracy and other infringing uses. If rightsholders were required to consider fair use in advance of each of these, the system would be utterly unworkable — for instance, in Google’s search engine alone, over 54 million removal requests were made in just the month of August 2015 owing to potential copyright violations. While the evisceration of the DMCA is, of course, exactly what the plaintiffs (or more accurately, EFF, which represented the plaintiffs) in Lenz wanted, it’s not remotely what the hard-won compromise of the statute contemplates.

And the reason it would be unworkable is not just because of the volume of the complaints, but because fair use is such an amorphous concept that ultimately requires adjudication.

Not only are there four factors to consider in a fair use analysis, but there are no bright line rules to guide the application of the factors. The open-ended nature of the defense essentially leaves it up to a defendant to explain just why his situation should not constitute infringement. Until a judge or a jury says otherwise, how is one to know whether a particular course of conduct qualifies for a fair use defense?

The Lenz court even acknowledges as much when it says

If, however, a copyright holder forms a subjective good faith belief the allegedly infringing material does not constitute fair use, we are in no position to dispute the copyright holder’s belief even if we would have reached the opposite conclusion. (emphasis added)

Thus, only the slightest of fig leaves is necessary to satisfy the Lenz court’s new requirement that fair use be considered before issuing a takedown notice.

What’s more, this statement from the court also demonstrates the near worthlessness of reading a prima facie fair use requirement into the takedown requirements. Short of a litigant explicitly disclaiming any efforts to consider fair use, the standard could be met with a bare assertion. It does, of course, remain an open question whether the computer algorithms the rightsholders employ in scanning for infringing content are actually capable of making fair use determinations — but perhaps throwing a monkey wrench — any monkey wrench — into the rightsholders’ automated notice-and-takedown systems was all the court was really after. I think we can at least be sure that that was EFF’s aim, anyway, as they apparently think that § 512 tends to be a tool of censorship in the hands of rightsholders.

The structure of the takedown and put-back provisions of the DMCA also cuts against the Lenz court’s view. The put-back requirements of Section 512(g) suggest that affirmative defenses and other justifications for accused infringement would be brought up after a takedown notice was submitted. What would be the purpose of a put-back response, if not to offer accused infringers justifications and defenses to an allegation of infringement? Along with excuses such as having a license, or a work’s copyright having expired, an alleged infringer can raise the fair use grounds under which he believed he was entitled to use the work in question.

In short, to require a rightsholder to analyze fair use in advance of a takedown request effectively requires her to read the mind of an infringer and figure out what excuse that party plans to raise as part of her defense. This surely can’t have been what Congress intended with the takedown provisions of the DMCA — enacted as they were years after the Supreme Court had created the widely recognized rule that fair use is an affirmative defense.

Well, widely recognized, that is, except in the Ninth Circuit. This month, anyway.

Update: I received some feedback on this piece which pointed out an assumption I was making with respect to the Ninth Circuit’s opinion, and which deserves a clarifying note. Essentially, the Lenz court splits the concept of affirmative defenses into two categories: (1) an affirmative defense that is merely a label owing to the procedural posture of a case, and (2) an affirmative defense as traditionally understood, which always puts the burden of production on a defendant.  By characterizing affirmative defenses in this way, the Lenz court gets to have its cake and eat it too:  when an actual proceeding is filed, a defendant will procedurally have the burden of production on the issue, but since fair use is at most a quasi-affirmative defense, the court felt it was fair to shift that same burden onto rightsholders when issuing a takedown letter.  So technically the court says that fair use is an affirmative defense (as a labeling matter), but it does not practically treat it as such for the purposes of takedown notices.

My article with Thom Lambert arguing that the Supreme Court – but not the Obama Administration – has substantially adopted an error cost approach to antitrust enforcement, appears in the newly released September 2015 issue of the Journal of Competition Law and Economics.  To whet your appetite, I am providing the abstract:

In his seminal 1984 article, The Limits of Antitrust, Judge Frank Easterbrook proposed that courts and enforcers adopt a simple set of screening rules for application in antitrust cases, in order to minimize error and decision costs and thereby maximize antitrust’s social value. Over time, federal courts in general—and the U.S. Supreme Court in particular, under Chief Justice Roberts—have in substantial part adopted Easterbrook’s “limits of antitrust” approach, thereby helping to reduce costly antitrust uncertainty. Recently, however, antitrust enforcers in the Obama Administration (unlike their predecessors in the Reagan, Bush, and Clinton Administrations) have been less attuned to this approach, and have undertaken initiatives that reduce clarity and predictability in antitrust enforcement. Regardless of the cause of the diverging stances on the limits of antitrust, two things are clear. First, recent enforcement agency policies are severely at odds with the philosophy that informs Supreme Court antitrust jurisprudence. Second, if the agencies do not reverse course, acknowledge antitrust’s limits, and seek to optimize the law in light of those limits, consumers will suffer.

Let us hope that error cost considerations figure more prominently in antitrust enforcement under the next Administration.

A basic premise of antitrust law (also called competition law) is that competition among private entities enhances economic welfare by reducing costs, increasing efficiency, and spurring innovation.  Government competition agencies around the world also compete, by devising different substantive and procedural rules to constrain private conduct in the name of promoting competition.  The welfare implications of that form of inter-jurisdictional competition are, however, ambiguous.  Public choice considerations suggest that self-interested competition agency staff have a strong incentive to promote rules that spawn many investigations and cases, in order to increase their budgets and influence.  Indeed, an agency may measure its success, both domestically and on the world stage, by the size of its budget and staff and the amount of enforcement activity it generates.  That activity, however, imposes costs on the private sector, and may produce restrictive rules that deter vigorous, welfare-enhancing competition.  Furthermore, and relatedly, it may generate substantial costs due to “false positives” – agency challenges to efficient conduct that should not have been brought.  (There are also costs stemming from “false negatives,” the failure to bring welfare-enhancing enforcement actions.  Decision theory indicates an agency should seek to minimize the sum of costs due to false positives and false negatives.)  Private enforcement of competition laws, until recently largely relegated to the United States, brings additional costs and complications, to the extent it yields ill-advised lawsuits.  Thus one should cast a wary eye at any increase in the scope of enforcement authority within a jurisdiction, and not assume automatically that it is desirable on public policy grounds.

These considerations should be brought to bear in assessing the implications of the 2014 European Union (EU) Damages Actions Directive (Directive), which is expected to yield a dramatic increase in private competition law enforcement in the EU.  The Directive establishes standards EU nations must adopt for the bringing of private competition lawsuits, including class actions.  The 28 EU member states have until December 27, 2016 to adopt national laws, regulations, and administrative provisions that implement the Directive.  In short, the Directive (1) makes it easier for private plaintiffs to have access to evidence; (2) gives a final finding of violation by a national competition agency conclusive effect in private actions brought in national courts and prima facie presumptive effect in private actions brought in other EU nations; (3) establishes clear and uniform statutes of limitation; (4) allows both direct and indirect purchasers of overpriced goods to bring private actions; (5) clarifies that private victims are entitled to full compensation for losses suffered, including compensation for actual loss and for loss of profit, plus interest; (6) establishes a rebuttable presumption that cartels cause harm; and (7) provides for joint and several liability (any participant in a competition law infringement will be responsible towards the victims for the whole harm caused by the infringement, but may seek contribution from other infringers).

By facilitating the bringing of lawsuits for cartel overcharges by both direct and indirect purchasers (see here), the Directive should substantially expand private cartel litigation in Europe.  (It may also redirect some cartel-related litigation from United States tribunals, which up to now have been the favorite venues for such actions.  Potential treble damages recoveries still make U.S. antitrust courts an attractive venue, but limitations on indirect purchaser suits and Sherman Act jurisdictional constraints requiring a “direct, substantial and reasonably foreseeable effect” on U.S. commerce create complications for foreign plaintiffs.)  Given that cartels have no redeeming features, this expansion may be expected to increase disincentives for cartel conduct and thereby raise welfare.  (The degree of welfare enhancement depends on the extent to which legitimate activity may be misidentified as cartel conduct, yielding “false positive” damage actions.)

The outlook is less sanguine for non-cartel cases, however.  The Directive applies equally to vertical restraints and abuse of dominance cases, which are far more likely to yield false positives.  In my experience, EU enforcers are more comfortable than U.S. enforcers at pursuing cases based on attenuated theories of exclusionary conduct that have a weak empirical basis.  (The EU’s continued investigation of Google, based on economically inappropriate theories that were rejected by the U.S. FTC, is a prime example.)  In particular, the implementation of the Directive will raise the financial risks for “dominant” or “potentially dominant” firms operating in Europe, who may be further disincentivized from undertaking novel welfare-enhancing business practices that preserve or raise their market share.  This could further harm the vitality of the European business sector.

Hopefully, individual EU states will seek to implement the Directive in a manner that takes into account the serious risk of false positives in non-cartel cases.  The welfare implications of the Directive’s implementation are well worth further competition law scholarship.

On August 24, the Third Circuit issued its much anticipated decision in FTC v. Wyndham Worldwide Corp., holding that the U.S. Federal Trade Commission (FTC) has authority to challenge cybersecurity practices under its statutory “unfairness” authority.  This case brings into focus both legal questions regarding the scope of the FTC’s cybersecurity authority and policy questions regarding the manner in which that authority should be exercised.

1.     Wyndham: An Overview

Rather than “reinventing the wheel,” let me begin by quoting at length from Gus Hurwitz’s excellent summary of the relevant considerations in this case:

In 2012, the FTC sued Wyndham Worldwide, the parent company and franchisor of the Wyndham brand of hotels, arguing that its allegedly lax data security practices allowed hackers to repeatedly break into its franchisees’ computer systems. The FTC argued that these breaches resulted in harm to consumers totaling over $10 million in fraudulent activity. The FTC brought its case under Section 5 of the FTC Act, which declares “unfair and deceptive acts and practices” to be illegal. The FTC’s basic arguments are that it was, first, deceptive for Wyndham – which had a privacy policy indicating how it handled customer data – to assure consumers that the company took industry-standard security measures to protect customer data; and second, independent of any affirmative assurances that customer data was safe, it was unfair for Wyndham to handle customer data in an insecure way.

This case arose in the broader context of the FTC’s efforts to establish a general law of data security. Over the past two decades, the FTC has begun aggressively pursuing data security claims against companies that suffer data breaches. Almost all of these cases have settled out of court, subject to consent agreements with the FTC. The Commission points to these agreements, along with other public documents that it views as guidance, as creating a “common law of data security.” Responding to a request from the Third Circuit for supplemental briefing on this question, the FTC asserted in no uncertain terms its view that “the FTC has acted under its procedures to establish that unreasonable data security practices that harm consumers are indeed unfair within the meaning of Section 5.”

Shortly after the FTC’s case was filed, Wyndham asked the District Court judge to dismiss the case, arguing that the FTC didn’t have authority under Section 5 to take action against a firm that had suffered a criminal theft of its data. The judge denied this motion. But, recognizing the importance and uncertainty of part of the issue – the scope of the FTC’s “unfairness” authority – she allowed Wyndham to immediately appeal that part of her decision. The Third Circuit agreed to hear the appeal, framing the question as whether the FTC has authority to regulate cybersecurity under its Section 5 “unfairness” authority, and, if so, whether the FTC’s application of that authority satisfied Constitutional Due Process requirements. Oral arguments were heard last March, and the court’s opinion was issued on Monday [August 24]. . . .

In its opinion, the Court of Appeals rejects Wyndham’s arguments that its data security practices cannot be unfair. As such, the case will be allowed to proceed to determine whether Wyndham’s security practices were in fact “unfair” under Section 5. . . .

Recall the setting in which this case arose: the FTC has spent more than a decade trying to create a general law of data security. The reason this case was – and still is – important is because Wyndham was challenging the FTC’s general law of data security.

But the court, in the second part of its opinion, accepts Wyndham’s arguments that the FTC has not developed such a law. This is central to the court’s opinion, because different standards apply to interpretations of laws that courts have developed as opposed to those that agencies have developed. The court outlines these standards, explaining that “a higher standard of fair notice applies [in the context of agency rules] than in the typical civil statutory interpretation case because agencies engage in interpretation differently than courts.”

The court goes on to find that Wyndham had sufficient notice of the requirements of Section 5 under the standard that applies to judicial interpretations of statutes. And it expressly notes that, should the district court decide that the higher standard applies – that is, if the court agrees to apply the general law of data security that the FTC has tried to develop in recent years – the court will need to reevaluate whether the FTC’s rules meet Constitutional muster. That review would be subject to the tougher standard applied to agency interpretations of statutes.

Stressing the Third Circuit’s statement that the FTC had failed to explain how it had “informed the public that it needs to look at [FTC] complaints and consent decrees for guidance[,]” Gus concludes that the Third Circuit’s opinion indicates that the FTC “has lost its war to create a general law of data security” based merely on its prior actions.  According to Gus:

The takeaway, it seems, is that the FTC does have the power to take action against bad security practices, but if it wants to do so in a way that shapes industry norms and legal standards – if it wants to develop a general law of data security – a patchwork of consent decrees and informal statements is insufficient to the task. Rather, it must either pursue its cases to a decision on the merits or develop legally binding rules through . . . rulemaking procedures.

2.     Wyndham’s Implications for the Scope of the FTC’s Legal Authority

I highly respect Gus’s trenchant legal and policy analysis of Wyndham.  I believe, however, that it may somewhat understate the strength of the FTC’s legal position going forward.  The Third Circuit also explained (citations omitted):

Wyndham is only entitled to notice of the meaning of the statute and not to the agency’s interpretation of the statute. . . .

[Furthermore,] Wyndham is entitled to a relatively low level of statutory notice for several reasons. Subsection 45(a) [of the FTC Act, which states “unfair acts or practices” are illegal] does not implicate any constitutional rights here. . . .  It is a civil rather than criminal statute. . . .  And statutes regulating economic activity receive a “less strict” test because their “subject matter is often more narrow, and because businesses, which face economic demands to plan behavior carefully, can be expected to consult relevant legislation in advance of action.” . . . .  In this context, the relevant legal rule is not “so vague as to be ‘no rule or standard at all.’” . . . .  Subsection 45(n) [of the FTC Act, as a prerequisite to a finding of unfairness,] asks whether “the act or practice causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition.” While far from precise, this standard informs parties that the relevant inquiry here is a cost-benefit analysis, . . . that considers a number of relevant factors, including the probability and expected size of reasonably unavoidable harms to consumers given a certain level of cybersecurity and the costs to consumers that would arise from investment in stronger cybersecurity. We acknowledge there will be borderline cases where it is unclear if a particular company’s conduct falls below the requisite legal threshold. But under a due process analysis a company is not entitled to such precision as would eliminate all close calls. . . .  Fair notice is satisfied here as long as the company can reasonably foresee that a court could construe its conduct as falling within the meaning of the statute. . . .

[In addition, in 2007, the FTC issued a guidebook on business data security, which] could certainly have helped Wyndham determine in advance that its conduct might not survive the [§ 45(n)] cost-benefit analysis.  Before the [cybersecurity] attacks [on Wyndham’s network], the FTC also filed complaints and entered into consent decrees in administrative cases raising unfairness claims based on inadequate corporate cybersecurity. . . .  That the FTC Commissioners – who must vote on whether to issue a complaint . . . – believe that alleged cybersecurity practices fail the cost-benefit analysis of § 45(n) certainly helps companies with similar practices apprehend the possibility that their cybersecurity could fail as well.

In my view, a fair reading of this Third Circuit language is that:  (1) courts should read key provisions of the FTC Act to encompass cybersecurity practices that the FTC finds are not cost-beneficial; and (2) the FTC’s history of guidance and consent decrees regarding cybersecurity give sufficient notice to companies regarding the nature of cybersecurity plans that the FTC may challenge.   Based on that reading, I conclude that even if a court adopts a very exacting standard for reviewing the FTC’s interpretation of its own statute, the FTC is likely to succeed in future case-specific cybersecurity challenges, assuming that it builds a solid factual record that appears to meet cost-benefit analysis.  Whether other Circuits would agree with the Third Circuit’s analysis is, of course, open to debate (I myself suspect that they probably would).

3.     Sound Policy in Light of Wyndham

Apart from our slightly different “takes” on the legal implications of the Third Circuit’s Wyndham decision, I fully agree with Gus that, as a policy matter, the FTC’s “patchwork of consent decrees and informal statements is insufficient to the task” of building a general law of cybersecurity.  In a 2014 Heritage Foundation Legal Memorandum on the FTC and cybersecurity, I stated:

The FTC’s regulation of business systems by decree threatens to stifle innovation by companies related to data security and to impose costs that will be passed on in part to consumers. Missing from the consent decree calculus is the question of whether the benefits in diminished data security breaches justify those costs—a question that should be at the heart of unfairness analysis. There are no indications that the FTC has even asked this question in fashioning data security consents, let alone made case-specific cost-benefit analyses. This is troubling.

Equally troubling is that the FTC apparently expects businesses to divine from a large number of ad hoc, fact-specific consent decrees with varying provisions what they must do vis-à-vis data security to avoid possible FTC targeting. The uncertainty engendered by sole reliance on complicated consent decrees for guidance (in the absence of formal agency guidelines or litigated court decisions) imposes additional burdens on business planners. . . .

[D]ata security investigations that are not tailored to the size and capacity of the firm may impose competitive disadvantages on smaller rivals in industries in which data protection issues are paramount.

Moreover, it may be in the interest of very large firms to support costlier and more intrusive FTC data security initiatives, knowing that they can better afford the adoption of prohibitively costly data security protocols than their smaller competitors can. This is an example of a “raising rivals’ costs” strategy, which reduces competition by crippling or eliminating rivals.

Given these and related concerns (including the failure of existing FTC reports to give appropriate guidance), I concluded, among other recommendations, that:

[T]he FTC should issue data security guidelines that clarify its enforcement policy regarding data security breaches pursuant to Section 5 of the Federal Trade Commission Act. Such guidelines should be framed solely as limiting principles that tie the FTC’s hands to avoid enforcement excesses. They should studiously avoid dictating to industry the data security principles that firms should adopt. . . .

[T]he FTC should [also] employ a strict cost-benefit analysis before pursuing any new regulatory initiatives, legislative recommendations, or investigations related to other areas of data protection, such as data brokerage or the uses of big data.

In sum, the Third Circuit’s Wyndham decision, while interesting, in no way alters the fact that the FTC’s existing cybersecurity enforcement program is inadequate and unsound.  Whether through guidelines or formal FTC rules (which carry their own costs, including the risk of establishing inflexible standards that ignore future changes in business conditions and technology), the FTC should provide additional guidance to the private sector, rooted in sound cost-benefit analysis.  The FTC should also be ever mindful of the costs it imposes on the economy (including potential burdens on business innovation) whenever it considers bringing enforcement actions in this area.

4.     Conclusion

The debate over the appropriate scope of federal regulation of business cybersecurity programs will continue to rage, as serious data breaches receive public attention and the FTC considers new initiatives.  Let us hope that, as we move forward, federal regulators will fully take into account costs as well as benefits – including, in particular, the risk that federal overregulation will undermine innovation, harm businesses, and weaken the economy.

Recently, the en banc Federal Circuit decided in Suprema, Inc. v. ITC that the International Trade Commission could properly prevent the importation of articles that infringe under an indirect liability theory. The core of the dispute in Suprema was whether § 337 of the Tariff Act’s prohibition against “importing articles that . . . infringe a valid and enforceable United States patent” could be used to prevent the importation of articles that at the moment of importation were not (yet) directly infringing. In essence, is the ITC limited to acting only when there is a direct infringement, or can it also prohibit articles involved in an indirect infringement scheme — in this case under an inducement theory?

TOTM’s own Alden Abbott posted his view of the decision, and there are a couple of points we’d like to respond to, both embodied in this quote:

[The ITC’s Suprema decision] would likely be viewed unfavorably by the Supreme Court, which recently has shown reluctance about routinely invoking Chevron deference … Furthermore, the en banc majority’s willingness to find inducement liability at a time when direct patent infringement has not yet occurred (the point of importation) is very hard to square with the teachings of [Limelight v.] Akamai.

In truth, we are of two minds (four minds?) regarding this view. We’re deeply sympathetic with arguments that the Supreme Court has become — and should become — increasingly skeptical of blind Chevron deference. Recently, we filed a brief on the 2015 Open Internet Order that, in large part, argued that the FCC does not deserve Chevron deference under King v. Burwell, UARG v. EPA and Michigan v. EPA (among other important cases) along a very similar line of reasoning. However, much as we’d like to generally scale back Chevron deference, in this case we happen to think that the Federal Circuit got it right.

Put simply, “infringe” as used in § 337 plainly includes indirect infringement. Section 271 of the Patent Act makes it clear that indirect infringers are guilty of “infringement.” The legislative history of the section, as well as Supreme Court case law, makes it very clear that § 271 was a codification of both direct and indirect liability.

In taxonomic terms, § 271 codifies “infringement” as a top-level category, with “direct infringement” and “indirect infringement” as two distinct subcategories of infringement. The law further subdivides “indirect infringement” into sub-subcategories, “inducement” and “contributory infringement.” But all of these are “infringement.”

For instance, § 271(b) says that “[w]hoever actively induces infringement of a patent shall be liable as an infringer” (emphasis added). Thus, in terms of § 271, to induce infringement is to commit infringement within the meaning of the patent laws. And in § 337, assuming it follows § 271 (which seems appropriate given Congress’ stated purpose to “make it a more effective remedy for the protection of United States intellectual property rights” (emphasis added)), it must follow that when one imports “articles… that infringe” she can be liable for either (or both) § 271(a) direct infringement or § 271(b) inducement.

Frankly, we think this should end the analysis: There is no Chevron question here because the Tariff Act isn’t ambiguous.

But although it seems clear on the face of § 337 that “infringe” must include indirect infringement, at the very least § 337 is ambiguous and cannot clearly mean only “direct infringement.” Moreover, the history of patent law as well as the structure of the ITC’s powers both cut in favor of the ITC enforcing the Tariff Act against indirect infringers. The ITC’s interpretation of any ambiguity in the term “articles… that infringe” is surely reasonable.

The Ambiguity and History of § 337 Allow for Inducement Liability

Assuming for argument’s sake that § 337’s lack of specificity leaves room for debate as to what “infringe” means, there is nothing that militates definitively against indirect liability being included in § 337. The majority handles any ambiguity of this sort well:

[T]he shorthand phrase “articles that infringe” does not unambiguously exclude inducement of post-importation infringement… By using the word “infringe,” § 337 refers to 35 U.S.C. § 271, the statutory provision defining patent infringement. The word “infringe” does not narrow § 337’s scope to any particular subsections of § 271. As reflected in § 271 and the case law from before and after 1952, “infringement” is a term that encompasses both direct and indirect infringement, including infringement by importation that induces direct infringement of a method claim… Section 337 refers not just to infringement, but to “articles that infringe.” That phrase does not narrow the provision to exclude inducement of post-importation infringement. Rather, the phrase introduces textual uncertainty.

Further, the court notes that it has consistently held that inducement is a valid theory of liability on which to base § 337 cases.

And lest you think that this interpretation would give some new, expansive powers to the ITC (perhaps meriting something like a Brown & Williamson exception to Chevron deference), the ITC is still bound by all the defenses and limitations on indirect liability under § 271. Saying it has authority to police indirect infringement doesn’t give it carte blanche, nor any more power than US district courts currently have in adjudicating indirect infringement. In this case, the court went nowhere near the limits of Chevron in giving deference to the ITC’s decision that “articles… that infringe” encompasses the well-established (and statutorily defined) law of indirect infringement.

Inducement Liability Isn’t Precluded by Limelight

Nor does the Supreme Court’s Limelight v. Akamai decision present any problem. Limelight is often quoted for the proposition that there can be no inducement liability without direct infringement. And it does stand for that, as do many other cases; that point is not really in any doubt. But what Alden and others (including the dissenters in Suprema) have cited it for is the proposition that inducement liability cannot attach unless all of the elements of inducement have already been practiced at the time of importation. Limelight does not support that contention, however.

Inducement liability contemplates direct infringement, but the direct infringement need not have been practiced by the same entity liable for inducement, nor at the same time as inducement (see, e.g., Standard Oil v. Nippon). Instead, the direct infringement may come at a later time — and there is no dispute in Suprema regarding whether there was direct infringement (there was, as Suprema notes: “the Commission found that record evidence demonstrated that Mentalix had already directly infringed claim 19 within the United States prior to the initiation of the investigation.”).

Limelight, on the other hand, is about what constitutes the direct infringement element in an inducement case. The sole issue in Limelight was whether this “direct infringement element” required that all of the steps of a method patent be carried out by a single entity or entities acting in concert. In Limelight’s network there was a division of labor, so to speak, between the company and its customers, such that each carried out some of the steps of the method patent at issue. In effect, plaintiffs argued that Limelight should be liable for inducement because it practiced some of the steps of the patented method, with the requisite intent that others would carry out the rest of the steps necessary for direct infringement. But neither Limelight nor its customers separately carried out all of the steps necessary for direct infringement.

The Court held (actually, it simply reiterated established law) that the method patent could never be violated unless a single party (or parties acting in concert) carried out all of the steps of the method necessary for direct infringement. Thus it also held that Limelight could not be liable for inducement because, on the facts of that case, none of its customers could ever be liable for the necessary, underlying direct infringement. Again — what was really at issue in Limelight were the requirements to establish the direct infringement necessary to prove inducement.

On remand, the Federal Circuit reinforced the point that Limelight was really about direct infringement and, by extension, who must be involved in the direct infringement element of an inducement claim. According to the court:

We conclude that the facts Akamai presented at trial constitute substantial evidence from which a jury could find that Limelight directed or controlled its customers’ performance of each remaining method step. As such, substantial evidence supports the jury’s verdict that all steps of the claimed methods were performed by or attributable to Limelight. Therefore, Limelight is liable for direct infringement.

The holding of Limelight is simply inapposite to the facts of Suprema. The crux of Suprema is whether the appropriate mens rea existed to support a claim of inducement — not whether the requisite direct infringement occurred or not.

The Structure of § 337 Supports The ITC’s Ability to Block Inducement

Further, as the majority in Suprema notes, the very idea of inducement liability necessarily contemplates that there will be a temporal separation between the event that gives rise to indirect liability and the future direct infringement (required to prove inducement). As the Suprema court briefly noted “Section 337(a)(1)(B)’s ‘sale . . . after importation’ language confirms that the Commission is permitted to focus on post-importation activity to identify the completion of infringement.”

In particular, each of the enforcement powers in § 337(a) contains a clause that, in addition to a prohibition against, e.g., infringing articles at the time of importation, also prohibits “the sale within the United States after importation by the owner, importer, or consignee, of articles[.]” Thus, Congress explicitly contemplated that the ITC would have the power to act upon articles at various points in time, not limiting it to a power effective only at the moment of importation.

Although the particular power to reach into the domestic market has to do with preventing the importer or its agent from making sales, this doesn’t undermine the larger point here: the ITC’s power to act against infringing articles extends over a range of time. Given that “articles that … infringe” is at the very least ambiguous, and, as per the Federal Circuit (and our own position), this ambiguity allows for indirect infringement, it isn’t a stretch to infer that Congress intended the ITC to have authority under § 337 to ban the import of articles that induce infringement that occurs only after the time of importation.

To interpret § 337 otherwise would be to render it absurd and to create a giant loophole that would enable infringers to easily circumvent the ITC’s enforcement powers.

A Dissent from the Dissent

The dissent also takes a curious approach to § 271 by mixing inducement and contributory infringement, and generally making a confusing mess of the two. For instance, Judge Dyk says:

At the time of importation, the scanners neither directly infringe nor induce infringement… Instead, these staple articles may or may not ultimately be used to infringe… depending upon whether and how they are combined with domestically developed software after importation into the United States (emphasis added).

Whether or not the goods were “staple articles” (and thus potentially capable of substantial noninfringing uses) has nothing to do with whether or not there was inducement. Section 271 makes a very clear delineation between inducement in § 271(b) and contributory infringement in § 271(c). While a staple article of commerce capable of substantial noninfringing uses will not serve as the basis for a contributory infringement claim, it is irrelevant whether or not goods are such “staples” for purposes of establishing inducement.

The boundaries of inducement liability, by contrast, are focused on the intent of the actors: If there is an intent to induce, whether or not there is a substantial noninfringing use, there can be a violation of § 271. Contributory infringement and inducement receive treatment in separate paragraphs of § 271 and are separate doctrines comprising separate elements. This separation is so evident on the face of the law as well as in its history that the Supreme Court read the doctrine into copyright in Grokster — where, despite a potentially large number of non-infringing uses, the intent to induce infringement was sufficient to find liability.

Parting Thoughts on Chevron

We have some final thoughts on the Chevron question, because this is rightly a sore point in administrative law. In this case we think that the analysis should have ended at step one. Although the Federal Circuit began with an assumption of ambiguity, it was being generous to the appellants. Did Congress speak with clear intent? We think so. Section 271 very clearly includes direct infringement as well as indirect infringement within its definition of what constitutes infringement of a patent. When § 337 references “articles … that infringe” it seems fairly obvious that Congress intended the ITC to be able to enforce the prohibitions in § 271 in the context of imported goods.

But even if we advance to step two of the Chevron analysis, the ITC’s construction of § 337 is plainly permissible — and far from expansive. By asserting its authority here the ITC is simply policing the importation of infringing goods (which it clearly has the power to do), and doing so in the case of goods that indirectly infringe (a concept that has been part of US law for a very long time). If “infringe” as used in the Tariff Act is ambiguous, the ITC’s interpretation of it to include both indirect as well as direct infringement seems self-evidently reasonable.

Under the dissent’s (and Alden’s) interpretation of § 337, all that would be required to evade the ITC would be to import only the basic components of an article such that at the moment of importation there was no infringement. Once reassembled within the United States, the ITC’s power to prevent the sale of infringing goods would be nullified. Section 337 would thus be read to simply write out the entire “indirect infringement” subdivision of § 271 — an inference that seems like a much bigger stretch than that “infringement” under § 337 means all infringement under § 271. Congress was more than capable of referring only to “direct infringement” in § 337 if that’s what it intended.

Much as we would like to see Chevron limited, not every agency case is the place to fight this battle. If we are to have agencies, and we are to have a Chevron doctrine, there will be instances of valid deference to agency interpretations — regardless of how broadly or narrowly Chevron is interpreted. The ITC wasn’t making a power grab in Suprema, nor was its reading of the statute unexpected, inconsistent with its past practice, or expansive.

In short, Suprema doesn’t break any new statutory interpretation ground, nor present a novel question of “deep economic or political significance” akin to the question at issue in King v. Burwell. Like it or not, there will be no roots of an anti-Chevron-deference revolution growing out of Suprema.

As the organizer of this retrospective on Josh Wright’s tenure as FTC Commissioner, I have the (self-conferred) honor of closing out the symposium.

When Josh was confirmed I wrote that:

The FTC will benefit enormously from Josh’s expertise and his error cost approach to antitrust and consumer protection law will be a tremendous asset to the Commission — particularly as it delves further into the regulation of data and privacy. His work is rigorous, empirically grounded, and ever-mindful of the complexities of both business and regulation…. The Commissioners and staff at the FTC will surely… profit from his time there.

Whether others at the Commission have really learned from Josh is an open question, but there’s no doubt that Josh offered an enormous amount from which they could learn. As Tim Muris said, Josh “did not disappoint, having one of the most important and memorable tenures of any non-Chair” at the agency.

Within a month of his arrival at the Commission, in fact, Josh “laid down the cost-benefit-analysis gauntlet” in a little-noticed concurring statement regarding a proposed amendment to the Hart-Scott-Rodino Rules. The technical details of the proposed rule don’t matter for these purposes, but, as Josh noted in his statement, the situation intended to be avoided by the rule had never arisen:

The proposed rulemaking appears to be a solution in search of a problem. The Federal Register notice states that the proposed rules are necessary to prevent the FTC and DOJ from “expend[ing] scarce resources on hypothetical transactions.” Yet, I have not to date been presented with evidence that any of the over 68,000 transactions notified under the HSR rules have required Commission resources to be allocated to a truly hypothetical transaction.

What Josh asked for in his statement was not that the rule be scrapped, but simply that, before adopting the rule, the FTC weigh its costs and benefits.

As I noted at the time:

[I]t is the Commission’s responsibility to ensure that the rules it enacts will actually be beneficial (it is a consumer protection agency, after all). The staff, presumably, did a perfectly fine job writing the rule they were asked to write. Josh’s point is simply that it isn’t clear the rule should be adopted because it isn’t clear that the benefits of doing so would outweigh the costs.

As essentially everyone who has contributed to this symposium has noted, Josh was singularly focused on the rigorous application of the deceptively simple concept that the FTC should ensure that the benefits of any rule or enforcement action it adopts outweigh the costs. The rest, as they say, is commentary.

For Josh, this basic principle should permeate every aspect of the agency, and permeate the way it thinks about everything it does. Only an entirely new mindset can ensure that outcomes, from the most significant enforcement actions to the most trivial rule amendments, actually serve consumers.

While the FTC has a strong tradition of incorporating economic analysis in its antitrust decision-making, its record in using economics in other areas is decidedly mixed, as Berin points out. But even in competition policy, the Commission frequently uses economics — but it’s not clear it entirely understands economics. The approach that others have lauded Josh for is powerful, but it’s also subtle.

Inherent limitations on anyone’s knowledge about the future of technology, business, and social norms counsel skepticism as regulators attempt to predict whether any given business conduct will, on net, improve or harm consumer welfare. In fact, a host of factors suggests that even the best-intentioned regulators tend toward overconfidence and the erroneous condemnation of novel conduct that benefits consumers in ways that are difficult for regulators to understand. Coase’s famous admonition in a 1972 paper has been quoted here before (frequently), but bears quoting again:

If an economist finds something – a business practice of one sort or another – that he does not understand, he looks for a monopoly explanation. And as in this field we are very ignorant, the number of ununderstandable practices tends to be very large, and the reliance on a monopoly explanation, frequent.

Simply “knowing” economics, and knowing that it is important to antitrust enforcement, aren’t enough. Reliance on economic formulae and theoretical models alone — to say nothing of “evidence-based” analysis that doesn’t or can’t differentiate between probative and prejudicial facts — doesn’t resolve the key limitations on regulatory decisionmaking that threaten consumer welfare, particularly when it comes to the modern, innovative economy.

As Josh and I have written:

[O]ur theoretical knowledge cannot yet confidently predict the direction of the impact of additional product market competition on innovation, much less the magnitude. Additionally, the multi-dimensional nature of competition implies that the magnitude of these impacts will be important as innovation and other forms of competition will frequently be inversely correlated as they relate to consumer welfare. Thus, weighing the magnitudes of opposing effects will be essential to most policy decisions relating to innovation. Again, at this stage, economic theory does not provide a reliable basis for predicting the conditions under which welfare gains associated with greater product market competition resulting from some regulatory intervention will outweigh losses associated with reduced innovation.

* * *

In sum, the theoretical and empirical literature reveals an undeniably complex interaction between product market competition, patent rules, innovation, and consumer welfare. While these complexities are well understood, in our view, their implications for the debate about the appropriate scale and form of regulation of innovation are not.

Along the most important dimensions, while our knowledge has expanded since 1972, the problem has not disappeared — and it may only have magnified. As Tim Muris noted in 2005,

[A] visitor from Mars who reads only the mathematical IO literature could mistakenly conclude that the U.S. economy is rife with monopoly power…. [Meanwhile, Section 2’s] history has mostly been one of mistaken enforcement.

It may not sound like much, but what is needed, what Josh brought to the agency, and what turns out to be absolutely essential to getting it right, is unflagging awareness of and attention to the institutional, political and microeconomic relationships that shape regulatory institutions and regulatory outcomes.

Regulators must do their best to constantly grapple with uncertainty, problems of operationalizing useful theory, and, perhaps most important, the social losses associated with error costs. It is not (just) technicians that the FTC needs; it’s regulators imbued with the “Economic Way of Thinking.” In short, what is needed, and what Josh brought to the Commission, is humility — the belief that, as Coase also wrote, sometimes the best answer is to “do nothing at all.”

The technocratic model of regulation is inconsistent with the regulatory humility required in the face of fast-changing, unexpected — and immeasurably valuable — technological advance. As Virginia Postrel warns in The Future and Its Enemies:

Technocrats are “for the future,” but only if someone is in charge of making it turn out according to plan. They greet every new idea with a “yes, but,” followed by legislation, regulation, and litigation…. By design, technocrats pick winners, establish standards, and impose a single set of values on the future.

For Josh, the first JD/Econ PhD appointed to the FTC,

economics provides a framework to organize the way I think about issues beyond analyzing the competitive effects in a particular case, including, for example, rulemaking, the various policy issues facing the Commission, and how I weigh evidence relative to the burdens of proof and production. Almost all the decisions I make as a Commissioner are made through the lens of economics and marginal analysis because that is the way I have been taught to think.

A representative example will serve to illuminate the distinction between merely using economics and evidence and understanding them — and their limitations.

In his Nielsen/Arbitron dissent Josh wrote:

The Commission thus challenges the proposed transaction based upon what must be acknowledged as a novel theory—that is, that the merger will substantially lessen competition in a market that does not today exist.

[W]e… do not know how the market will evolve, what other potential competitors might exist, and whether and to what extent these competitors might impose competitive constraints upon the parties.

Josh’s straightforward statement of the basis for restraint stands in marked contrast to the majority’s decision to impose antitrust-based limits on economic activity that hasn’t even yet been contemplated. Such conduct is directly at odds with a sensible, evidence-based approach to enforcement, and the economic problems with it are considerable, as Josh also notes:

[I]t is an exceedingly difficult task to predict the competitive effects of a transaction where there is insufficient evidence to reliably answer the[] basic questions upon which proper merger analysis is based.

When the Commission’s antitrust analysis comes unmoored from such fact-based inquiry, tethered tightly to robust economic theory, there is a more significant risk that non-economic considerations, intuition, and policy preferences influence the outcome of cases.

Compare in this regard Josh’s words about Nielsen with Deborah Feinstein’s defense of the majority from such charges:

The Commission based its decision not on crystal-ball gazing about what might happen, but on evidence from the merging firms about what they were doing and from customers about their expectations of those development plans. From this fact-based analysis, the Commission concluded that each company could be considered a likely future entrant, and that the elimination of the future offering of one would likely result in a lessening of competition.

Instead of requiring rigorous economic analysis of the facts, couched in an acute awareness of our necessary ignorance about the future, for Feinstein the FTC fulfilled its obligation in Nielsen by considering the “facts” alone (not economic evidence, mind you, but customer statements and expressions of intent by the parties) and then, at best, casually applying to them the simplistic, outdated structural presumption — the conclusion that increased concentration would lead inexorably to anticompetitive harm. Her implicit claim is that all the Commission needed to know about the future was what the parties thought about what they were doing and what (hardly disinterested) customers thought they were doing. This shouldn’t be nearly enough.

Worst of all, Nielsen was “decided” with a consent order. As Josh wrote, strongly reflecting the essential awareness of the broader institutional environment that he brought to the Commission:

[w]here the Commission has endorsed by way of consent a willingness to challenge transactions where it might not be able to meet its burden of proving harm to competition, and which therefore at best are competitively innocuous, the Commission’s actions may alter private parties’ behavior in a manner that does not enhance consumer welfare.

Obviously in this regard his successful effort to get the Commission to adopt a UMC enforcement policy statement is a most welcome development.

In short, Josh is to be applauded not because he brought economics to the Commission, but because he brought the economic way of thinking. Such a thing is entirely too rare in the modern administrative state. Josh’s tenure at the FTC was relatively short, but he used every moment of it to assiduously advance his singular, and essential, mission. And, to paraphrase the last line of the movie The Right Stuff (it helps to have the rousing film score playing in the background as you read this): “for a brief moment, [Josh Wright] became the greatest [regulator] anyone had ever seen.”

I would like to extend my thanks to everyone who participated in this symposium. The contributions here will stand as a fitting and lasting tribute to Josh and his legacy at the Commission. And, of course, I’d also like to thank Josh for a tenure at the FTC very much worth honoring.


totmauthor —  27 August 2015

by Michael Baye, Bert Elwert Professor of Business at the Kelley School of Business, Indiana University, and former Director of the Bureau of Economics, FTC

Imagine a world where competition and consumer protection authorities base their final decisions on scientific evidence of potential harm. Imagine a world where well-intentioned policymakers do not use “possibility theorems” to rationalize decisions that are, in reality, based on idiosyncratic biases or beliefs. Imagine a world where “harm” is measured using a scientific yardstick that accounts for the economic benefits and costs of attempting to remedy potentially harmful business practices.

Many economists—conservatives and liberals alike—have the luxury of pondering this world in the safe confines of ivory towers; they publish in journals read by a like-minded audience that also relies on the scientific method.

Congratulations and thanks, Josh, for superbly articulating these messages in the more relevant—but more hostile—world outside of the ivory tower.

To those of you who might disagree with a few (or all) of Josh’s decisions, I challenge you to examine honestly whether your views on a particular matter are based on objective (scientific) evidence, or on your personal, subjective beliefs. Evidence-based policymaking can be discomforting: It sometimes induces those with philosophical biases in favor of intervention to make laissez-faire decisions, and it sometimes induces people with a bias for non-intervention to make decisions to intervene.

by Berin Szoka, President, TechFreedom

Josh Wright will doubtless be remembered for transforming how the FTC polices competition. Between finally defining Unfair Methods of Competition (UMC), and his twelve dissents and multiple speeches about competition matters, he re-grounded competition policy in the error-cost framework: weighing not only costs against benefits, but also the likelihood of getting it wrong against the likelihood of getting it right.

Yet Wright may be remembered as much for what he started as what he finished: reforming the Commission’s Unfair and Deceptive Acts and Practices (UDAP) work. His consumer protection work is relatively slender: four dissents on high tech matters plus four relatively brief concurrences and one dissent on more traditional advertising substantiation cases. But together, these offer all the building blocks of an economic, error-cost-based approach to consumer protection. All that remains is for another FTC Commissioner to pick up where Wright left off.

Apple: Unfairness & Cost-Benefit Analysis

In January 2014, Wright issued a blistering, 17-page dissent from the Commission’s decision to bring, and settle, an enforcement action against Apple regarding the design of its app store. Wright dissented, not necessarily from the conclusion, but from the methodology by which the Commission arrived at it. In essence, he argued for an error-cost approach to unfairness:

The Commission, under the rubric of “unfair acts and practices,” substitutes its own judgment for a private firm’s decisions as to how to design its product to satisfy as many users as possible, and requires a company to revamp an otherwise indisputably legitimate business practice. Given the apparent benefits to some consumers and to competition from Apple’s allegedly unfair practices, I believe the Commission should have conducted a much more robust analysis to determine whether the injury to this small group of consumers justifies the finding of unfairness and the imposition of a remedy.

…. although Apple’s allegedly unfair act or practice has harmed some consumers, I do not believe the Commission has demonstrated the injury is substantial. More importantly, any injury to consumers flowing from Apple’s choice of disclosure and billing practices is outweighed considerably by the benefits to competition and to consumers that flow from the same practice.

The majority insisted that the burden on consumers or Apple from its remedy “is de minimis,” and therefore “it was unnecessary for the Commission to undertake a study of how consumers react to different disclosures before issuing its complaint against Apple, as Commissioner Wright suggests.”

Wright responded: “Apple has apparently determined that most consumers do not want to experience excessive disclosures or to be inconvenienced by having to enter their passwords every time they make a purchase.” In essence, he argued that the FTC should not presume to know better than Apple how to manage the subtle trade-offs between convenience and usability.

Wright was channeling Hayek’s famous quip: “The curious task of economics is to demonstrate to men how little they really know about what they imagine they can design.” The last thing the FTC should be doing is designing digital products — even by hovering over Apple’s shoulder.

The Data Broker Report

Wright next took the Commission to task for the lack of economic analysis in its May 2014 report, “Data Brokers: A Call for Transparency and Accountability.” In just four footnotes, Wright extended his analysis of Apple. For example:

Footnote 85: Commissioner Wright agrees that Congress should consider legislation that would provide for consumer access to the information collected by data brokers. However, he does not believe that at this time there is enough evidence that the benefits to consumers of requiring data brokers to provide them with the ability to opt out of the sharing of all consumer information for marketing purposes outweighs the costs of imposing such a restriction. Finally… he believes that the Commission should engage in a rigorous study of consumer preferences sufficient to establish that consumers would likely benefit from such a portal prior to making such a recommendation.

Footnote 88: Commissioner Wright believes that in enacting statutes such as the Fair Credit Reporting Act, Congress undertook efforts to balance [costs and benefits]. In the instant case, Commissioner Wright is wary of extending FCRA-like coverage to other uses and categories of information without first performing a more robust balancing of the benefits and costs associated with imposing these requirements

The Internet of Things Report

This January, in a 4-page dissent from the FTC’s staff report on “The Internet of Things: Privacy and Security in a Connected World,” Wright lamented that the report neither represented serious economic analysis of the issues discussed nor synthesized the FTC’s workshop on the topic:

A record that consists of a one-day workshop, its accompanying public comments, and the staff’s impressions of those proceedings, however well-intended, is neither likely to result in a representative sample of viewpoints nor to generate information sufficient to support legislative or policy recommendations.

His attack on the report’s methodology was blistering:

The Workshop Report does not perform any actual analysis whatsoever to ensure that, or even to give a rough sense of the likelihood that the benefits of the staff’s various proposals exceed their attendant costs. Instead, the Workshop Report merely relies upon its own assertions and various surveys that are not necessarily representative and, in any event, do not shed much light on actual consumer preferences as revealed by conduct in the marketplace…. I support the well-established Commission view that companies must maintain reasonable and appropriate security measures; that inquiry necessitates a cost-benefit analysis. The most significant drawback of the concepts of “security by design” and other privacy-related catchphrases is that they do not appear to contain any meaningful analytical content.


Nomi: Deception & Materiality Analysis

In April, Wright turned his analytical artillery from unfairness to deception, long the less controversial half of UDAP. In a five-page dissent, Wright accused the Commission of essentially dispensing with the core limiting principle of the 1983 Deception Policy Statement: materiality. As Wright explained:

The materiality inquiry is critical because the Commission’s construct of “deception” uses materiality as an evidentiary proxy for consumer injury…. Deception causes consumer harm because it influences consumer behavior — that is, the deceptive statement is one that is not merely misleading in the abstract but one that causes consumers to make choices to their detriment that they would not have otherwise made. This essential link between materiality and consumer injury ensures the Commission’s deception authority is employed to deter only conduct that is likely to harm consumers and does not chill business conduct that makes consumers better off.

As in Apple, Wright did not argue that there might not be a role for the FTC; merely that the FTC had failed to justify bringing, let alone settling, an enforcement action without establishing that the key promise at issue — to provide in-store opt-out — was material.

The Chamber Speech: A Call for Economic Analysis

In May, Wright gave a speech to the Chamber of Commerce on “How to Regulate the Internet of Things Without Harming its Future: Some Do’s and Don’ts”:

Perhaps it is because I am an economist who likes to deal with hard data, but when it comes to data and privacy regulation, the tendency to rely upon anecdote to motivate policy is a serious problem. Instead of developing a proper factual record that documents cognizable and actual harms, regulators can sometimes be tempted merely to explore anecdotal and other hypothetical examples and end up just offering speculations about the possibility of harm.

And on privacy in particular:

What I have seen instead is what appears to be a generalized apprehension about the collection and use of data — whether or not the data is actually personally identifiable or sensitive — along with a corresponding, and arguably crippling, fear about the possible misuse of such data.  …. Any sensible approach to regulating the collection and use of data will take into account the risk of abuses that will harm consumers. But those risks must be weighed with as much precision as possible, as is the case with potential consumer benefits, in order to guide sensible policy for data collection and use. The appropriate calibration, of course, turns on our best estimates of how policy changes will actually impact consumers on the margin….

Wright concedes that the “vast majority of work that the Consumer Protection Bureau performs simply does not require significant economic analysis because they involve business practices that create substantial risk of consumer harm but little or nothing in the way of consumer benefits.” Yet he notes that the Internet has made the need for cost-benefit analysis far more acute, at least where conduct is ambiguous as to its effects on consumers, as in Apple, to avoid “squelching innovation and depriving consumers of these benefits.”

The Wrightian Reform Agenda for UDAP Enforcement

Wright left all the building blocks his successor will need to bring “Wrightian” reform to how the Bureau of Consumer Protection works:

  1. Wright’s successor should work to require economic analysis for consent decrees, as Wright proposed in his last major address as a Commissioner. BE might not need to issue a statement at all in run-of-the-mill deception cases, but it should certainly have to say something about unfairness cases.
  2. The FTC needs to systematically assess its enforcement process to understand the incentives causing companies to settle UDAP cases nearly every time — resulting in what Chairman Ramirez and Commissioner Brill frequently call the FTC’s “common law of consent decrees.”
  3. As Wright says in his Nomi dissent, “While the Act does not set forth a separate standard for accepting a consent decree, I believe that threshold should be at least as high as for bringing the initial complaint.” This point should be uncontroversial, yet the Commission has never addressed it. Wright’s successor (and the FTC) should, at a minimum, propose a standard for settling cases.
  4. Just as Josh succeeded in getting the FTC to issue a UMC policy statement, his successor should re-assess the FTC’s two UDAP policy statements. Wright’s successor needs to make the case for finally codifying the Deception Policy Statement — and ensuring that the FTC stops bypassing materiality, as in Nomi.
  5. The Commission should develop a rigorous methodology for each of the required elements of unfairness and deception to justify bringing cases (or making report recommendations). This will be a great deal harder than merely attacking the lack of such methodology in dissents.
  6. The FTC has, in recent years, increasingly used reports to make de facto policy — by inventing what Wright calls, in his Chamber speech, “slogans and catchphrases” like “privacy by design,” and then using them as boilerplate requirements for consent decrees; by pressuring companies into adopting the FTC’s best practices; by calling for legislation; and so on. At a minimum, these reports must be grounded in careful economic analysis.
  7. The Commission should apply far greater rigor in setting standards for substantiating claims about health benefits. In two dissents, Genelink et al and HCG Platinum, Wright demolished arguments for a clear, bright line requiring two randomized clinical trials, and made the case for “a more flexible substantiation requirement” instead.

Conclusion: Big Shoes to Fill

It’s a testament to Wright’s analytical clarity that he managed to say so much about consumer protection in so few words. That his UDAP work has received so little attention, relative to his competition work, says just as much about the far greater need for someone to do for consumer protection what Wright did for competition enforcement and policy at the FTC.

Wright’s successor, if she’s going to finish what Wright started, will need something approaching Wright’s sheer intellect, his deep internalization of the error-costs approach, and his knack for brokering bipartisan compromise around major issues — plus the kind of passion for UDAP matters Wright had for competition matters. And, of course, that person needs to be able to continue his legacy on competition matters…

Compared to the difficulty of finding that person, actually implementing these reforms may be the easy part.