Section 5(a)(2) of the Federal Trade Commission (FTC) Act authorizes the FTC to “prevent persons, partnerships, or corporations, except . . . common carriers subject to the Acts to regulate commerce . . . from using unfair methods of competition in or affecting commerce and unfair or deceptive acts or practices in or affecting commerce.”  On August 29, in FTC v. AT&T, the Ninth Circuit issued a decision that exempts non-common carrier data services from FTC jurisdiction merely because they are offered by a company that has common carrier status.  This case involved an FTC allegation that AT&T had “throttled” data (slowed down Internet service) for “unlimited mobile data” customers without adequate consent or disclosures, in violation of Section 5 of the FTC Act.  The FTC had claimed that although AT&T’s mobile wireless voice services were a common carrier service, the company’s mobile wireless data services were not, and, thus, were subject to FTC oversight.  Reversing a federal district court’s refusal to grant AT&T’s motion to dismiss, the Ninth Circuit concluded that “when Congress used the term ‘common carrier’ in the FTC Act, [there is no indication] it could only have meant ‘common carrier to the extent engaged in common carrier activity.’”  The Ninth Circuit therefore determined that “a literal reading of the words Congress selected simply does not comport with [the FTC’s] activity-based approach.”  This decision could affect the FTC’s pending case against AT&T in the Northern District of California (which is within the Ninth Circuit) regarding alleged unfair and deceptive advertising of satellite services by AT&T subsidiary DIRECTV (see here).

The Ninth Circuit’s AT&T holding threatens to further extend the FCC’s jurisdictional reach at the expense of the FTC.  It comes on the heels of the divided D.C. Circuit’s benighted and ill-reasoned decision (see here) upholding the FCC’s “Open Internet Order,” including its decision to reclassify Internet broadband service as a common carrier service.  That decision subjects broadband service to heavy-handed and costly FCC “consumer protection” regulation, including in the area of privacy.  The FCC’s overly intrusive approach stands in marked contrast to the economic efficiency considerations (albeit not always perfectly applied) that underlie the FTC’s mode of consumer protection analysis.  As I explained in a May 2015 Heritage Foundation Legal Memorandum, the FTC’s highly structured, analytic, fact-based methodology, combined with its vast experience in privacy and data security investigations, makes it a far better candidate than the FCC to address competition and consumer protection problems in the area of broadband.

I argued in this space in March 2016 that, should the D.C. Circuit uphold the FCC’s Open Internet Order, Congress should carefully consider whether to strip the FCC of regulatory authority in this area (including, of course, privacy practices) and reassign it to the FTC.  The D.C. Circuit’s decision upholding that Order, combined with the Ninth Circuit’s latest ruling, makes the case for potential action by the next Congress even more urgent.

While it is at it, the next Congress should also weigh whether to repeal the FTC’s common carrier exemption, as well as all special exemptions for specified categories of institutions, such as banks, savings and loans, and federal credit unions (see here).  In so doing, Congress might also do away with the Consumer Financial Protection Bureau, an unaccountable bureaucracy whose consumer protection regulatory responsibilities should cease (see my February 2016 Heritage Legal Memorandum here).

Finally, as Heritage Foundation scholars have urged, Congress should look into enacting additional regulatory reform legislation, such as requiring congressional approval of new major regulations issued by agencies (including financial services regulators) and subjecting “independent” agencies (including the FCC) to executive branch regulatory review.

That’s enough for now.  Stay tuned.

Today ICLE released a white paper entitled, A critical assessment of the latest charge of Google’s anticompetitive bias from Yelp and Tim Wu.

The paper is a comprehensive response to a study by Michael Luca, Timothy Wu, Sebastian Couvidat, Daniel Frank, & William Seltzer, entitled, Is Google degrading search? Consumer harm from Universal Search.

The Wu, et al. paper will be one of the main topics of discussion at today’s Capitol Forum and George Washington Institute of Public Policy event on Dominant Platforms Under the Microscope: Policy Approaches in the US and EU, at which I will be speaking — along with a host of luminaries including Josh Wright, Jonathan Kanter, Allen Grunes, Catherine Tucker, and Michael Luca — one of the authors of the Universal Search study.

Follow the link above to register — the event starts at noon today at the National Press Club.

Meanwhile, here’s a brief description of our paper:

Late last year, Tim Wu of Columbia Law School (and now the White House Office of Management and Budget), Michael Luca of Harvard Business School (and a consultant for Yelp), and a group of Yelp data scientists released a study claiming that Google has been purposefully degrading search results from its more-specialized competitors in the area of local search. The authors’ claim is that Google is leveraging its dominant position in general search to thwart competition from specialized search engines by favoring its own, less-popular, less-relevant results over those of its competitors:

To improve the popularity of its specialized search features, Google has used the power of its dominant general search engine. The primary means for doing so is what is called the “universal search” or the “OneBox.”

This is not a new claim, and researchers have been attempting (and failing) to prove Google’s “bias” for some time. Likewise, these critics have drawn consistent policy conclusions from their claims, asserting that antitrust violations lie at the heart of the perceived bias. But the studies are systematically marred by questionable methodology and bad economics.

This latest study by Tim Wu, along with a cadre of researchers employed by Yelp (one of Google’s competitors and one of its chief antitrust provocateurs), fares no better, employing slightly different but equally questionable methodology, bad economics, and a smattering of new, but weak, social science. (For a thorough criticism of the inherent weaknesses of Wu et al.’s basic social science methodology, see Miguel de la Mano, Stephen Lewis, and Andrew Leyden, Focus on the Evidence: A Brief Rebuttal of Wu, Luca, et al (2016), available here).

The basic thesis of the study is that Google purposefully degrades its local searches (e.g., for restaurants, hotels, services, etc.) to the detriment of its specialized search competitors, local businesses, consumers, and even Google’s bottom line — and that this is an actionable antitrust violation.

But in fact the study shows nothing of the kind. Instead, the study is marred by methodological problems that, in the first instance, make it impossible to draw any reliable conclusions. Nor does the study show that Google’s conduct creates any antitrust-relevant problems. Rather, the construction of the study and the analysis of its results reflect a superficial and inherently biased conception of consumer welfare that completely undermines the study’s purported legal and economic conclusions.

Read the whole thing here.

As Commissioner Wheeler moves forward with his revised set-top box proposal, and on the eve of tomorrow’s Senate FCC oversight hearing, we would do well to reflect on some insightful testimony regarding another of the Commission’s rulemakings from ten years ago:

We are living in a digital golden age and consumers… are the beneficiaries. Consumers have numerous choices for buying digital content and for buying devices on which to play that content. They have never had so much flexibility and so much opportunity.

* * *

As the content industry has ramped up on-line delivery of content, it has been testing a variety of protection measures that provide both security for the industry and flexibility for consumers.

So to answer the question, can content protection and technological innovation coexist?  It is a resounding yes. Look at the robust market for on-line content distribution facilitated by the technologies and networks consumers love.

* * *

[T]he Federal Communications Commission should not become the Federal Computer Commission or the Federal Copyright Commission, and the marketplace, not the Government, is the best arbiter of what technologies succeed or fail.

That’s not the self-interested testimony of a studio or cable executive — that was Gigi Sohn, current counsel to Chairman Wheeler, speaking on behalf of Public Knowledge in 2006 before the House Energy and Commerce Committee against the FCC’s “broadcast flag” rules. Those rules, supported by a broad spectrum of rightsholders, required consumer electronics devices to respect programming conditions preventing the unauthorized transmission over the internet of digital broadcast television content.

Ms. Sohn and Public Knowledge won that fight in court, convincing the D.C. Circuit that Congress hadn’t given the FCC authority to impose the rules in the first place, and she successfully urged Congress not to give the FCC the authority to reinstate them.

Yet today, she and the Chairman seem to have forgotten her crucial insights from ten years ago. If the marketplace for video content was sufficiently innovative and competitive then, how can it possibly not be so now, with audiences having orders of magnitude more choices, both online and off? And if the FCC lacked authority to adopt copyright-related rules then, how does the FCC suddenly have that authority now, in the absence of any intervening congressional action?

With Section 106 of the Copyright Act, Congress granted copyright holders the exclusive rights to engage in or license the reproduction, distribution, and public performance of their works. The courts are the “backstop,” not the FCC (as Chairman Wheeler would have it), and Section 629 of the Communications Act doesn’t say otherwise. All Section 629 does is direct the FCC to promote a competitive market for devices to access pay-TV services from pay-TV providers. As we noted last week, it very simply doesn’t allow the FCC to interfere with the license arrangements that fill those devices, and, short of explicit congressional direction, the Commission is simply not empowered to interfere with the framework set forth in the Copyright Act.

Chairman Wheeler’s latest proposal has improved on his initial plan by, for example, moving toward an applications-based approach and away from the mandatory disaggregation of content. But it would still arrogate to the FCC the authority to stand up a licensing body for the distribution of content over pay-TV applications; set rules on the terms such licenses must, may, and may not include; and even allow the FCC itself to create terms or the entire license. Such rules would necessarily implicate the extent to which rightsholders are able to control the distribution of their content.

The specifics of the regulations may be different from 2006, but the point is the same: What the FCC could not do in 2006, it cannot do today.

Mylan Pharmaceuticals recently reinvigorated the public outcry over pharmaceutical price increases when news surfaced that the company had raised the price of EpiPens by more than 500% over the past decade and, purportedly, had plans to increase the price even more. The Mylan controversy comes on the heels of several notorious pricing scandals last year. Recall Valeant Pharmaceuticals, which acquired the cardiac drugs Isuprel and Nitropress and then quickly raised their prices by 525% and 212%, respectively. And of course, who can forget Martin Shkreli of Turing Pharmaceuticals, who increased the price of toxoplasmosis treatment Daraprim by 5,000% and then claimed he should have raised the price even higher.

However, one company, pharmaceutical giant Allergan, seems to be taking a different approach to pricing.  Last week, Allergan CEO Brent Saunders condemned the scandalous price increases that have cast suspicion on drug companies and placed the entire industry in the political hot seat.  In an entry on the company’s blog, Saunders issued Allergan’s “social contract with patients,” making several drug pricing commitments to its customers.

Some of the most important commitments Allergan made to its customers include:

  • A promise not to increase prices more than once a year, and to limit price increases to single-digit percentages.
  • A pledge to improve patient access to Allergan medications by enhancing patient assistance programs in 2017.
  • A vow to cooperate with policy makers and payers (including government drug plans, private insurers, and pharmacy benefit managers) to facilitate better access to Allergan products by offering pricing discounts and paying rebates to lower drug costs.
  • An assurance that Allergan will no longer engage in the common industry tactic of dramatically increasing prices for branded drugs nearing patent expiry without cost increases that justify the hikes.
  • A commitment to provide annual updates on how pricing affects Allergan’s business.
  • A pledge to price Allergan products in a way that is commensurate with, or lower than, the value they create.

Saunders also makes several non-pricing pledges to maintain a continuous supply of its drugs, diligently monitor the safety of its products, and appropriately educate physicians about its medicines. He also makes the point that the recent pricing scandals have shifted attention away from the vibrant medical innovation ecosystem that develops new life-saving and life-enhancing drugs. Saunders contends that the focus on pricing by regulators and the public has incited suspicions about this innovation ecosystem: “This ecosystem can quickly fall apart if it is not continually nourished with the confidence that there will be a longer term opportunity for appropriate return on investment in the long R&D journey.”

Policy-makers and the public would be wise to focus on the importance of brand drug innovation. Brand drug companies are largely responsible for pharmaceutical innovation. Since 2000, brand companies have spent over half a trillion dollars on R&D, and they currently account for over 90 percent of the spending on the clinical trials necessary to bring new drugs to market. As a result of this spending, over 550 new drugs have been approved by the FDA since 2000, and another 7,000 are currently in development globally. And this innovation is directly tied to health advances. Empirical estimates of the benefits of pharmaceutical innovation indicate that each new drug brought to market saves 11,200 life-years each year.  Moreover, new drugs save money by reducing doctor visits, hospitalizations, and other medical procedures; ultimately, for every $1 spent on new drugs, total medical spending decreases by more than $7.

But, as Saunders suggests, this innovation depends on drugmakers earning a sufficient return on their investment in R&D. The costs to bring a new drug to market with FDA approval are now estimated at over $2 billion, and only 1 in 10 drugs that begin clinical trials is ever approved by the FDA. Brand drug companies must price a drug not only to recoup that drug’s own costs but also to cover the costs of all the product failures along the way. However, they have a very limited window in which to recoup these costs before generic competition destroys brand profits: within three months of the first generic entry, generics have already captured over 70 percent of the brand drug’s market. Drug companies must be able to price drugs at a level where they can earn profits sufficient to offset their R&D costs and the risk of failures. Failure to cover these costs will slow investment in R&D; drug companies will not spend millions and billions of dollars developing drugs if they cannot recoup the costs of that development.

Yet several recent proposals threaten to control prices in a way that could prevent drug companies from earning a sufficient return on their investment in R&D. Ultimately, we must remember that a social contract involves commitment from all members of a group; it should involve commitments from drug companies to price responsibly, and commitments from the public and policy makers to protect innovation. Hopefully, more drug companies will follow Allergan’s lead and renounce the exorbitant price increases we’ve seen in recent times. But in return, we should all remember that innovation and, in turn, health improvements, depend on drug companies’ profitability.

Imagine if you will… that a federal regulatory agency were to decide that the iPhone ecosystem was too constraining and too expensive; that consumers — who had otherwise voted for iPhones with their dollars — were being harmed by the fact that the platform was not “open” enough.

Such an agency might resolve (on the basis of a very generous reading of a statute), to force Apple to make its iOS software available to any hardware platform that wished to have it, in the process making all of the apps and user data accessible to the consumer via these new third parties, on terms set by the agency… for free.

Difficult as it may be to picture this ever happening, it is exactly the sort of Twilight Zone scenario that FCC Chairman Tom Wheeler is currently proposing with his new set-top box proposal.

Based on the limited information we have so far (a fact sheet and an op-ed), Chairman Wheeler’s new proposal does claw back some of the worst excesses of his initial draft (which we critiqued in our comments and reply comments to that proposal).

But it also appears to reinforce others — most notably the plan’s disregard for the right of content creators to control the distribution of their content. Wheeler continues to dismiss the complex business models, relationships, and licensing terms that have evolved over years of competition and innovation. Instead, he offers a one-size-fits-all “solution” to a “problem” that market participants are already falling over themselves to solve.

Plus ça change…

To begin with, Chairman Wheeler’s new proposal is based on the same faulty premise: that consumers pay too much for set-top boxes, and that the FCC is somehow both prescient enough and Congressionally ordained to “fix” this problem. As we wrote in our initial comments, however,

[a]lthough the Commission asserts that set-top boxes are too expensive, the history of overall MVPD prices tells a remarkably different story. Since 1994, per-channel cable prices including set-top box fees have fallen by 2 percent, while overall consumer prices have increased by 54 percent. After adjusting for inflation, this represents an impressive overall price decrease.

And the fact is that no one buys set-top boxes in isolation; rather, the price consumers pay for cable service includes the ability to access that service. Whether the set-top box fee is broken out on subscribers’ bills or not, the total price consumers pay is unlikely to change as a result of the Commission’s intervention.

As we have previously noted, the MVPD set-top box market is an aftermarket; no one buys set-top boxes without first (or simultaneously) buying MVPD service. And as economist Ben Klein (among others) has shown, direct competition in the aftermarket need not be plentiful for the market to nevertheless be competitive:

Whether consumers are fully informed or uninformed, consumers will pay a competitive package price as long as sufficient competition exists among sellers in the [primary] market.

Engineering the set-top box aftermarket to bring more direct competition to bear may redistribute profits, but it’s unlikely to change what consumers pay.

Stripped of its questionable claims regarding consumer prices and placed in the proper context — in which consumers enjoy more ways to access more video content than ever before — Wheeler’s initial proposal ultimately rested on its promise to “pave the way for a competitive marketplace for alternate navigation devices, and… end the need for multiple remote controls.” Weak sauce, indeed.

He now adds a new promise: that “integrated search” will be seamlessly available for consumers across the new platforms. But just as universal remotes and channel-specific apps on platforms like Apple TV have already made his “multiple remotes” promise a hollow one, so, too, have competitive pressures already begun to deliver integrated search.

Meanwhile, such marginal benefits come with a host of substantial costs, as others have pointed out. Do we really need the FCC to grant itself more powers and create a substantial and coercive new regulatory regime to mandate what the market is already poised to provide?

From ignoring copyright to obliterating copyright

Chairman Wheeler’s first proposal engendered fervent criticism for the impossible position in which it placed MVPDs — of having to disregard, even outright violate, their contractual obligations to content creators.

Commendably, the new proposal acknowledges that contractual relationships between MVPDs and content providers should remain “intact.” Thus, the proposal purports to enable programmers and MVPDs to maintain “their channel position, advertising and contracts… in place.” MVPDs will retain “end-to-end” control of the display of content through their apps, and all contractually guaranteed content protection mechanisms will remain, because the “pay-TV’s software will manage the full suite of linear and on-demand programming licensed by the pay-TV provider.”

But, improved as it is, the new proposal continues to operate in an imagined world where the incredibly intricate and complex process by which content is created and distributed can be reduced to the simplest of terms, dictated by a regulator and applied uniformly across all content and all providers.

According to the fact sheet, the new proposal would “[p]rotect[] copyrights and… [h]onor[] the sanctity of contracts” through a “standard license”:

The proposed final rules require the development of a standard license governing the process for placing an app on a device or platform. A standard license will give device manufacturers the certainty required to bring innovative products to market… The license will not affect the underlying contracts between programmers and pay-TV providers. The FCC will serve as a backstop to ensure that nothing in the standard license will harm the marketplace for competitive devices.

But programming is distributed under a diverse range of contract terms. The only way a single, “standard license” could possibly honor these contracts is by forcing content providers to license all of their content under identical terms.

Leaving aside for a moment the fact that the FCC has no authority whatever to do this, for such a scheme to work, the agency would necessarily have to strip content holders of their right to govern the terms on which their content is accessed. After all, if MVPDs are legally bound to redistribute content on fixed terms, they have no room to permit content creators to freely exercise their rights to specify terms like windowing, online distribution restrictions, geographic restrictions, and the like.

In other words, the proposal simply cannot deliver on its promise that “[t]he license will not affect the underlying contracts between programmers and pay-TV providers.”

But fear not: According to the Fact Sheet, “[p]rogrammers will have a seat at the table to ensure that content remains protected.” Such largesse! One would be forgiven for assuming that the programmers’ (single?) seat will be surrounded by those of other participants — regulatory advocates, technology companies, and others — whose sole objective will be to minimize content companies’ ability to restrict the terms on which their content is accessed.

And we cannot ignore the ominous final portion of the Fact Sheet’s “Standard License” description: “The FCC will serve as a backstop to ensure that nothing in the standard license will harm the marketplace for competitive devices.” Such an arrogation of ultimate authority by the FCC doesn’t bode well for that programmer’s “seat at the table” amounting to much.

Unfortunately, we can only imagine the contours of the final proposal that will describe the many ways by which distribution licenses can “harm the marketplace for competitive devices.” But an educated guess would venture that there will be precious little room for content creators and MVPDs to replicate a large swath of the contract terms they currently employ. “Any content owner can have its content painted any color that it wants, so long as it is black.”

At least we can take solace in the fact that the FCC has no authority to do what Wheeler wants it to do

And, of course, this all presumes that the FCC will be able to plausibly muster the legal authority in the Communications Act to create what amounts to a de facto compulsory licensing scheme.

A single license imposed upon all MVPDs, along with the necessary restrictions this will place upon content creators, does just as much as an overt compulsory license to undermine content owners’ statutory property rights. For every license agreement that would be different than the standard agreement, the proposed standard license would amount to a compulsory imposition of terms that the rights holders and MVPDs would not otherwise have agreed to. And if this sounds tedious and confusing, just wait until the Commission starts designing its multistakeholder Standard Licensing Oversight Process (“SLOP”)….

Unfortunately for Chairman Wheeler (but fortunately for the rest of us), the FCC has neither the legal authority, nor the requisite expertise, to enact such a regime.

Last month, the Copyright Office was clear on this score in its letter to Congress commenting on the Chairman’s original proposal:  

[I]t is important to remember that only Congress, through the exercise of its power under the Copyright Clause, and not the FCC or any other agency, has the constitutional authority to create exceptions and limitations in copyright law. While Congress has enacted compulsory licensing schemes, they have done so in response to demonstrated market failures, and in a carefully circumscribed manner.

Assuming that Section 629 of the Communications Act — the provision that otherwise empowers the Commission to promote a competitive set-top box market — fails to empower the FCC to rewrite copyright law (which is assuredly the case), the Commission will be on shaky ground for the inevitable torrent of lawsuits that will follow the revised proposal.

In fact, this new proposal feels more like an emergency pivot by a panicked Chairman than an actual, well-grounded legal recommendation. While the new proposal improves upon the original, it retains at its core the same ill-informed, ill-advised and illegal assertion of authority that plagued its predecessor.

The Antitrust Division of the U.S. Department of Justice (DOJ) ignored sound law and economics principles in its August 4 decision announcing a new interpretation of the seventy-five-year-old music licensing consent decrees it had entered into separately with the two major American “performing rights organizations” (PROs) — the American Society of Composers, Authors, and Publishers (see ASCAP) and Broadcast Music, Inc. (see BMI).  It also acted in a manner at odds with international practice.  DOJ should promptly rescind its new interpretation and restore the welfare-enhancing licensing flexibility that ASCAP and BMI previously enjoyed.  If DOJ fails to do this, the court overseeing the decrees or Congress should be prepared to act.


ASCAP and BMI contract with music copyright holders to act as intermediaries that provide “blanket” licenses to music users (e.g., television and radio stations, bars, and internet music distributors) for use of their full copyrighted musical repertoires, without the need for song-specific licensing negotiations.  This greatly reduces the transaction costs of arranging for the playing of musical works, benefiting music users, the listening public, and copyright owners (all of whom are assured of at least some compensation for their endeavors).  ASCAP and BMI are big businesses, with each PRO holding licenses to over ten million works and accounting for roughly 45 percent of the domestic music licensing market (90 percent combined).  Because both ASCAP and BMI pool copyrighted songs that could otherwise compete with each other, and both grant users a single-price “blanket license” conveying the rights to play their full set of copyrighted works, the two organizations could be seen as restricting competition among copyrighted works and fixing the prices of copyrighted substitutes – raising serious questions under Section 1 of the Sherman Antitrust Act, which condemns contracts that unreasonably restrain trade.  This led the DOJ to bring antitrust suits against ASCAP and BMI over eighty years ago, which were settled by separate judicially filed consent decrees in 1941.  The decrees imposed a variety of limitations on the two PROs’ licensing practices, aimed at preventing ASCAP and BMI from exercising anticompetitive market power (such as the setting of excessive licensing rates).  The decrees were amended twice over the years, most recently in 2001, to take account of changing market conditions.  The U.S. Supreme Court noted the constraining effect of the decrees in BMI v. CBS (1979), in ruling that the BMI and ASCAP blanket licenses did not constitute per se illegal price fixing.
The Court held, rather, that the licenses should be evaluated on a case-by-case basis under the antitrust “rule of reason,” since the licenses inherently generated great efficiency benefits (“the immediate use of covered compositions, without the delay of prior individual negotiations”) that had to be weighed against potential anticompetitive harms.

The August 4, 2016 DOJ Consent Decree Interpretation

Fast forward to 2014, when DOJ undertook a new review of the ASCAP and BMI decrees, and requested the submission of public comments to aid it in its deliberations.  This review came to an official conclusion two years later, on August 4, 2016, when DOJ decided not to amend the decrees – but announced a decree interpretation that limits ASCAP’s and BMI’s flexibility.  Specifically, DOJ stated that the decrees needed to be “more consistently applied.”  By this, the DOJ meant that BMI and ASCAP should only grant blanket licenses that cover all of the rights to 100 percent of the works in the PROs’ respective catalogs, not licenses that cover only partial interests in those works.  DOJ stated:

Only full-work licensing can yield the substantial procompetitive benefits associated with blanket licenses that distinguish ASCAP’s and BMI’s activities from other agreements among competitors that present serious issues under the antitrust laws.

The New DOJ Interpretation Is Bad as a Matter of Policy

DOJ’s August 4 interpretation rejects industry practice.  Under it, ASCAP and BMI will only be able to offer licenses covering all of the copyright interests in a musical composition, even when the composition is a joint work with multiple fractional owners.  For example, consider a band of five composer-musicians, each of whom has a fractional interest in the copyright covering the band’s new album, which is a joint work.  Previously, each musician was able to offer a partial interest in the joint work to a performing rights organization, reflecting the relative shares of the total copyright interest covering the work.  The organization could offer a partial license, and a user could aggregate different partial licenses in order to cover the whole joint work.

Now, however, under DOJ’s new interpretation, BMI and ASCAP will be prevented from offering partial licenses to that work to users. This may deny the band’s individual members the opportunity to deal profitably with BMI and ASCAP, thereby undermining their ability to receive fair compensation.  As the two PROs have noted, this approach “will cause unnecessary chaos in the marketplace and place unfair financial burdens and creative constraints on songwriters and composers.”  According to ASCAP President Paul Williams, “It is as if the DOJ saw songwriters struggling to stay afloat in a sea of outdated regulations and decided to hand us an anchor, in the form of 100 percent licensing, instead of a life preserver.”  Furthermore, the president and CEO of BMI, Mike O’Neill, stated:  “We believe the DOJ’s interpretation benefits no one – not BMI or ASCAP, not the music publishers, and not the music users – but we are most sensitive to the impact this could have on you, our songwriters and composers.”  These views are bolstered by a January 2016 U.S. Copyright Office report, which concluded that “an interpretation of the consent decrees that would require 100-percent licensing or removal of a work from the ASCAP or BMI repertoire would appear to be fraught with legal and logistical problems, and might well result in a sharp decrease in repertoire available through these [performance rights organizations’] blanket licenses.”  Regrettably, during the decree review period, DOJ ignored the expert opinion of the Copyright Office, as well as the public record comments of numerous publishers and artists (see here, for example) indicating that a 100 percent licensing requirement would depress returns to copyright owners and undermine the creative music industry.

Most fundamentally, DOJ’s new interpretation of the BMI and ASCAP consent decrees involves an abridgment of economic freedom.  It further limits the flexibility of music copyright holders and music users to contract with intermediaries to promote the efficient distribution of music performance rights, in a manner that benefits the listening public while allowing creative artists sufficient compensation for their efforts.  DOJ made no compelling showing that a new consent decree constraint (100 percent licensing only) is needed to promote competition.  Far from promoting competition, DOJ’s new interpretation undermines it.  DOJ micromanagement of copyright licensing by consent decree reinterpretation is a costly new regulatory initiative that reflects a lack of appreciation for intellectual property rights, which incentivize innovation.  In short, DOJ’s latest interpretation of the ASCAP and BMI decrees is terrible policy.

The New DOJ Interpretation Is Bad as a Matter of Law

DOJ’s new interpretation not only is bad policy, it is inconsistent with sound textual construction of the decrees themselves.  As counsel for BMI explained in an August 4 federal court filing (in the Southern District of New York, which oversees the decrees), the BMI decree (and therefore the analogous ASCAP decree as well) does not expressly require 100 percent licensing and does not unambiguously prohibit fractional licensing.  Accordingly, since a consent decree is an injunction, and any activity not expressly required or prohibited thereunder is permitted, fractional shares licensing should be authorized.  DOJ’s new interpretation ignores this principle.  It also is at odds with a report of the U.S. Copyright Office concluding that the BMI consent decree “must be understood to include partial interests in musical works.”  Furthermore, the new interpretation is belied by the fact that the PRO licensing market has developed and functioned efficiently for decades by pricing, collecting, and distributing royalties on a fractional basis.  Courts view such evidence of trade practice and custom as relevant in determining the meaning of a consent decree.


The New DOJ Interpretation Runs Counter to International Norms

Finally, according to Gadi Oron, Director General of the International Confederation of Societies of Authors and Composers (CISAC), a Paris-based organization that brings together 239 rights societies from 123 countries, including ASCAP, BMI, and SESAC, adoption of the new interpretation would depart from international norms in the music licensing industry and have disruptive international effects:

It is clear that the DoJ’s decisions have been made without taking the interests of creators, neither American nor international, into account. It is also clear that they were made with total disregard for the international framework, where fractional licensing is practiced, even if it’s less of a factor because many countries only have one performance rights organization representing songwriters in their territory. International copyright laws grant songwriters exclusive rights, giving them the power to decide who will license their rights in each territory and it is these rights that underpin the landscape in which authors’ societies operate. The international system of collective management of rights, which is based on reciprocal representation agreements and founded on the freedom of choice of the rights holder, would be negatively affected by such level of government intervention, at a time when it needs support more than ever.


In sum, DOJ should take account of these concerns and retract its new interpretation of the ASCAP and BMI consent decrees, restoring the status quo ante.  If it fails to do so, a federal court should be prepared to act, and, if necessary, Congress should seriously consider appropriate corrective legislation.

It’s not quite so simple to spur innovation. Just ask the EU as it resorts to levying punitive retroactive taxes on productive American companies in order to ostensibly level the playing field (among other things) for struggling European startups. Thus it’s truly confusing when groups go on a wholesale offensive against patent rights — one of the cornerstones of American law that has contributed a great deal toward our unparalleled success as an innovative economy.

Take EFF, for instance. The advocacy organization has recently been peddling sample state legislation it calls the “Reclaim Invention Act,” which it claims is targeted at reining in so-called “patent trolls.” Leaving aside potential ulterior motives (like making it impossible to get software patents at all), I am left wondering what EFF actually hopes to achieve.

“Troll” is a scary sounding word, but what exactly is wrapped up in EFF’s definition? According to EFF’s proposed legislation, a “patent assertion entity” (the polite term for “patent troll”) is any entity that primarily derives its income through the licensing of patents – as opposed to actually producing the invention for public consumption. But this is just wrong. As Zorina Khan has noted, the basic premise upon which patent law was constructed in the U.S. was never predicated upon whether an invention would actually be produced:

The primary concern was access to the new information, and the ability of other inventors to benefit from the discovery either through licensing, inventing around the idea, or at expiration of the patent grant. The emphasis was certainly not on the production of goods; in fact, anyone who had previously commercialized an invention lost the right of exclusion vested in patents. The decision about how or whether the patent should be exploited remained completely within the discretion of the patentee, in the same way that the owner of physical property is allowed to determine its use or nonuse.

Patents are property. As with other forms of property, patent holders are free to transfer them to whomever they wish, and are free to license them as they see fit. The mere act of exercising property rights simply cannot be the basis for punitive treatment by the state. And, like it or not, licensing inventions or selling the property rights to an invention is very often how inventors are compensated for their work. Whether one likes the Patent Act in particular or not is irrelevant; as long as we have patents, these are fundamental economic and legal facts.

Further, the view implicit in EFF’s legislative proposal completely ignores the fact that the people or companies that may excel at inventing things (the province of scientists, for example) may not be so skilled at commercializing things (the province of entrepreneurs). Moreover, inventions can be enormously expensive to commercialize. In such cases, it could very well be the most economically efficient result to allow some third party with the requisite expertise or the means to build it, to purchase and manage the rights to the patent, and to allow them to arrange for production of the invention through licensing agreements. Intermediaries are nothing new in society, and, despite popular epithets about “middlemen,” they actually provide a necessary function with respect to mobilizing capital and enabling production.

Granted, some companies will exhibit actual “troll” behavior, but the question is not whether some actors are bad, but whether the whole system overall optimizes innovation and otherwise contributes to greater social welfare. Licensing patents in itself is a benign practice, so long as the companies that manage the patents are not abusive. And, of course, among the entities that engage in patent licensing, one would assume that universities would be the most unobjectionable of all parties.

Thus, it’s extremely disappointing that EFF would choose to single out universities as aiders and abettors of “trolls” — and in so doing recommend punitive treatment. And what EFF recommends is shockingly draconian. It doesn’t suggest that there should be heightened review in IPR proceedings, or that there should be fee shifting or other case-by-case sanctions doled out for unwise partnership decisions. No, according to the model legislation, universities would be outright cut off from government financial aid or other state funding, and any technology transfers would be void, unless they:

determine whether a patent is the most effective way to bring a new invention to a broad user base before filing for a patent that covers that invention[;] … prioritize technology transfer that develops its inventions and scales their potential user base[;] … endeavor to nurture startups that will create new jobs, products, and services[;] … endeavor to assign and license patents only to entities that require such licenses for active commercialization efforts or further research and development[;] … foster agreements and relationships that include the sharing of know-how and practical experience to maximize the value of the assignment or license of the corresponding patents; and … prioritize the public interest in all patent transactions.

Never mind the fact that recent cases like Alice Corp., Octane Fitness, and Highmark — as well as the new inter partes review process — seem to be putting effective downward pressure on frivolous suits (as well as, potentially, non-frivolous suits, for that matter); apparently EFF thinks that putting the screws to universities is what’s needed to finally overcome the (disputed) problems of excessive patent litigation.

Perhaps reflecting that even EFF itself knows that its model legislation is more of a publicity stunt than a serious proposal, most of what it recommends is either so ill-defined as to be useless (e.g., “prioritize public interest in all patent transactions?” What does that even mean?) or is completely mixed up.

For instance, the entire point of a university technology transfer office is that educational institutions and university researchers are not themselves in a position to adequately commercialize inventions. Questions of how large a user base a given invention can reach, or how best to scale products, grow markets, or create jobs are best left to entrepreneurs and business people. The very reason a technology transfer office would license or sell its patents to a third party is to discover these efficiencies.

And if a university engages in a transfer that, upon closer scrutiny, runs afoul of this rather fuzzy bit of legislation, any such transfer will be deemed void.  This means that universities will either have to expend enormous resources to find willing partners, or will spend millions on lawsuits and contract restitution damages.  Enacting these feel-good mandates into state law is at best useless, and most likely a tool for crusading plaintiffs’ attorneys to use to harass universities.

Universities: Don’t you dare commercialize that invention!

As I noted above, it’s really surprising that groups like EFF are going after universities, as their educational mission and general devotion to improving social welfare should make them the darlings of social justice crusaders. However, as public institutions with budgets and tax statuses dependent on political will, universities are both unable to route around organizational challenges (like losing student aid or preferred tax status) and are probably unwilling to engage in wholesale PR defensive warfare for fear of offending a necessary political constituency. Thus, universities are very juicy targets — particularly when they engage in “dirty” commercial activities of any sort, no matter how attenuated.

And lest you think that universities wouldn’t actually be harassed (other than in the abstract by the likes of EFF) over patents, it turns out that it’s happening even now, even without EFF’s proposed law.

For the last five years Princeton University has been locked in a lawsuit with some residents of Princeton, New Jersey who have embarked upon a transparently self-interested play to divert university funds to their own pockets. Their weapon of choice? A challenge to Princeton’s tax-exempt status based on the fact that the school licenses and sells its patented inventions.

The plaintiffs’ core argument in Fields v. Princeton is that the University should be a taxpaying entity because it occasionally generates patent licensing revenues from a small fraction of the research that its faculty conducts in University buildings.

The Princeton case is problematic for a variety of reasons, one of which deserves special attention because it runs squarely up against a laudable federal law that is intended to promote research, development, and patent commercialization.

In the early 1980s Congress passed the Bayh-Dole Act, which made it possible for universities to retain ownership over discoveries made in campus labs. The aim of the law was to encourage essential basic research that had historically been underdeveloped. Previously, the rights to any such federally-funded discoveries automatically became the property of the federal government, which, not surprisingly, put a damper on universities’ incentives to innovate.

When universities collaborate with industry — a major aim of Bayh-Dole — innovation is encouraged, breakthroughs occur, and society as a whole is better off. About a quarter of the top drugs approved since 1981 came from university research, as did many life-changing products we now take for granted, like Google, web browsers, email, cochlear implants and major components of cell phones. Since the passage of the Act, a boom in commercialized patents has yielded billions of dollars of economic activity.

Under the Act innovators are also rewarded: Qualifying institutions like Princeton are required to share royalties with the researchers who make these crucial discoveries.  The University has no choice in the matter; to refuse to share the revenues would constitute a violation of the terms of federal research funding.  But the Fields suit ignores this reality and, in much the same way as EFF’s proposed legislation, will force a stark choice upon Princeton University: engage with industry, increase social utility, and face lawsuits, or keep your head down and your inventions to yourself.

A Hobson’s Choice

Thus, things like the Fields suit and EFF’s proposed legislation are worse than costly distractions for universities; they are major disincentives to the commercialization of university inventions. This may not be the intended consequence of these actions, but it is an entirely predictable one.

Faced with legislation that punishes them for being insufficiently entrepreneurial and suits that attack them for bothering to commercialize at all, universities will have to make a Hobson’s choice: commercialize the small fraction of research that might yield licensing revenues and potentially face massive legal liability, or simply forego commercialization (and much basic research) altogether.

The risk here, obviously, is that research institutions will choose the latter in order to guard against the significant organizational costs that could result from a change in their tax status or a thicket of lawsuits that emerge from voided technology transfers (let alone the risk of losing student aid money).

But this is not what we want as a society. We want the optimal level of invention, innovation, and commercialization. What anti-patent extremists and short-sighted state governments may obtain for us instead, however, is a status quo much like Europe’s, where the legal and regulatory systems perpetually keep innovation on a low simmer.

The American concept of “the rule of law” (see here) is embodied in the Due Process Clause of the Fifth Amendment to the U.S. Constitution, and in the constitutional principles of separation of powers, an independent judiciary, a government under law, and equality of all before the law (see here).  It holds that the executive must comply with the law because ours is “a government of laws, and not of men,” or, as Justice Anthony Kennedy put it in a 2006 address to the American Bar Association, “that the Law is superior to, and thus binds, the government and all its officials.”  (See here.)  More specifically, and consistent with these broader formulations, the late and great legal philosopher Friedrich Hayek wrote that the rule of law “means the government in all its actions is bound by rules fixed and announced beforehand – rules which make it possible to see with fair certainty how the authority will use its coercive powers in given circumstances and to plan one’s individual affairs on the basis of this knowledge.”  (See here.)  In other words, as former Boston University Law School Dean Ron Cass put it, the rule of law involves “a system of binding rules” adopted and applied by a valid government authority that embody “clarity, predictability, and equal applicability.”  (See here.)

Regrettably, by engaging in regulatory overreach and ignoring statutory limitations on the scope of their authority, federal administrative agencies have shown scant appreciation for rule of law restraints under the current administration (see here and here for commentaries on this problem by Heritage Foundation scholars).  Although many agencies could be singled out, the Federal Communications Commission’s (FCC) actions in recent years have been especially egregious (see here).

A prime example of regulatory overreach by the FCC that flouted the rule of law was its promulgation in 2015 of an order preempting state laws in Tennessee and North Carolina that prevented municipally-owned broadband providers from providing broadband service beyond their geographic boundaries (Municipal Broadband Order, see here).  As a matter of substance, this decision ignored powerful economic evidence that municipally-provided broadband services often involve wasteful subsidies for financially troubled government-owned providers that interfere with effective private sector competition and are economically harmful (my analysis is here).  As a legal matter, the Municipal Broadband Order went beyond the FCC’s statutory authority and raised grave constitutional problems, thereby ignoring the constitutional limitations placed on the exercise of governmental powers that lie at the heart of the rule of law (see here).  The Order lacked a sound legal footing in basing its authority on Section 706 of the Telecommunications Act of 1996, which merely authorizes the FCC to promote local broadband competition and investment (a goal which the Order did not advance) and says nothing about preemption.  In addition, the FCC’s invocation of preemption authority trenched upon the power of the states to control their subordinate governmental entities, guaranteed to them by the Constitution as an essential element of their sovereignty in our federal system (see here).  What’s more, the Chattanooga, Tennessee and Wilson, North Carolina municipal broadband systems that had requested FCC preemption imposed content-based restrictions on users of their networks that raised serious First Amendment issues (see here).
Specifically, those systems’ bans on the transmittal of various sorts of “abusive” language appeared to be too broad to withstand First Amendment “strict scrutiny.”  Moreover, by requiring prospective broadband enrollees to agree not to sue their provider as an initial condition of service, two of the municipal systems arguably unconstitutionally coerced users to forgo exercise of their First Amendment rights.

Fortunately, on August 10, 2016, in Tennessee v. FCC, the U.S. Court of Appeals for the Sixth Circuit struck down the Municipal Broadband Order, pithily stating:

The FCC order essentially serves to re-allocate decision-making power between the states and their municipalities. This is shown by the fact that no federal statute or FCC regulation requires the municipalities to expand or otherwise to act in contravention of the preempted state statutory provisions. This preemption by the FCC of the allocation of power between a state and its subdivisions requires at least a clear statement in the authorizing federal legislation. The FCC relies upon § 706 of the Telecommunications Act of 1996 for the authority to preempt in this case, but that statute falls far short of such a clear statement. The preemption order must accordingly be reversed.

The Sixth Circuit’s decision has important policy ramifications that extend beyond the immediate controversy, as Free State Foundation Scholars Randolph May and Seth Cooper explain:

The FCC’s Municipal Broadband Preemption Order would have turned constitutional federalism inside out by severing local political subdivisions’ accountability from the state governments that created them. Had the agency’s order been upheld, the FCC surely would have preempted several other state laws restricting municipalities’ ownership and operation of broadband networks. Several state governments would have been locked into an unwise policy of favoring municipal broadband business ventures with a track record of legal and proprietary conflicts of interest, expensive financial failures, and burdensome debts for local taxpayers.

The avoidance of a series of bad side effects in a corner of the regulatory world is not, however, sufficient grounds for breaking out the champagne.  From a global perspective, the Sixth Circuit’s Tennessee v. FCC decision, while helpful, does not address the broader problem of agency disregard for the limitations of constitutional federalism and the rule of law.  Administrative overreach, like a chronic debilitating virus, saps the initiative of the private sector (and, more generally, the body politic) and undermines its vitality.  In addition, not all federal judges can be counted on to rein in legally unjustified rules (which in any event impose costly delay and uncertainty, even if they are eventually overturned).  What is needed is an administration that emphasizes by word and deed that it is committed to constitutionalist rule of law principles – and insists that its appointees (including commissioners of independent agencies) share that philosophy.  Let us hope that we do not have to wait too long for such an administration.


In recent years, U.S. government policymakers have recounted various alleged market deficiencies associated with patent licensing practices, as part of a call for patent policy “reforms” – with the “reforms” likely to have the effect of weakening patent rights.  In particular, antitrust enforcers have expressed concerns that:  (1) the holder of a patent covering the technology needed to implement some aspect of a technical standard (a “standard-essential patent,” or SEP) could “hold up” producers that utilize the standard by demanding anticompetitively high royalty payments; (2) the accumulation of royalties for multiple complementary patent licenses needed to make a product exceeds the bundled monopoly rate that would be charged if all patents were under common control (“royalty stacking”); (3) an overlapping set of patent rights requiring that producers seeking to commercialize a new technology obtain licenses from multiple patentees deters innovation (“patent thickets”); and (4) the dispersed ownership of complementary patented inventions results in “excess” property rights, the underuse of resources, and economic inefficiency (“the tragedy of the anticommons”).  (See, for example, Federal Trade Commission and U.S. Justice Department reports on antitrust and intellectual property policy, here, here, and here).

Although some commentators have expressed skepticism about the actual real world incidence of these scenarios, relatively little attention has been paid to the underlying economic assumptions that give rise to the “excessive royalty” problem that is portrayed.  Very recently, however, Professor Daniel F. Spulber of Northwestern University circulated a paper that questions those assumptions.  The paper points out that claims of economic harm due to excessive royalty charges critically rest on the assumption that individual patent owners choose royalties using posted prices, thereby generating total royalties that are above the monopoly level that would be charged for all complementary patents if they were owned in common.  In other words, it is assumed that interdependencies among complements are ignored, with individual patent monopoly prices being separately charged – the “Cournot complements” problem.
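The posted-price assumption can be made concrete with a small worked example.  The sketch below is purely illustrative and is not drawn from Professor Spulber’s paper: it assumes a simple linear demand curve and two symmetric holders of complementary patents (parameter values chosen arbitrarily), and computes the standard Cournot-complements result that independently posted royalties stack above what a single owner of both patents would charge.

```python
# Illustrative sketch of the Cournot complements problem (assumed linear
# demand Q = a - b * (p1 + p2); parameters are hypothetical).

def cournot_complements_total(a: float, b: float) -> float:
    """Total royalty when each of two complementary patentees posts its own
    price, taking the other's as given.  Holder i maximizes
    p_i * (a - b*(p_i + p_j)); the first-order condition gives
    p_i = (a - b*p_j) / (2b), and the symmetric equilibrium is
    p_i = a / (3b), so total royalties are 2a / (3b)."""
    return 2 * a / (3 * b)

def bundled_monopoly_total(a: float, b: float) -> float:
    """Royalty a single owner of both patents would charge:
    maximize P * (a - b*P), giving P = a / (2b)."""
    return a / (2 * b)

if __name__ == "__main__":
    a, b = 12.0, 1.0  # hypothetical demand parameters
    stacked = cournot_complements_total(a, b)
    bundled = bundled_monopoly_total(a, b)
    print(f"posted-price total royalty: {stacked}")   # 8.0
    print(f"bundled-monopoly royalty:   {bundled}")   # 6.0
    # Independently posted prices stack above the bundled monopoly rate --
    # the "excessive royalty" assumption the paper calls into question.
    assert stacked > bundled
```

With these numbers, the two posted-price patentees collect a combined royalty of 8 while an integrated owner would charge only 6 (and license a larger output); this is the inefficiency that the government’s “excessive royalty” narratives presuppose, and that bargaining over long-term contracts avoids.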

In reality, however, as Professor Spulber explains, patent licensing usually involves bargaining rather than posted prices, because such licensing involves long-term contractual relationships between patentees and producers, rather than immediate exchange.  Significantly, the paper shows that bargaining procedures reflecting long-term relationships maximize the joint profits of inventors (patentees) and producers, with licensing royalties being less than (as opposed to more than under posted prices) bundled monopoly royalties.  In short, bargaining over long-term patent licensing contracts yields an efficient market outcome, in marked contrast to the inefficient outcome posited by those who (wrongly) assume patent licensing under posted prices.  In other words, real world patent holders (as opposed to the inward-looking, non-cooperative, posted-price patentees of government legend) tend to engage in highly fruitful licensing negotiations that yield socially efficient outcomes.  This finding neatly explains why examples of economically-debilitating patent thickets, royalty stacks, hold-ups, and patent anti-commons, like unicorns (or perhaps, to be fair, black swans), are amazingly hard to spot in the real world.  It also explains why the business sector that should in theory be most prone to such “excessive patent” problems, the telecommunications industry (which involves many different patentees and producers, and tens of thousands of patents), has been (and remains) a leader in economic growth and innovation.  (See also here, for an article explaining that smartphone innovation has soared because of the large number of patents.)

Professor Spulber’s concluding section highlights the policy implications of his research:

The efficiency of the bargaining outcome differs from the outcome of the Cournot posted prices model. Understanding the role of bargaining helps address a host of public policy concerns, including SEP holdup, royalty stacking, patent thickets, the tragedy of the anticommons, and justification for patent pools. The efficiency of the bargaining outcome suggests the need for antitrust forbearance toward industries that combine multiple inventions, including SEPs.

Professor Spulber’s reference to “antitrust forbearance” is noteworthy.  As I have previously pointed out (see, for example, here, here, and here), in recent years U.S. antitrust enforcers have taken positions that tend to favor the weakening of patent rights.  Those positions are justified by the “patent policy problems” that Professor Spulber’s paper debunks, as well as an emphasis on low quality “probabilistic patents” (see, for example, here) that ignores a growing body of literature (both theoretical and empirical) on the economic benefits of a strong patent system (see, for example, here and here).

In sum, Professor Spulber’s impressive study is one more piece of compelling evidence that the federal government’s implicitly “anti-patent” positions are misguided.  The government should reject those positions and restore its previous policy of respect for robust patent rights – a policy that promotes American innovation and economic growth.


While Professor Spulber’s long paper is well worth a careful read, key italicized excerpts from his debunking of prominent “excessive patent” stories are set forth below.

SEP Holdups

Standard Setting Organizations (SSOs) are voluntary organizations that establish and disseminate technology standards for industries. Patent owners may declare that their patents are essential to manufacturing products that conform to the standard. Many critics of SSOs suggest that inclusion of SEPs in technology standards allows patent owners to charge much higher royalties than if the SEPs were not included in the standard. SEPs are said to cause a form of “holdup” if producers using the patented technology would incur high costs of switching to alternative technologies. . . . [Academic] discussions of the effects of SEPs [summarized by the author] depend on patent owners choosing royalties using posted prices, generating total royalties above the bundled monopoly level. When IP owners and producers engage in bargaining, the present analysis suggests that total royalties will be less than the bundled monopoly level. Efficiencies in choosing licensing royalties should mitigate concerns about the effects of SEPs on total royalties when patent licensing involves bargaining. The present analysis further suggests bargaining should reduce or eliminate concerns about SEP “holdup”. Efficiencies in choosing patent licensing royalties also should help mitigate concerns about whether or not SSOs choose efficient technology standards.

Royalty Stacking

“Royalty stacking” refers to the situation in which total royalties are excessive in comparison to some benchmark, typically the bundled monopoly rate. . . . The present analysis shows that the perceived royalty stacking problem is due to the posted prices assumption in Cournot’s model. . . . The present analysis shows that royalty stacking need not occur with different market institutions, notably bargaining between IP owners and producers. In particular, with non-cooperative licensing offers and negotiation of royalty rates between IP owners and producers, total royalties will be less than the royalties chosen by a bundled monopoly IP owner. The result that total royalties are less than the bundled monopoly benchmark holds even if there are many patented inventions. Total royalties are less than the benchmark with innovative complements and substitutes.

Patent Thickets

The patent thickets view considers patents as deterrents to innovation. This view differs substantially from the view that patents function as property rights that stimulate innovation. . . . The bargaining analysis presented here suggests that multiple patents should not be viewed as deterring innovation. Multiple inventors can coordinate with producers through market transactions. This means that by making licensing offers to producers and negotiating patent royalties, inventors and producers can achieve efficient outcomes. There is no need for government regulation to restrict the total number of patents. Arbitrarily limiting the total number of patents by various regulatory mechanisms would likely discourage invention and innovation.

Tragedy of the Anticommons

The “Tragedy of the Anticommons” describes the situation in which dispersed ownership of complementary inventions results in underuse of resources[.] . . . . The present analysis shows that patents need not create excess property rights when there is bargaining between IP owners and producers. Bargaining results in a total output that maximizes the joint returns of inventors and producers. Social welfare and final output are greater with bargaining than in Cournot’s posted prices model. This contradicts the “Tragedy of the Anticommons” result and shows that there need not be underutilization of resources due to high royalties.
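The posted-prices royalty-stacking result the excerpt contrasts with bargaining can be illustrated with a back-of-the-envelope Cournot-complements calculation. The sketch below is mine, not the author's: it assumes a hypothetical linear demand curve (Q = a − p) and symmetric patent holders, with all parameter values chosen purely for illustration.

```python
# Illustrative Cournot-complements royalty calculation.
# Assumptions (not from the quoted paper): linear demand Q = a - p,
# constant production cost c, and n symmetric complementary patent holders.

def posted_price_total_royalty(a, c, n):
    """Total royalty in the symmetric Nash equilibrium when n complementary
    patent holders each post a per-unit royalty independently.
    Holder i maximizes r_i * (a - c - r_i - (others' royalties)),
    giving the symmetric solution r = (a - c) / (n + 1)."""
    return n * (a - c) / (n + 1)

def bundled_monopoly_royalty(a, c):
    """Total royalty a single IP owner holding all the patents would choose:
    maximize R * (a - c - R), giving R = (a - c) / 2."""
    return (a - c) / 2

a, c = 100.0, 20.0  # demand intercept and production cost (assumed values)
for n in (1, 2, 5, 10):
    stacked = posted_price_total_royalty(a, c, n)
    bundled = bundled_monopoly_royalty(a, c)
    print(f"n={n:2d}: posted-price total royalty = {stacked:5.1f}, "
          f"bundled monopoly royalty = {bundled:5.1f}")
```

With these assumed numbers, once two or more holders post royalties independently the total exceeds the bundled-monopoly level and rises toward the full margin as n grows — the "stacking" in Cournot's posted-prices setting. The paper's point is that this comparison flips under bargaining, where total royalties fall below the bundled benchmark.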

Copyright law, ever a sore point in some quarters, has found a new field of battle in the FCC’s recent set-top box proposal. At the request of members of Congress, the Copyright Office recently wrote a rather thorough letter outlining its view of the effects of the FCC’s proposal on rightsholders.

In sum, the Copyright Office’s letter was an even-handed look at the proposal, which concluded:

As a threshold matter, it seems critical that any revised proposal respect the authority of creators to manage the exploitation of their copyrighted works through private licensing arrangements, because regulatory actions that undermine such arrangements would be inconsistent with the rights granted under the Copyright Act.

This fairly uncontroversial statement of basic legal principle was met with cries of alarm. And Stanford’s CIS had a post from Affiliated Scholar Annemarie Bridy that managed to trot out breathless comparisons to inapposite legal theories while simultaneously misconstruing the “fair use” doctrine (as well as how copyright law works in the video market, for that matter).

Look out! Lochner is coming!

In its letter the Copyright Office warned the FCC that its proposed rules have the potential to disrupt the web of contracts that underlie cable programming, and by extension, risk infringing the rights of copyright holders to commercially exploit their property. This analysis actually tracks what Geoff Manne and I wrote in both our initial comment and our reply comment to the set-top box proposal.

Yet Professor Bridy seems to believe that, notwithstanding the guarantees of both the Constitution and Section 106 of the Copyright Act, the FCC should have the power to abrogate licensing contracts between rightsholders and third parties.  She believes that

[t]he Office’s view is essentially that the Copyright Act gives right holders not only the limited range of rights enumerated in Section 106 (i.e., reproduction, preparation of derivative works, distribution, public display, and public performance), but also a much broader and more amorphous right to “manage the commercial exploitation” of copyrighted works in whatever ways they see fit and can accomplish in the marketplace, without any regulatory interference from the government.

What in the world does this even mean? A necessary logical corollary of the Section 106 rights includes the right to exploit works commercially as rightsholders see fit. Otherwise, what could it possibly mean to have the right to control the reproduction or distribution of a work? The truth is that Section 106 sets out a general set of rights that inhere in rightsholders with respect to their protected works, and that commercial exploitation is merely a subset of this total bundle of rights.

The ability to contract with other parties over these rights is also a necessary corollary of the property rights recognized in Section 106. After all, the right to exclude implies by necessity the right to include. Which is exactly what a licensing arrangement is.

But wait, there’s more — she actually managed to pull out the Lochner bogeyman to validate her argument!

The Office’s absolutist logic concerning freedom of contract in the copyright licensing domain is reminiscent of the Supreme Court’s now-infamous reasoning in Lochner v. New York, a 1905 case that invalidated a state law limiting maximum working hours for bakers on the ground that it violated employer-employee freedom of contract. The Court in Lochner deprived the government of the ability to provide basic protections for workers in a labor environment that subjected them to unhealthful and unsafe conditions. As Julie Cohen describes it, “‘Lochner’ has become an epithet used to characterize an outmoded, over-narrow way of thinking about state and federal economic regulation; it goes without saying that hardly anybody takes the doctrine it represents seriously.”

This is quite a leap of logic, as there is precious little in common between the letter from the Copyright Office and the Lochner opinion aside from the fact that both contain the word “contracts” in their pages.  Perhaps the most critical problem with Professor Bridy’s analogy is the fact that Lochner was about a legislature interacting with the common law system of contract, whereas the FCC is a body subordinate to Congress, and IP is both constitutionally and statutorily guaranteed. A sovereign may be entitled to interfere with the operation of common law, but an administrative agency does not have the same sort of legal status as a legislature when redefining general legal rights.

The key argument that Professor Bridy offered in support of her belief that the FCC should be free to abrogate contracts at will is that “[r]egulatory limits on private bargains may come in the form of antitrust laws or telecommunications laws or, as here, telecommunications regulations that further antitrust ends.”  However, this completely misunderstands U.S. constitutional doctrine.

In particular, as Geoff Manne and I discussed in our set-top box comments to the FCC, using one constitutional clause to end-run another constitutional clause is generally a no-no:

Regardless of whether or how well the rules effect the purpose of Sec. 629, copyright violations cannot be justified by recourse to the Communications Act. Provisions of the Communications Act — enacted under Congress’s Commerce Clause power — cannot be used to create an end run around limitations imposed by the Copyright Act under the Constitution’s Copyright Clause. “Congress cannot evade the limits of one clause of the Constitution by resort to another,” and thus neither can an agency acting within the scope of power delegated to it by Congress. Establishing a regulatory scheme under the Communications Act whereby compliance by regulated parties forces them to violate content creators’ copyrights is plainly unconstitutional.

Congress is of course free to establish the implementation of the Copyright Act as it sees fit. However, unless Congress itself acts to change that implementation, the FCC — or any other party — is not at liberty to interfere with rightsholders’ constitutionally guaranteed rights.

You Have to Break the Law Before You Raise a Defense

Another bone of contention upon which Professor Bridy gnaws is a concern that licensing contracts will abrogate an alleged right to “fair use” by making the defense harder to muster:  

One of the more troubling aspects of the Copyright Office’s letter is the length to which it goes to assert that right holders must be free in their licensing agreements with MVPDs to bargain away the public’s fair use rights… Of course, the right of consumers to time-shift video programming for personal use has been enshrined in law since Sony v. Universal in 1984. There’s no uncertainty about that particular fair use question—none at all.

The major problem with this reasoning (notwithstanding the somewhat misleading drafting of Section 107) is that “fair use” is not an affirmative right, it is an affirmative defense. Despite claims that “fair use” is a right, the Supreme Court has noted on at least two separate occasions (1, 2) that Section 107 was “structured… [as]… an affirmative defense requiring a case-by-case analysis.”

Moreover, important as the Sony case is, it does not establish that “[t]here’s no uncertainty about [time-shifting as a] fair use question—none at all.” What it actually establishes is that, given the facts of that case, time-shifting was a fair use. Not for nothing does the Sony Court note at the outset of its opinion that

An explanation of our rejection of respondents’ unprecedented attempt to impose copyright liability upon the distributors of copying equipment requires a quite detailed recitation of the findings of the District Court.

But more generally, the Sony doctrine stands for the proposition that:

“The limited scope of the copyright holder’s statutory monopoly, like the limited copyright duration required by the Constitution, reflects a balance of competing claims upon the public interest: creative work is to be encouraged and rewarded, but private motivation must ultimately serve the cause of promoting broad public availability of literature, music, and the other arts. The immediate effect of our copyright law is to secure a fair return for an ‘author’s’ creative labor. But the ultimate aim is, by this incentive, to stimulate artistic creativity for the general public good. ‘The sole interest of the United States and the primary object in conferring the monopoly,’ this Court has said, ‘lie in the general benefits derived by the public from the labors of authors.’ Fox Film Corp. v. Doyal, 286 U. S. 123, 286 U. S. 127. See Kendall v. Winsor, 21 How. 322, 62 U. S. 327-328; Grant v. Raymond, 6 Pet. 218, 31 U. S. 241-242. When technological change has rendered its literal terms ambiguous, the Copyright Act must be construed in light of this basic purpose.” Twentieth Century Music Corp. v. Aiken, 422 U. S. 151, 422 U. S. 156 (1975) (footnotes omitted).

In other words, courts must balance competing interests to maximize “the general benefits derived by the public,” subject to technological change and other criteria that might shift that balance in any particular case.  

Thus, even as an affirmative defense, nothing is guaranteed. The court will have to walk through a balancing test, and only after that point, and if the accused party’s behavior has not tipped the scales against herself, will the court find the use a “fair use.”  

As I noted before,

Not surprisingly, other courts are inclined to follow the Supreme Court. Thus the Eleventh Circuit, the Southern District of New York, and the Central District of California (here and here), to name but a few, all explicitly refer to fair use as an affirmative defense. Oh, and the Ninth Circuit did too, at least until Lenz.

The Lenz case was an interesting one because, despite the above-noted Supreme Court precedent treating “fair use” as a defense, it is one of the very few cases that has held “fair use” to be an affirmative right (in that case, the court decided that Section 1201 of the DMCA required consideration of “fair use” as a part of filling out a take-down notice). And in doing so, it too tried to rely on Sony to restructure the nature of “fair use.” But as I have previously written, “[i]t bears noting that the Court in Sony Corp. did not discuss whether or not fair use is an affirmative defense, whereas the Acuff-Rose (decided 10 years after Sony Corp.) and Harper & Row decisions do.”

Further, even the Eleventh Circuit, which the Ninth relied upon in Lenz, later clarified its position that the above-noted Supreme Court precedent definitely binds lower courts, and that “fair use” is in fact an affirmative defense.

Thus, to say that rightsholders’ licensing contracts somehow impinge upon a “right” of fair use completely puts the cart before the horse. Remember, as an affirmative defense, “fair use” is an excuse for otherwise infringing behavior, and rightsholders are well within their constitutional and statutory rights to avoid potential infringing uses.

Think about it this way. When you commit a crime you can raise a defense: for instance, an insanity defense. But just because you might be excused for committing a crime if a court finds you were not operating with full faculties, this does not entitle every insane person to go out and commit that crime. The insanity defense can be raised only after a crime is committed, and at that point it will be examined by a judge and jury to determine if applying the defense furthers the overall criminal law scheme.

“Fair use” works in exactly the same manner. And even though Sony described how time- and space-shifting were potentially permissible, it did so only by determining on those facts that the balancing test came out to allow it. So, maybe a particular time-shifting use would be “fair use.” But maybe not. More likely, in this case, even the allegedly well-established “fair use” of time-shifting in the context of today’s digital media, on-demand programming, Netflix and the like may not meet that burden.

And what this means is that a rightsholder does not have an ex ante obligation to consider whether a particular contractual clause might in some fashion or other give rise to a “fair use” defense.

The contrary point of view makes no sense. Because “fair use” is a defense, forcing parties to build “fair use” considerations into their contractual negotiations essentially requires them to build in an allowance for infringement — and one that a court might or might not ever find appropriate in light of the requisite balancing of interests. That just can’t be right.

Instead, I think this article is just a piece of the larger IP-skeptic movement. I suspect that when “fair use” was in its initial stages of development, it was intended as a fairly gentle softening on the limits of intellectual property — something like the “public necessity” doctrine in common law with respect to real property and trespass. However, that is just not how “fair use” advocates see it today. As Geoff Manne has noted, the idea of “permissionless innovation” has wrongly come to mean “no contracts required (or permitted)”:  

[Permissionless innovation] is used to justify unlimited expansion of fair use, and is extended by advocates to nearly all of copyright…, which otherwise requires those pernicious licenses (i.e., permission) from others.

But this position is nonsense — intangible property is still property. And at root, property is just a set of legal relations between persons that defines their rights and obligations with respect to some “thing.” It doesn’t matter if you can hold that thing in your hand or not. As property, IP can be subject to transfer and control through voluntarily created contracts.

Even if “fair use” were some sort of as-yet unknown fundamental right, it would still be subject to limitations upon it by other rights and obligations. To claim that “fair use” should somehow trump the right of a property holder to dispose of the property as she wishes is completely at odds with our legal system.