
In an ideal world, it would not be necessary to block websites in order to combat piracy. But we do not live in an ideal world. We live in a world in which enormous amounts of content—from books and software to movies and music—are being distributed illegally. As a result, content creators and owners are being deprived of their rights and of the revenue that would flow from legitimate consumption of that content.

In this real world, site blocking may be both a legitimate and a necessary means of reducing piracy and protecting the rights and interests of rightsholders.

Of course, site blocking may not be perfectly effective, given that pirates will “domain hop” (moving their content from one website/IP address to another). As such, it may become a game of whack-a-mole. However, relative to other enforcement options, such as issuing millions of takedown notices, it is likely a much simpler, easier and more cost-effective strategy.

And site blocking could be abused or misapplied, just as any other legal remedy can be abused or misapplied. It is a fair concern to keep in mind with any enforcement program, and it is important to ensure that there are protections against such abuse and misapplication.

Thus, a Canadian coalition of telecom operators and rightsholders, called FairPlay Canada, has proposed a non-litigation alternative for combating piracy that employs site blocking but is designed to avoid the problems that critics have attributed to other private ordering solutions.

The FairPlay Proposal

FairPlay has sent a proposal to the CRTC (the Canadian telecom regulator) asking that it develop a process by which it can adjudicate disputes over websites that are “blatantly, overwhelmingly, or structurally engaged in piracy.” The proposal asks for the creation of an Independent Piracy Review Agency (“IPRA”) that would hear complaints of widespread piracy, perform investigations, and ultimately issue a report to the CRTC with a recommendation either to block or not to block the sites in question. The CRTC would retain ultimate authority over whether to add an offending site to a list of known pirates. Once on that list, a pirate site would have its domain blocked by ISPs.

The upside seems fairly obvious: it would be a more cost-effective and efficient process for investigating allegations of piracy and removing offenders. The current regime is cumbersome and enormously costly, and the evidence suggests that site blocking is highly effective.

Under Canadian law—the so-called “Notice and Notice” regime—rightsholders send notices to ISPs, who in turn forward those notices to their own users. Once those notices have been sent, rightsholders can then move before a court to require ISPs to expose the identities of users that upload infringing content. In just one relatively large case, the cost of complying with these requests was estimated at CAD 8.25 million.

The failure of the American equivalent of the “Notice and Notice” regime provides evidence supporting the FairPlay proposal. The graduated response system was set up in 2012 as a means of sending a series of escalating warnings to users who downloaded illegal content, much as the “Notice and Notice” regime does. But the American program has since been discontinued because it did not effectively target the real source of piracy: repeat offenders who share large amounts of material.

This failure, by contrast, highlights one of the strongest points in favor of the FairPlay proposal: it shifts the focus of enforcement away from casually infringing users and onto the operators of sites that engage in widespread infringement. As a result, one of the criticisms of Canada’s current “Notice and Notice” regime — that the notice passthrough system is misused to send abusive settlement demands — is bypassed entirely.

And whichever side of the notice regime bears the burden of paying the associated research costs under “Notice and Notice”—whether ISPs eat them as a cost of doing business, or rightsholders pay ISPs for their work—the net effect is a deadweight loss. Therefore, whatever can be done to reduce these costs, while also complying with Canada’s other commitments to protecting its citizens’ property interests and civil rights, is going to be a net benefit to Canadian society.

Of course it won’t be all upside — no policy, private or public, ever is. IP and property generally represent a set of tradeoffs intended to net the greatest social welfare gains. As Richard Epstein has observed:

No one can defend any system of property rights, whether for tangible or intangible objects, on the naïve view that it produces all gain and no pain. Every system of property rights necessarily creates some winners and some losers. Recognize property rights in land, and the law makes trespassers out of people who were once free to roam. We choose to bear these costs not because we believe in the divine rights of private property. Rather, we bear them because we make the strong empirical judgment that any loss of liberty is more than offset by the gains from manufacturing, agriculture and commerce that exclusive property rights foster. These gains, moreover, are not confined to some lucky few who first get to occupy land. No, the private holdings in various assets create the markets that use voluntary exchange to spread these gains across the entire population. Our defense of IP takes the same lines because the inconveniences it generates are fully justified by the greater prosperity and well-being for the population at large.

The same logic supplies the justification — and the tempering principle — for any measure meant to enforce copyrights. The relevant question when thinking about a particular enforcement regime is not whether some harms may occur, because some harm will always occur. The proper questions are: (1) does the measure to be implemented stand a chance of better giving effect to the property rights we have agreed to protect, and (2) when harms do occur, is there a sufficiently open and accessible process available whereby affected parties (and interested third parties) can rightly criticize and improve the system?

On both counts, the FairPlay proposal appears to hit the mark.

FairPlay’s proposal can reduce piracy while respecting users’ rights

Although I am generally skeptical of calls for state intervention, this case seems to present a real opportunity for the CRTC to do some good. If Canada adopts this proposal, it will establish a reasonable and effective remedy for violations of individuals’ property rights, the ownership of which is broadly considered legitimate.

And, as a public institution subject to input from many different stakeholder groups — FairPlay describes the stakeholders as comprising “ISPs, rightsholders, consumer advocacy and citizen groups” — the CRTC can theoretically provide a fairly open process. This is distinct from, for example, the Donuts trusted notifier program that some criticized (in my view, mistakenly) as potentially leading to an unaccountable, private ordering of the DNS.

FairPlay’s proposal outlines its plan to provide affected parties with due process protections:

The system proposed seeks to maximize transparency and incorporates extensive safeguards and checks and balances, including notice and an opportunity for the website, ISPs, and other interested parties to review any application submitted to and provide evidence and argument and participate in a hearing before the IPRA; review of all IPRA decisions in a transparent Commission process; the potential for further review of all Commission decisions through the established review and vary procedure; and oversight of the entire system by the Federal Court of Appeal, including potential appeals on questions of law or jurisdiction including constitutional questions, and the right to seek judicial review of the process and merits of the decision.

In terms of efficacy, even critics of the FairPlay proposal concede that site blocking measurably reduces piracy. In its formal response to critics, FairPlay Canada noted that one of the studies the critics relied upon actually showed that previous blocks of The Pirate Bay’s domains had reduced piracy by nearly 25%:

The Poort study shows that when a single illegal peer-to-peer piracy site (The Pirate Bay) was blocked, between 8% and 9.3% of consumers who were engaged in illegal downloading (from any site, not just The Pirate Bay) at the time the block was implemented reported that they stopped their illegal downloading entirely.  A further 14.5% to 15.3% reported that they reduced their illegal downloading. This shows the power of the regime the coalition is proposing.

The proposal stands to reduce the costs of combating piracy, as well. As noted above, the costs of litigating a large case can reach well into the millions just to initiate proceedings. In its reply comments, FairPlay Canada noted that the costs of even run-of-the-mill suits essentially price copyright enforcement out of reach for smaller rightsholders:

[T]he existing process can be inefficient and inaccessible for rightsholders. In response to this argument raised by interveners and to ensure the Commission benefits from a complete record on the point, the coalition engaged IP and technology law firm Hayes eLaw to explain the process that would likely have to be followed to potentially obtain such an order under existing legal rules…. [T]he process involves first completing litigation against each egregious piracy site, and could take up to 765 days and cost up to $338,000 to address a single site.

Moreover, these cost estimates assume that the really bad pirates can even be served with process — which is untrue for many infringers. Unlike physical distributors of counterfeit material (e.g. CDs and DVDs), online pirates do not need to operate within Canada to affect Canadian artists — which leaves a remedy like site blocking as one of the only viable enforcement mechanisms.

Don’t we want to reduce piracy?

More generally, much of the criticism of this proposal is hard to understand. Piracy is clearly a large problem to any observer who even casually peruses the Lumen database. Even defenders of the status quo are forced to acknowledge that “the notice and takedown provisions have been used by rightsholders countless—but likely billions—of times” — a reality that shows that efforts to control piracy to date have been insufficient.

So why not try this experiment? Why not try using a neutral multistakeholder body to see if rightsholders, ISPs, and application providers can create an online environment both free from massive, obviously infringing piracy, and also free for individuals to express themselves and service providers to operate?

In its response comments, the FairPlay coalition noted that some objectors have “insisted that the Commission should reject the proposal… because it might lead… the Commission to use a similar mechanism to address other forms of illegal content online.”

This is the same weak argument that can be deployed against any form of collective action at all. Of course the state can be used for bad ends — anyone with even a superficial knowledge of history knows this — but that surely can’t be an indictment of lawmaking as a whole. If a form of prohibition is appropriate for category A but inappropriate for category B, then either we assume lawmakers are capable of differentiating between the two categories, or else we believe that prohibition itself is per se inappropriate. If site blocking is wrong in every circumstance, the objectors need to convincingly make that case (which, to date, they have not).

Regardless of these criticisms, it seems unlikely that such a public process could be easily subverted for mass censorship. And any incipient censorship should be readily apparent and addressable in the IPRA process. Further, at least twenty-five countries have been experimenting with site blocking for IP infringement in different ways, and, at least so far, there haven’t been widespread allegations of massive censorship.

Maybe there is a perfect way to control piracy and protect user rights at the same time. But until we discover the perfect, I’m all for trying the good. The FairPlay coalition has a good idea, and I look forward to seeing how it progresses in Canada.

The Internet is a modern miracle: from providing all varieties of entertainment, to facilitating life-saving technologies, to keeping us connected with distant loved ones, the scope of the Internet’s contribution to our daily lives is hard to overstate. Moving forward there is undoubtedly much more that we can and will do with the Internet, and part of that innovation will, naturally, require a reconsideration of existing laws and how new Internet-enabled modalities fit into them.

But when undertaking such a reconsideration, the goal should not be simply to promote Internet-enabled goods above all else; rather, it should be to examine the law’s effect on the promotion of new technology within the context of other, competing social goods. In short, there are always trade-offs entailed in changing the legal order. As such, efforts to reform, clarify, or otherwise change the law that affects Internet platforms must be balanced against other desirable social goods, not automatically prioritized above them.

Unfortunately — and frequently with the best of intentions — efforts to promote one good thing (for instance, more online services) inadequately take account of the balance of the larger legal realities at stake. And one of the most important legal realities, too readily thrown aside in the rush to protect the Internet, is the principle that policy should be established through public, (relatively) democratically accountable channels.

Trade deals and domestic policy

Recently, a coalition of civil society groups and law professors sent a letter asking the NAFTA delegation to incorporate U.S.-style intermediary liability immunity into the trade deal. The request is notable for its timing, in light of the ongoing policy struggles over SESTA — a bill currently working its way through Congress that seeks to curb human trafficking through online platforms — and the risk that domestic platform companies face of losing (at least in part) the immunity provided by Section 230 of the Communications Decency Act. But this NAFTA push is not merely about a tradeoff between less trafficking and more online services; it is about the choice between promoting policies in a way that protects the rule of law and doing so in a way that undermines it.

Indeed, the NAFTA effort appears to be aimed at least as much at sidestepping the ongoing congressional fight over platform regulation as it is aimed at exporting U.S. law to our trading partners. Thus, according to EFF, for example, “[NAFTA renegotiation] comes at a time when Section 230 stands under threat in the United States, currently from the SESTA and FOSTA proposals… baking Section 230 into NAFTA may be the best opportunity we have to protect it domestically.”

It may well be that incorporating Section 230 into NAFTA is the “best opportunity” to protect the law as it currently stands from efforts to reform it to address conflicting priorities. But that doesn’t mean it’s a good idea. In fact, whatever one thinks of the merits of SESTA, it is not obviously a good idea to use a trade agreement as a vehicle to override domestic reforms to Section 230 that Congress might implement. Trade agreements can override domestic law, but that is not the reason we engage in trade negotiations.

In fact, other parts of NAFTA remain controversial precisely for their ability to undermine domestic legal norms, in this case in favor of guaranteeing the expectations of foreign investors. EFF itself is deeply skeptical of this “investor-state” dispute process (“ISDS”), noting that “[t]he latest provisions would enable multinational corporations to undermine public interest rules.” The irony here is that ISDS provides a mechanism for overriding domestic policy that is a close analogy for what EFF advocates for in the Section 230/SESTA context.

ISDS allows foreign investors to sue NAFTA signatories in a tribunal when domestic laws of that signatory have harmed investment expectations. The end result is that the signatory could be responsible for paying large sums to litigants, which in turn would serve as a deterrent for the signatory to continue to administer its laws in a similar fashion.

Stated differently, NAFTA currently contains a mechanism that favors one party (foreign investors) in a way that prevents signatory nations from enacting and enforcing laws approved of by democratically elected representatives. EFF and others disapprove of this.

Yet, at the same time, EFF also promotes the idea that NAFTA should contain a provision that favors one party (Internet platforms) in a way that would prevent signatory nations from enacting and enforcing laws like SESTA that (might be) approved of by democratically elected representatives.

A more principled stance would be skeptical of the domestic law override in both contexts.

Restating Copyright or creating copyright policy?

Take another example: some have suggested that the American Law Institute (“ALI”) is being used to subvert congressional will. Since 2013, the ALI has taken it upon itself to “restate” the law of copyright. The ALI is well known and respected for its common law restatements, but it may be that something more than mere restatement is going on here. As the New York Bar Association recently observed:

The Restatement as currently drafted appears inconsistent with the ALI’s long-standing goal of promoting clarity in the law: indeed, rather than simply clarifying or restating that law, the draft offers commentary and interpretations beyond the current state of the law that appear intended to shape current and future copyright policy.  

It is certainly odd that ALI (or any other group) would seek to restate a body of law that is already stated in the form of an overarching federal statute. The point of a restatement is to gather together the decisions of disparate common law courts interpreting different laws and precedent in order to synthesize a single, coherent framework approximating an overall consensus. If done correctly, a restatement of a federal statute would, theoretically, end up with the exact statute itself along with some commentary about how judicial decisions have filled in the blanks differently — a state of affairs that already exists with the copious academic literature commenting on federal copyright law.

But it seems that merely restating judicial interpretations was not the only objective behind the copyright restatement effort. In a letter to ALI, one of the scholars responsible for the restatement project noted that:

While congressional efforts to improve the Copyright Act… may be a welcome and beneficial development, it will almost certainly be a long and contentious process… Register Pallante… [has] not[ed] generally that “Congress has moved slowly in the copyright space.”

Reform of copyright law, in other words, and not merely restatement of it, was an important impetus for the project. As an attorney for the Copyright Office observed, “[a]lthough presented as a ‘Restatement’ of copyright law, the project would appear to be more accurately characterized as a rewriting of the law.” But “rewriting” is a job for the legislature. And even if Congress moves slowly, or the process is frustrating, the democratic processes that produce the law should still be respected.

Pyrrhic Policy Victories

Attempts to change copyright or entrench liability immunity through any means possible are rational actions at an individual level, but writ large they may undermine the legal fabric of our system and should be resisted.

It’s no surprise that some are frustrated and concerned about intermediary liability and copyright issues: on the margin, it’s definitely harder to operate an Internet platform if it faces sweeping liability for the actions of third parties (whether for human trafficking or for infringing copyrights). Maybe copyright law needs to be reformed, and perhaps intermediary liability must be maintained exactly as it is (or expanded). But the right way to arrive at these policy outcomes is not through backdoors — and it is not to begin with the assertion that such outcomes are required.

Congress and the courts can be frustrating vehicles through which to enact public policy, but they have the virtue of being relatively open to public deliberation, and of having procedural constraints that can circumscribe excesses and idiosyncratic follies. We might get bad policy from Congress. We might get bad cases from the courts. But the theory of our system is that, on net, having a frustratingly long, circumscribed, and public process will tend to weed out most of the bad ideas and impulses that would otherwise result from unconstrained decision making, even if well-intentioned.

We should meet efforts like these to end-run Congress and the courts with significant skepticism. Short term policy “victories” are likely not worth the long-run consequences. These are important, complicated issues. If we surreptitiously adopt idiosyncratic solutions to them, we risk undermining the rule of law itself.

Introduction and Summary

On December 19, 2017, the U.S. Court of Appeals for the Second Circuit presented Broadcast Music, Inc. (BMI) with an early Christmas present.  Specifically, the Second Circuit commendably affirmed the District Court for the Southern District of New York’s September 2016 ruling rejecting the U.S. Department of Justice’s (DOJ) August 2016 reinterpretation of its longstanding antitrust consent decree with BMI.  Because the DOJ reinterpretation also covered a parallel DOJ consent decree with the American Society of Composers, Authors, and Publishers (ASCAP), the Second Circuit’s decision by necessary implication benefits ASCAP as well, although it was not a party to the suit.

The Second Circuit’s holding is sound as a matter of textual interpretation and wise as a matter of economic policy.  Indeed, DOJ’s current antitrust leadership, which recognizes the importance of vibrant intellectual property licensing in the context of patents (see here), should be pleased that the Second Circuit rescued it from a huge mistake by the Obama Administration DOJ in the context of copyright licensing.

Background

BMI and ASCAP are the two leading U.S. “performing rights organizations” (PROs).  They contract with music copyright holders to act as intermediaries that provide “blanket” licenses to music users (e.g., television and radio stations, bars, and internet music distributors) for use of their full copyrighted musical repertoires, without the need for song-specific licensing negotiations.  This greatly reduces the transaction costs of arranging for the playing of musical works, benefiting music users, the listening public, and copyright owners (all of whom are assured of at least some compensation for their endeavors).  ASCAP and BMI are big businesses, with each PRO holding licenses to over ten million works and accounting for roughly 45 percent of the domestic music licensing market (ninety percent combined).

Because both ASCAP and BMI pool copyrighted songs that could otherwise compete with each other, and both grant users a single-price “blanket license” conveying the rights to play their full set of copyrighted works, the two organizations could be seen as restricting competition among copyrighted works and fixing the prices of copyrighted substitutes – raising serious questions under section 1 of the Sherman Antitrust Act, which condemns contracts that unreasonably restrain trade.  This led the DOJ to bring antitrust suits against ASCAP and BMI over eighty years ago, which were settled by separate judicially-filed consent decrees in 1941.

The decrees imposed a variety of limitations on the two PROs’ licensing practices, aimed at preventing ASCAP and BMI from exercising anticompetitive market power (such as the setting of excessive licensing rates).  The decrees were amended twice over the years, most recently in 2001, to take account of changing market conditions.  The U.S. Supreme Court noted the constraining effect of the decrees in BMI v. CBS (1979), in ruling that the BMI and ASCAP blanket licenses did not constitute per se illegal price fixing.  The Court held, rather, that the licenses should be evaluated on a case-by-case basis under the antitrust “rule of reason,” since the licenses inherently generated great efficiency benefits (“the immediate use of covered compositions, without the delay of prior individual negotiations”) that had to be weighed against potential anticompetitive harms.

The August 4, 2016 DOJ Consent Decree Interpretation

Fast forward to 2014, when DOJ undertook a new review of the ASCAP and BMI decrees, and requested the submission of public comments to aid it in its deliberations.  This review came to an official conclusion two years later, on August 4, 2016, when DOJ decided not to amend the decrees – but announced a decree interpretation that limits ASCAP’s and BMI’s flexibility.  Specifically, DOJ stated that the decrees needed to be “more consistently applied.”  By this, the DOJ meant that BMI and ASCAP should only grant blanket licenses that cover all of the rights to 100 percent of the works in the PROs’ respective catalogs (“full-work licensing”), not licenses that cover only partial interests in those works.  DOJ stated:

Only full-work licensing can yield the substantial procompetitive benefits associated with blanket licenses that distinguish ASCAP’s and BMI’s activities from other agreements among competitors that present serious issues under the antitrust laws.

The New DOJ Interpretation Was Bad as a Matter of Policy

DOJ’s August 4 interpretation rejected industry practice.  Under it, ASCAP and BMI could only offer licenses covering all of the copyright interests in a musical composition, even when the composition is a joint work whose fractional interests are held by different owners.

For example, consider a band of five composer-musicians, each of whom has a fractional interest in the copyright covering the band’s new album, which is a joint work.  Prior to the DOJ’s new interpretation, each musician was able to offer a partial interest in the joint work to a performing rights organization, reflecting his or her share of the total copyright interest covering the work.  The organization could offer a partial license, and a user could aggregate different partial licenses in order to cover the whole joint work.  Following the new interpretation, however, BMI and ASCAP could not offer users partial licenses to that work.  This denied the band’s individual members the opportunity to deal profitably with BMI and ASCAP, thereby undermining their ability to receive fair compensation.

As the two PROs warned, this approach, if upheld, would “cause unnecessary chaos in the marketplace and place unfair financial burdens and creative constraints on songwriters and composers.”  According to ASCAP President Paul Williams, “It is as if the DOJ saw songwriters struggling to stay afloat in a sea of outdated regulations and decided to hand us an anchor, in the form of 100 percent licensing, instead of a life preserver.”  Furthermore, the president and CEO of BMI, Mike O’Neill, stated:  “We believe the DOJ’s interpretation benefits no one – not BMI or ASCAP, not the music publishers, and not the music users – but we are most sensitive to the impact this could have on you, our songwriters and composers.”

The PROs’ views were bolstered by a January 2016 U.S. Copyright Office report, which concluded that “an interpretation of the consent decrees that would require 100-percent licensing or removal of a work from the ASCAP or BMI repertoire would appear to be fraught with legal and logistical problems, and might well result in a sharp decrease in repertoire available through these [performance rights organizations’] blanket licenses.”  Regrettably, during the decree review period, DOJ ignored the expert opinion of the Copyright Office, as well as the public record comments of numerous publishers and artists (see here, for example) indicating that a 100 percent licensing requirement would depress returns to copyright owners and undermine the creative music industry.

Most fundamentally, DOJ’s new interpretation of the BMI and ASCAP consent decrees involved an abridgment of economic freedom.  It further limited the flexibility of music copyright holders and music users to contract with intermediaries to promote the efficient distribution of music performance rights, in a manner that benefits the listening public while allowing creative artists sufficient compensation for their efforts.  DOJ made no compelling showing that a new consent decree constraint (permitting full-work licensing only) was needed to promote competition.  Far from promoting competition, DOJ’s new interpretation undermined it.  DOJ micromanagement of copyright licensing by consent decree reinterpretation was a costly new regulatory initiative that reflected a lack of appreciation for intellectual property rights, which incentivize innovation.  In short, DOJ’s latest interpretation of the ASCAP and BMI decrees was terrible policy.

The New DOJ Interpretation Ran Counter to International Norms

The new DOJ interpretation had unfortunate international policy implications as well.  According to Gadi Oron, Director General of the International Confederation of Societies of Authors and Composers (CISAC), a Paris-based organization that brings together 239 rights societies from 123 countries, including ASCAP, BMI, and SESAC, the new interpretation departed from international norms in the music licensing industry and would have disruptive international effects:

It is clear that the DoJ’s decisions have been made without taking the interests of creators, neither American nor international, into account. It is also clear that they were made with total disregard for the international framework, where fractional licensing is practiced, even if it’s less of a factor because many countries only have one performance rights organization representing songwriters in their territory. International copyright laws grant songwriters exclusive rights, giving them the power to decide who will license their rights in each territory and it is these rights that underpin the landscape in which authors’ societies operate. The international system of collective management of rights, which is based on reciprocal representation agreements and founded on the freedom of choice of the rights holder, would be negatively affected by such level of government intervention, at a time when it needs support more than ever.

The New DOJ Interpretation Was Defective as a Matter of Law, and the District Court and the Second Circuit So Held

As I explained in a November 2016 Heritage Foundation commentary (citing arguments made by counsel for BMI), DOJ’s new interpretation not only was bad domestic and international policy, it was inconsistent with sound textual construction of the decrees themselves.  The BMI decree (and therefore the analogous ASCAP decree as well) did not expressly require 100 percent licensing and did not unambiguously prohibit fractional licensing.  Accordingly, since a consent decree is an injunction, and any activity not expressly required or prohibited thereunder is permitted, fractional shares licensing should be authorized.  DOJ’s new interpretation ignored this principle.  It also was at odds with a report of the U.S. Copyright Office that concluded the BMI consent decree “must be understood to include partial interests in musical works.”  Furthermore, the new interpretation was belied by the fact that the PRO licensing market has developed and functioned efficiently for decades by pricing, collecting, and distributing fees for royalties on a fractional basis.  Courts view such evidence of trade practice and custom as relevant in determining the meaning of a consent decree.

The district court for the Southern District of New York accepted these textual arguments in its September 2016 ruling, granting BMI’s request for a declaratory judgment that the BMI decree did not require 100 percent (“full-work”) licensing.  The court explained:

Nothing in the Consent Decree gives support to the Division’s views. If a fractionally-licensed composition is disqualified from inclusion in BMI’s repertory, it is not for violation of any provision of the Consent Decree. While the Consent Decree requires BMI to license performances of those compositions “the right of public performances of which [BMI] has or hereafter shall have the right to license or sublicense” (Art. II(C)), it contains no provision regarding the source, extent, or nature of that right. It does not address the possibilities that BMI might license performances of a composition without sufficient legal right to do so, or under a worthless or invalid copyright, or users might perform a music composition licensed by fewer than all of its creators. . . .

The Consent Decree does not regulate the elements of the right to perform compositions. Performance of a composition under an ineffective license may infringe an author’s rights under copyright, contract or other law, but it does not infringe the Consent Decree, which does not extend to matters such as the invalidity or value of copyrights of any of the compositions in BMI’s repertory. Questions of the validity, scope and limits of the right to perform compositions are left to the congruent and competing interests in the music copyright market, and to copyright, property and other laws, to continue to resolve and enforce. Infringements (and fractional infringements) and remedies are not part of the Consent Decree’s subject-matter.

The Second Circuit affirmed, agreeing with the district court’s reading of the decree:

The decree does not address the issue of fractional versus full work licensing, and the parties agree that the issue did not arise at the time of the . . . [subsequent] amendments [to the decree]. . . .

This appeal begins and ends with the language of the consent decree. It is a “well-established principle that the language of a consent decree must dictate what a party is required to do and what it must refrain from doing.” Perez v. Danbury Hosp., 347 F.3d 419, 424 (2d Cir. 2003); United States v. Armour & Co., 402 U.S. 673, 682 (1971) (“[T]he scope of a consent decree must be discerned within its four corners…”). “[C]ourts must abide by the express terms of a consent decree and may not impose additional requirements or supplementary obligations on the parties even to fulfill the purposes of the decree more effectively.” Perez, 347 F.3d at 424; see also Barcia v. Sitkin, 367 F.3d 87, 106 (2d Cir. 2004) (internal citations omitted) (The district court may not “impose obligations on a party that are not unambiguously mandated by the decree itself.”). Accordingly, since the decree is silent on fractional licensing, BMI may (and perhaps must) offer them unless a clear and unambiguous command of the decree would thereby be violated. See United States v. Int’l Bhd. Of Teamsters, Chauffeurs, Warehousemen & Helpers of Am., AFLCIO, 998 F.2d 1101, 1107 (2d Cir. 1993); see also Armour, 402 U.S. at 681-82.

Conclusion

The federal courts wisely have put to rest an ill-considered effort by the Obama Antitrust Division to displace longstanding industry practices that allowed efficient flexibility in the licensing of copyright interests by PROs.  Let us hope that the Trump Antitrust Division will not just accept the Second Circuit’s decision, but will positively embrace it as a manifestation of enlightened antitrust-IP policy – one in harmony with broader efforts by the Division to restore sound thinking to the antitrust treatment of patent licensing and intellectual property in general.

I recently published a piece in the Hill welcoming the Canadian Supreme Court’s decision in Google v. Equustek. In this post I expand (at length) upon my assessment of the case.

In its decision, the Court upheld injunctive relief against Google, directing the company to avoid indexing websites offering the infringing goods in question, regardless of the location of the sites (and even though Google itself was not a party in the case nor in any way held liable for the infringement). As a result, the Court’s ruling would affect Google’s conduct outside of Canada as well as within it.

The case raises some fascinating and thorny issues, but, in the end, the Court navigated them admirably.

Some others, however, were not so… welcoming of the decision (see, e.g., here and here).

The primary objection to the ruling seems to be, in essence, that it is the top of a slippery slope: “If Canada can do this, what’s to stop Iran or China from doing it? Free expression as we know it on the Internet will cease to exist.”

This is a valid concern, of course — in the abstract. But for reasons I explain below, we should see this case — and, more importantly, the approach adopted by the Canadian Supreme Court — as reassuring, not foreboding.

Some quick background on the exercise of extraterritorial jurisdiction in international law

The salient facts in, and the fundamental issue raised by, the case were neatly summarized by Hugh Stephens:

[The lower Court] issued an interim injunction requiring Google to de-index or delist (i.e. not return search results for) the website of a firm (Datalink Gateways) that was marketing goods online based on the theft of trade secrets from Equustek, a Vancouver, B.C., based hi-tech firm that makes sophisticated industrial equipment. Google wants to quash a decision by the lower courts on several grounds, primarily that the basis of the injunction is extra-territorial in nature and that if Google were to be subject to Canadian law in this case, this could open a Pandora’s box of rulings from other jurisdictions that would require global delisting of websites thus interfering with freedom of expression online, and in effect “break the Internet”.

The question of jurisdiction with regard to cross-border conduct is clearly complicated and evolving. But, in important ways, it isn’t anything new just because the Internet is involved. As Jack Goldsmith and Tim Wu (yes, Tim Wu) wrote (way back in 2006) in Who Controls the Internet?: Illusions of a Borderless World:

A government’s responsibility for redressing local harms caused by a foreign source does not change because the harms are caused by an Internet communication. Cross-border harms that occur via the Internet are not any different than those outside the Net. Both demand a response from governmental authorities charged with protecting public values.

As I have written elsewhere, “[g]lobal businesses have always had to comply with the rules of the territories in which they do business.”

Traditionally, courts have dealt with the extraterritoriality problem by applying a rule of comity. As my colleague, Geoffrey Manne (Founder and Executive Director of ICLE), reminds me, the principle of comity largely originated in the work of the 17th Century Dutch legal scholar, Ulrich Huber. Huber wrote that comitas gentium (“courtesy of nations”) required the application of foreign law in certain cases:

[Sovereigns will] so act by way of comity that rights acquired within the limits of a government retain their force everywhere so far as they do not cause prejudice to the powers or rights of such government or of their subjects.

And, notably, Huber wrote that:

Although the laws of one nation can have no force directly with another, yet nothing could be more inconvenient to commerce and to international usage than that transactions valid by the law of one place should be rendered of no effect elsewhere on account of a difference in the law.

The basic principle has been recognized and applied in international law for centuries. Of course, the flip side of the principle is that sovereign nations also get to decide for themselves whether to enforce foreign law within their jurisdictions. To summarize Huber (as well as Lord Mansfield, who brought the concept to England, and Justice Story, who brought it to the US):

All three jurists were concerned with deeply polarizing public issues — nationalism, religious factionalism, and slavery. For each, comity empowered courts to decide whether to defer to foreign law out of respect for a foreign sovereign or whether domestic public policy should triumph over mere courtesy. For each, the court was the agent of the sovereign’s own public law.

The Canadian Supreme Court’s well-reasoned and admirably restrained approach in Equustek

Reconciling the potential conflict between the laws of Canada and those of other jurisdictions was, of course, a central subject of consideration for the Canadian Court in Equustek. The Supreme Court, as described below, weighed a variety of factors in determining the appropriateness of the remedy. In analyzing the competing equities, the Supreme Court set out the following framework:

[I]s there a serious issue to be tried; would the person applying for the injunction suffer irreparable harm if the injunction were not granted; and is the balance of convenience in favour of granting the interlocutory injunction or denying it. The fundamental question is whether the granting of an injunction is just and equitable in all of the circumstances of the case. This will necessarily be context-specific. [Here, as throughout this post, bolded text represents my own, added emphasis.]

Applying that standard, the Court held that because ordering an interlocutory injunction against Google was the only practical way to prevent Datalink from flouting the court’s several orders, and because there were no sufficient, countervailing comity or freedom of expression concerns in this case that would counsel against such an order being granted, the interlocutory injunction was appropriate.

I draw particular attention to the following from the Court’s opinion:

Google’s argument that a global injunction violates international comity because it is possible that the order could not have been obtained in a foreign jurisdiction, or that to comply with it would result in Google violating the laws of that jurisdiction is, with respect, theoretical. As Fenlon J. noted, “Google acknowledges that most countries will likely recognize intellectual property rights and view the selling of pirated products as a legal wrong”.

And while it is always important to pay respectful attention to freedom of expression concerns, particularly when dealing with the core values of another country, I do not see freedom of expression issues being engaged in any way that tips the balance of convenience towards Google in this case. As Groberman J.A. concluded:

In the case before us, there is no realistic assertion that the judge’s order will offend the sensibilities of any other nation. It has not been suggested that the order prohibiting the defendants from advertising wares that violate the intellectual property rights of the plaintiffs offends the core values of any nation. The order made against Google is a very limited ancillary order designed to ensure that the plaintiffs’ core rights are respected.

In fact, as Andrew Keane Woods writes at Lawfare:

Under longstanding conflicts of laws principles, a court would need to weigh the conflicting and legitimate governments’ interests at stake. The Canadian court was eager to undertake that comity analysis, but it couldn’t do so because the necessary ingredient was missing: there was no conflict of laws.

In short, the Canadian Supreme Court, while acknowledging the importance of comity and appropriate restraint in matters with extraterritorial effect, carefully weighed the equities in this case and found that they favored the grant of extraterritorial injunctive relief. As the Court explained:

Datalink [the direct infringer] and its representatives have ignored all previous court orders made against them, have left British Columbia, and continue to operate their business from unknown locations outside Canada. Equustek has made efforts to locate Datalink with limited success. Datalink is only able to survive — at the expense of Equustek’s survival — on Google’s search engine which directs potential customers to Datalink’s websites. This makes Google the determinative player in allowing the harm to occur. On balance, since the world‑wide injunction is the only effective way to mitigate the harm to Equustek pending the trial, the only way, in fact, to preserve Equustek itself pending the resolution of the underlying litigation, and since any countervailing harm to Google is minimal to non‑existent, the interlocutory injunction should be upheld.

As I have stressed, key to the Court’s reasoning was its close consideration of possible countervailing concerns and its entirely fact-specific analysis. By the very terms of the decision, the Court made clear that its balancing would not necessarily lead to the same result where sensibilities or core values of other nations would be offended. In this particular case, they were not.

How critics of the decision (and there are many) completely miss the true import of the Court’s reasoning

In other words, the holding in this case was a function of how, given the facts of the case, the ruling would affect the particular core concerns at issue: protection and harmonization of global intellectual property rights on the one hand, and concern for the “sensibilities of other nations,” including their concern for free expression, on the other.

This should be deeply reassuring to those now criticizing the decision. And yet… it’s not.

Whether because they haven’t actually read or properly understood the decision, or because they are merely grandstanding, some commenters are proclaiming that the decision marks the End Of The Internet As We Know It — you know, it’s going to break the Internet. Or something.

Human Rights Watch, an organization I generally admire, issued a statement including the following:

The court presumed no one could object to delisting someone it considered an intellectual property violator. But other countries may soon follow this example, in ways that more obviously force Google to become the world’s censor. If every country tries to enforce its own idea of what is proper to put on the Internet globally, we will soon have a race to the bottom where human rights will be the loser.

The British Columbia Civil Liberties Association added:

Here it was technical details of a product, but you could easily imagine future cases where we might be talking about copyright infringement, or other things where people in private lawsuits are wanting things to be taken down off  the internet that are more closely connected to freedom of expression.

From the other side of the traditional (if insufficiently nuanced) “political spectrum,” AEI’s Ariel Rabkin asserted that

[O]nce we concede that Canadian courts can regulate search engine results in Turkey, it is hard to explain why a Turkish court shouldn’t have the reciprocal right. And this is no hypothetical — a Turkish court has indeed ordered Twitter to remove a user (AEI scholar Michael Rubin) within the United States for his criticism of Erdogan. Once the jurisdictional question is decided, it is no use raising free speech as an issue. Other countries do not have our free speech norms, nor Canada’s. Once Canada concedes that foreign courts have the right to regulate Canadian search results, they are on the internet censorship train, and there is no egress before the end of the line.

In this instance, in particular, it is worth noting not only the complete lack of acknowledgment of the Court’s articulated constraints on taking action with extraterritorial effect, but also the fact that Turkey (among others) has hardly been waiting for approval from Canada before taking such action.

And then there’s EFF (of course). EFF, fairly predictably, suggests first — with unrestrained hyperbole — that the Supreme Court held that:

A country has the right to prevent the world’s Internet users from accessing information.

Dramatic hyperbole aside, that’s also a stilted way to characterize the content at issue in the case. But it is important to EFF’s misleading narrative to begin with the assertion that offering infringing products for sale is “information” to which access by the public is crucial. But, of course, the distribution of infringing products is hardly “expression,” as most of us would understand that term. To claim otherwise is to denigrate the truly important forms of expression that EFF claims to want to protect.

And, it must be noted, even if there were expressive elements at issue, infringing “expression” is always subject to restriction under the copyright laws of virtually every country in the world (and free speech laws, where they exist).

Nevertheless, EFF writes that the decision:

[W]ould cut off access to information for U.S. users would set a dangerous precedent for online speech. In essence, it would expand the power of any court in the world to edit the entire Internet, whether or not the targeted material or site is lawful in another country. That, we warned, is likely to result in a race to the bottom, as well-resourced individuals engage in international forum-shopping to impose the one country’s restrictive laws regarding free expression on the rest of the world.

Beyond the flaws of the ruling itself, the court’s decision will likely embolden other countries to try to enforce their own speech-restricting laws on the Internet, to the detriment of all users. As others have pointed out, it’s not difficult to see repressive regimes such as China or Iran use the ruling to order Google to de-index sites they object to, creating a worldwide heckler’s veto.

As always with EFF missives, caveat lector applies: None of this is fair or accurate. EFF (like the other critics quoted above) is looking only at the result — the specific contours of the global order related to the Internet — and not to the reasoning of the decision itself.

Quite tellingly, EFF urges its readers to ignore the case in front of them in favor of a theoretical one. That is unfortunate. Were EFF, et al. to pay closer attention, they would be celebrating this decision as a thoughtful, restrained, respectful, and useful standard to be employed as a foundational decision in the development of global Internet governance.

The Canadian decision is (as I have noted, but perhaps still not with enough repetition…) predicated on achieving equity upon close examination of the facts, and giving due deference to the sensibilities and core values of other nations in making decisions with extraterritorial effect.

Properly understood, the ruling is a shield against intrusions that undermine freedom of expression, and not an attack on expression.

EFF subverts the reasoning of the decision and thus camouflages its true import, all for the sake of furthering its apparently limitless crusade against all forms of intellectual property. The ruling can be read as an attack on expression only if one ascribes to the distribution of infringing products the status of protected expression — so that’s what EFF does. But distribution of infringing products is not protected expression.

Extraterritoriality on the Internet is complicated — but that undermines, rather than justifies, critics’ opposition to the Court’s analysis

There will undoubtedly be other cases that present more difficult challenges than this one in defining the jurisdictional boundaries of courts’ abilities to address Internet-based conduct with multi-territorial effects. But the guideposts employed by the Supreme Court of Canada will be useful in informing such decisions.

Of course, some states don’t (or won’t, when it suits them) adhere to principles of comity. But that was true long before the Equustek decision. And, frankly, the notion that this decision gives nations like China or Iran political cover for global censorship is ridiculous. Nations that wish to censor the Internet will do so regardless. If anything, reference to this decision (which, let me spell it out again, highlights the importance of avoiding relief that would interfere with core values or sensibilities of other nations) would undermine their efforts.

Rather, the decision will be far more helpful in combating censorship and advancing global freedom of expression. Indeed, as noted by Hugh Stephens in a recent blog post:

While the EFF, echoed by its Canadian proxy OpenMedia, went into hyperventilation mode with the headline, “Top Canadian Court permits Worldwide Internet Censorship”, respected organizations like the Canadian Civil Liberties Association (CCLA) welcomed the decision as having achieved the dual objectives of recognizing the importance of freedom of expression and limiting any order that might violate that fundamental right. As the CCLA put it,

While today’s decision upholds the worldwide order against Google, it nevertheless reflects many of the freedom of expression concerns CCLA had voiced in our interventions in this case.

As I noted in my piece in the Hill, this decision doesn’t answer all of the difficult questions related to identifying proper jurisdiction and remedies with respect to conduct that has global reach; indeed, that process will surely be perpetually unfolding. But, as reflected in the comments of the Canadian Civil Liberties Association, it is a deliberate and well-considered step toward a fair and balanced way of addressing Internet harms.

With apologies for quoting myself, I noted the following in an earlier piece:

I’m not unsympathetic to Google’s concerns. As a player with a global footprint, Google is legitimately concerned that it could be forced to comply with the sometimes-oppressive and often contradictory laws of countries around the world. But that doesn’t make it — or any other Internet company — unique. Global businesses have always had to comply with the rules of the territories in which they do business… There will be (and have been) cases in which taking action to comply with the laws of one country would place a company in violation of the laws of another. But principles of comity exist to address the problem of competing demands from sovereign governments.

And as Andrew Keane Woods noted:

Global takedown orders with no limiting principle are indeed scary. But Canada’s order has a limiting principle. As long as there is room for Google to say to Canada (or France), “Your order will put us in direct and significant violation of U.S. law,” the order is not a limitless assertion of extraterritorial jurisdiction. In the instance that a service provider identifies a conflict of laws, the state should listen.

That is precisely what the Canadian Supreme Court’s decision contemplates.

No one wants an Internet based on the lowest common denominator of acceptable speech. Yet some appear to want an Internet based on the lowest common denominator for the protection of original expression. These advocates thus endorse theories of jurisdiction that would deny societies the ability to enforce their own laws, just because sometimes those laws protect intellectual property.

And yet that reflects little more than an arbitrary prioritization of those critics’ personal preferences. In the real world (including the real online world), protection of property is an important value, deserving reciprocity and courtesy (comity) as much as does speech. Indeed, the G20 Digital Economy Ministerial Declaration adopted in April of this year recognizes the importance to the digital economy of promoting security and trust, including through the provision of adequate and effective intellectual property protection. Thus the Declaration expresses the recognition of the G20 that:

[A]pplicable frameworks for privacy and personal data protection, as well as intellectual property rights, have to be respected as they are essential to strengthening confidence and trust in the digital economy.

Moving forward in an interconnected digital universe will require societies to make a series of difficult choices balancing both competing values and competing claims from different jurisdictions. Just as it does in the offline world, navigating this path will require flexibility and skepticism (if not rejection) of absolutism — including with respect to the application of fundamental values. Even things like freedom of expression, which naturally require a balancing of competing interests, will need to be reexamined. We should endeavor to find that fine line between allowing individual countries to enforce their own national judgments and a tolerance for those countries that have made different choices. This will not be easy, as well manifested in something that Alice Marwick wrote earlier this year:

But a commitment to freedom of speech above all else presumes an idealistic version of the internet that no longer exists. And as long as we consider any content moderation to be censorship, minority voices will continue to be drowned out by their aggressive majority counterparts.

* * *

We need to move beyond this simplistic binary of free speech/censorship online. That is just as true for libertarian-leaning technologists as it is neo-Nazi provocateurs…. Aggressive online speech, whether practiced in the profanity and pornography-laced environment of 4Chan or the loftier venues of newspaper comments sections, positions sexism, racism, and anti-Semitism (and so forth) as issues of freedom of expression rather than structural oppression.

Perhaps we might want to look at countries like Canada and the United Kingdom, which take a different approach to free speech than does the United States. These countries recognize that unlimited free speech can lead to aggression and other tactics which end up silencing the speech of minorities — in other words, the tyranny of the majority. Creating online communities where all groups can speak may mean scaling back on some of the idealism of the early internet in favor of pragmatism. But recognizing this complexity is an absolutely necessary first step.

While I (and the Canadian Supreme Court, for that matter) share EFF’s unease over the scope of extraterritorial judgments, I fundamentally disagree with EFF that the Equustek decision “largely sidesteps the question of whether such a global order would violate foreign law or intrude on Internet users’ free speech rights.”

In fact, it is EFF’s position that comes much closer to indifference to the laws and values of other countries; in essence, EFF’s position would always prioritize the particular speech values adopted in the US, regardless of whether they had been adopted by the countries affected in a dispute. It is therefore inconsistent with the true nature of comity.

Absolutism and exceptionalism will not be a sound foundation for achieving global consensus and the effective operation of law. As stated by the Canadian Supreme Court in Equustek, courts should enforce the law — whatever the law is — to the extent that such enforcement does not substantially undermine the core sensitivities or values of nations where the order will have effect.

EFF ignores the process in which the Court engaged precisely because EFF — not another country, but EFF — doesn’t find the enforcement of intellectual property rights to be compelling. But that unprincipled approach would naturally lead in a different direction where the court sought to protect a value that EFF does care about. Such a position arbitrarily elevates EFF’s idiosyncratic preferences. That is simply not a viable basis for constructing good global Internet governance.

If the Internet is both everywhere and nowhere, our responses must reflect that reality, and be based on the technology-neutral application of laws, not the abdication of responsibility premised upon an outdated theory of tech exceptionalism under which cyberspace is free from the application of the laws of sovereign nations. That is not the path to either freedom or prosperity.

To realize the economic and social potential of the Internet, we must be guided by both a determination to meaningfully address harms, and a sober reservation about interfering in the affairs of other states. The Supreme Court of Canada’s decision in Google v. Equustek has planted a flag in this space. It serves no one to pretend that the Court decided that a country has the unfettered right to censor the Internet. That’s not what it held — and we should be grateful for that. To suggest otherwise may indeed be self-fulfilling.

R Street’s Sasha Moss recently posted a piece on TechDirt describing the alleged shortcomings of the Register of Copyrights Selection and Accountability Act of 2017 (RCSAA) — proposed legislative adjustments to the Copyright Office, recently passed in the House and introduced in the Senate last month (with identical language).

Many of the article’s points are well taken. Nevertheless, they don’t support the article’s call for the Senate to “jettison [the bill] entirely,” nor the assertion that “[a]s currently written, the bill serves no purpose, and Congress shouldn’t waste its time on it.”

R Street’s main complaint with the legislation is that it doesn’t include other proposals in a House Judiciary Committee whitepaper on Copyright Office modernization. But condemning the RCSAA simply for failing to incorporate all conceivable Copyright Office improvements fails to adequately take account of the political realities confronting Congress — in other words, it lets the perfect be the enemy of the good. It also undermines R Street’s own stated preference for Copyright Office modernization effected through “targeted and immediately implementable solutions.”

Everyone — even R Street — acknowledges that we need to modernize the Copyright office. But none of the arguments in favor of a theoretical, “better” bill is undermined or impeded by passing this bill first. While there is certainly more that Congress can do on this front, the RCSAA is a sensible, targeted piece of legislation that begins to build the new foundation for a twenty-first century Copyright Office.

Process over politics

The proposed bill is simple: It would make the Register of Copyrights a nominated and confirmed position. For reasons almost forgotten over the last century and a half, the head of the Copyright Office is currently selected at the sole discretion of the Librarian of Congress. The Copyright Office was placed in the Library merely as a way to grow the Library’s collection with copies of copyrighted works.

More than 100 years later, most everyone acknowledges that the Copyright Office has lagged behind the times. And many think the problem lies with the Office’s placement within the Library, which is plagued with information technology and other problems, and has a distinctly different mission than the Copyright Office. The only real question is what to do about it.

Separating the Copyright Office from the Library is a straightforward and seemingly apolitical step toward modernization. And yet, somewhat inexplicably, R Street claims that the bill

amounts largely to a partisan battle over who will have the power to select the next Register: [Current Librarian of Congress] Hayden, who was appointed by Barack Obama, or President Donald Trump.

But this is a pretty farfetched characterization.

First, the House passed the bill 378-48, with 145 Democrats joining 233 Republicans in support. That’s more than three-quarters of the Democratic caucus.

Moreover, legislation to make the Register a nominated and confirmed position has been under discussion for more than four years — long before either Dr. Hayden was nominated or anyone knew that Donald Trump (or any Republican at all, for that matter) would be president.

R Street also claims that the legislation

will make the register and the Copyright Office more politicized and vulnerable to capture by special interests, [and that] the nomination process could delay modernization efforts [because of Trump’s] confirmation backlog.

But precisely the opposite seems far more likely — as Sasha herself has previously recognized:

Clarifying the office’s lines of authority does have the benefit of making it more politically accountable…. The [House] bill takes a positive step forward in promoting accountability.

As far as I’m aware, no one claims that Dr. Hayden was “politicized” or that Librarians are vulnerable to capture because they are nominated and confirmed. And a Senate confirmation process will be more transparent than unilateral appointment by the Librarian, and will give the electorate a (nominal) voice in the Register’s selection. Surely unilateral selection of the Register by the Librarian is more susceptible to undue influence.

With respect to the modernization process, we should also not forget that the Copyright Office currently has an Acting Register in Karyn Temple Claggett, who is perfectly capable of moving the modernization process forward. And any limits on her ability to do so would arise from the very tenuousness of her position that the RCSAA is intended to address.

Modernizing the Copyright Office one piece at a time

It’s certainly true, as the article notes, that the legislation doesn’t include a number of other sensible proposals for Copyright Office modernization. In particular, it points to ideas like forming a stakeholder advisory board, creating new chief economist and technologist positions, upgrading the Office’s information technology systems, and creating a small claims court.

To be sure, these could be beneficial reforms, as ICLE (and many others) have noted. But I would take some advice from R Street’s own “pragmatic approach” to promoting efficient government “with the full realization that progress on the ground tends to be made one inch at a time.”

R Street acknowledges that the legislation’s authors have indicated that this is but a beginning step and that they plan to tackle the other issues in due course. At a time when passage of any legislation on any topic is a challenge, it seems appropriate to defer to those in Congress who affirmatively want more modernization on the question of how big a bill to start with.

In any event, it seems perfectly sensible to address the Register selection process before tackling the other issues, which may require more detailed discussions of policy and cost. And with the Copyright Office currently lacking a permanent Register and discussions underway about finding a new one, addressing any changes Congress deems necessary in the selection process seems like the most pressing issue, if they are to be resolved prior to the next pick being made.

Further, because the Register would presumably be deeply involved in the selection and operation of any new advisory board, chief economist and technologist, IT system, or small claims process, Congress can also be forgiven for wanting to address the Register issue first. Moreover, a Register who can be summarily dismissed by the Librarian likely doesn’t have the needed autonomy to fully and effectively implement the other proposals from the whitepaper. Why build a house on a shaky foundation when you can fix the foundation first?

Process over substance

All of which leaves the question why R Street opposes a bill that was passed by a bipartisan supermajority in the House; that effects precisely the kind of targeted, incremental reform that R Street promotes; and that implements a specific reform that R Street favors.

The legislation has widespread support beyond Congress, although the TechDirt piece gives this support short shrift. Instead, it notes that “some” in the content industry support the legislation, but lists only the Motion Picture Association of America. There is a subtle undercurrent of the typical substantive copyright debate, in which “enlightened” thinking on copyright is set against the presumptively malicious overreach of the movie studios. But the piece neglects to mention the support of more than 70 large and small content creators, technology companies, labor unions, and free market and civil rights groups, among others.

Sensible process reforms should be implementable without the rancor that plagues most substantive copyright debates. But it’s difficult to escape. Copyright minimalists are skeptical of an effectual Copyright Office if it is more likely to promote policies that reinforce robust copyright, even if they support sensible process reforms and more-accountable government in the abstract. And, to be fair, copyright proponents are thrilled when their substantive positions might be bolstered by promotion of sensible process reforms.

But the truth is that no one really knows how an independent and accountable Copyright Office will act with respect to contentious, substantive issues. Perhaps most likely, increased accountability via nomination and confirmation will introduce more variance in its positions. In other words, on substance, the best guess is that greater Copyright Office accountability and modernization will be a wash — leaving only process itself as a sensible basis on which to assess reform. And on that basis, there is really no reason to oppose this widely supported, incremental step toward a modern US Copyright Office.

According to Cory Doctorow over at Boing Boing, Tim Wu has written an open letter to W3C Chairman Sir Timothy Berners-Lee, expressing concern about a proposal to include Encrypted Media Extensions (EME) as part of the W3C standards. W3C has a helpful description of EME:

Encrypted Media Extensions (EME) is currently a draft specification… [for] an Application Programming Interface (API) that enables Web applications to interact with content protection systems to allow playback of encrypted audio and video on the Web. The EME specification enables communication between Web browsers and digital rights management (DRM) agent software to allow HTML5 video play back of DRM-wrapped content such as streaming video services without third-party media plugins. This specification does not create nor impose a content protection or Digital Rights Management system. Rather, it defines a common API that may be used to discover, select and interact with such systems as well as with simpler content encryption systems.

Wu’s letter expresses his concern about hardwiring DRM into the technical standards supporting an open internet. He writes:

I wanted to write to you and respectfully ask you to seriously consider extending a protective covenant to legitimate circumventers who have cause to bypass EME, should it emerge as a W3C standard.

Wu asserts that this “protective covenant” is needed because, without it, EME will confer too much power on internet “chokepoints”:

The question is whether the W3C standard with an embedded DRM standard, EME, becomes a tool for suppressing competition in ways not expected…. Control of chokepoints has always and will always be a fundamental challenge facing the Internet as we both know… It is not hard to recall how close Microsoft came, in the late 1990s and early 2000s, to gaining de facto control over the future of the web (and, frankly, the future) in its effort to gain an unsupervised monopoly over the browser market.

But conflating the Microsoft case with a relatively simple browser feature meant to enable all content providers to use any third-party DRM to secure their content — in other words, to enhance interoperability — is beyond the pale. If we take the Microsoft case as Wu would like, it was about one firm controlling, far and away, the largest share of desktop computing installations, a position that Wu and his fellow travelers believed gave Microsoft an unreasonable leg up in forcing usage of Internet Explorer to the exclusion of Netscape. With EME, the W3C is not maneuvering the standard so that a single DRM provider comes to protect all content on the web, or could even hope to do so. EME enables content distributors to stream content through browsers using their own DRM backend. There is simply nothing in that standard that enables a firm to dominate content distribution or control huge swaths of the Internet to the exclusion of competitors.

Unless, of course, you just don’t like DRM and you think that any technology that enables content producers to impose restrictions on consumption of media creates a “chokepoint.” But, again, this position is borderline nonsense. Such a “chokepoint” is no more restrictive than just going to Netflix’s app (or Hulu’s, or HBO’s, or Xfinity’s, or…) and relying on its technology. And while it is no more onerous than visiting Netflix’s app, it creates greater security on the open web such that copyright owners don’t need to resort to proprietary technologies and apps for distribution. And, more fundamentally, Wu’s position ignores the role that access and usage controls are playing in creating online markets through diversified product offerings.

Wu appears to believe, or would have his readers believe, that W3C is considering the adoption of a mandatory standard that would modify core aspects of the network architecture, and that therefore presents novel challenges to the operation of the internet. But this is wrong in two key respects:

  1. Except in the extremely limited manner as described below by the W3C, the EME extension does not contain mandates, and is designed only to simplify the user experience in accessing content that would otherwise require plug-ins; and
  2. These extensions are already incorporated into the major browsers. And of course, most importantly for present purposes, the standard in no way defines or harmonizes the use of DRM.

The W3C has clearly and succinctly explained the operation of the proposed extension:

The W3C is not creating DRM policies and it is not requiring that HTML use DRM. Organizations choose whether or not to have DRM on their content. The EME API can facilitate communication between browsers and DRM providers but the only mandate is not DRM but a form of key encryption (Clear Key). EME allows a method of playback of encrypted content on the Web but W3C does not make the DRM technology nor require it. EME is an extension. It is not required for HTML nor HTML5 video.
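The “common API” the W3C describes really is that thin. The following is a minimal, non-normative sketch in JavaScript of the discovery step the EME draft defines; “org.w3.clearkey” is the only key system the specification itself mandates, and the init-data type and codec string below are illustrative assumptions rather than anything the standard requires.

```javascript
// Sketch of EME's discovery step. The browser, not this code, decides
// which content-protection systems (if any) are available; nothing here
// implements or imposes a particular DRM.
const keySystemConfig = [{
  initDataTypes: ['cenc'], // illustrative; 'cenc' is a common init-data type
  videoCapabilities: [{ contentType: 'video/mp4; codecs="avc1.42E01E"' }],
}];

function requestClearKeyAccess() {
  // navigator.requestMediaKeySystemAccess is the API's single entry point:
  // it resolves with a MediaKeySystemAccess object if some installed system
  // satisfies the requested configuration, and rejects otherwise.
  if (typeof navigator === 'undefined' ||
      typeof navigator.requestMediaKeySystemAccess !== 'function') {
    return Promise.reject(new Error('EME is not available in this environment'));
  }
  return navigator.requestMediaKeySystemAccess('org.w3.clearkey', keySystemConfig);
}
```

In a browser, a resolved promise yields a MediaKeySystemAccess object from which the page creates keys and sessions; nothing in that flow dictates which protection system, if any, sits behind the key-system string.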

Like many internet commentators, Tim Wu fundamentally doesn’t like DRM, and his position here would appear to reflect his aversion to DRM rather than a response to the specific issues before the W3C. Interestingly, in arguing against DRM nearly a decade ago, Wu wrote:

Finally, a successful locking strategy also requires intense cooperation between many actors – if you protect a song with “superlock,” and my CD player doesn’t understand that, you’ve just created a dead product. (Emphasis added)

In other words, he understood the need for agreements in vertical distribution chains in order to properly implement protection schemes — integration that he opposes here (not to suggest that he supported them then, but only to highlight the disconnect between recognizing the need for coordination and simultaneously trying to prevent it).

Vint Cerf (himself no great fan of DRM — see here, for example) has offered a number of thoughtful responses to those, like Wu, who have objected to the proposed standard. Cerf writes on the ISOC listserv:

EME is plainly very general. It can be used to limit access to virtually any digital content, regardless of IPR status. But, in some sense, anyone wishing to restrict access to some service/content is free to do so (there are other means such as login access control, end/end encryption such as TLS or IPSEC or QUIC). EME is yet another method for doing that. Just because some content is public domain does not mean that every use of it must be unprotected, does it?

And later in the thread he writes:

Just because something is public domain does not mean someone can’t lock it up. Presumably there will be other sources that are not locked. I can lock up my copy of Gulliver’s Travels and deny you access except by some payment, but if it is public domain someone else may have a copy you can get. In any case, you can’t deny others the use of the content IF THEY HAVE IT. You don’t have to share your copy of public domain with anyone if you don’t want to.

Just so. It’s pretty hard to see the competition problems that could arise from facilitating more content providers making content available on the open web.

In short, Wu wants the W3C to develop limitations on rules when there are no relevant rules to modify. His dislike of DRM clouds his view of the limited nature of the EME proposal, which would largely track, rather than lead, the actions already being undertaken by the principal commercial actors on the internet, and which merely creates a structure for facilitating voluntary commercial transactions in ways that enhance the user experience.

The W3C process will not, as Wu intimates, introduce some pernicious, default protection system that would inadvertently lock down content; rather, it would encourage the development of digital markets on the open net rather than (or in addition to) through the proprietary, vertical markets where they are increasingly found today. Wu obscures reality rather than illuminating it through his poorly considered suggestion that EME will somehow lead to a new set of defaults that threaten core freedoms.

Finally, we can’t help but comment on Wu’s observation that

My larger point is that I think the history of the anti-circumvention laws suggests is (sic) hard to predict how [freedom would be affected]– no one quite predicted the inkjet market would be affected. But given the power of those laws, the potential for anti-competitive consequences certainly exists.

Let’s put aside the fact that W3C is not debating the laws surrounding circumvention, nor, as noted, developing usage rules. It remains troubling that Wu’s belief there are sometimes unintended consequences of actions (and therefore a potential for harm) would be sufficient to lead him to oppose a change to the status quo — as if any future, potential risk necessarily outweighs present, known harms. This is the Precautionary Principle on steroids. The EME proposal grew out of a desire to address impediments that prevent the viability and growth of online markets that sufficiently ameliorate the non-hypothetical harms of unauthorized uses. The EME proposal is a modest step towards addressing a known universe. A small step, but something to celebrate, not bemoan.

What does it mean to “own” something? A simple question (with a complicated answer, of course) that, astonishingly, goes unasked in a recent article in the Pennsylvania Law Review entitled, What We Buy When We “Buy Now,” by Aaron Perzanowski and Chris Hoofnagle (hereafter “P&H”). But how can we reasonably answer the question they pose without first trying to understand the nature of property interests?

P&H set forth a simplistic thesis for their piece: when an e-commerce site uses the term “buy” to indicate the purchase of digital media (instead of the term “license”), it deceives consumers. This is so, the authors assert, because the common usage of the term “buy” indicates that there will be some conveyance of property that necessarily includes absolute rights such as alienability, descendibility, and excludability, and digital content doesn’t generally come with these attributes. The authors seek to establish this deception through a poorly constructed survey regarding consumers’ understanding of the parameters of their property interests in digitally acquired copies. (The survey’s considerable limitations are a topic for another day….)

The issue is more than merely academic: NTIA and the USPTO have just announced that they will hold a public meeting

to discuss how best to communicate to consumers regarding license terms and restrictions in connection with online transactions involving copyrighted works… [as a precursor to] the creation of a multistakeholder process to establish best practices to improve consumers’ understanding of license terms and restrictions in connection with online transactions involving creative works.

Whatever the results of that process, it should not begin, or end, with P&H’s problematic approach.

Getting to their conclusion that platforms are engaged in deceptive practices requires two leaps of faith: First, that property interests are absolute and that any restraint on the use of “property” is inconsistent with the notion of ownership; and second, that consumers’ stated expectations (even assuming that they were measured correctly) alone determine the appropriate contours of legal (and economic) property interests. Both leaps are meritless.

Property and ownership are not absolute concepts

P&H are in such a rush to condemn downstream restrictions on the alienability of digital copies that they fail to recognize that “property” and “ownership” are not absolute terms, and are capable of being properly understood only contextually. Our very notions of what objects may be capable of ownership change over time, along with the scope of authority over owned objects. For P&H, the fact that there are restrictions on the use of an object means that it is not properly “owned.” But that overlooks our everyday understanding of the nature of property.

Ownership is far more complex than P&H allow, and ownership limited by certain constraints is still ownership. As Armen Alchian and Harold Demsetz note in The Property Right Paradigm (1973):

In common speech, we frequently speak of someone owning this land, that house, or these bonds. This conversational style undoubtedly is economical from the viewpoint of quick communication, but it masks the variety and complexity of the ownership relationship. What is owned are rights to use resources, including one’s body and mind, and these rights are always circumscribed, often by the prohibition of certain actions. To “own land” usually means to have the right to till (or not to till) the soil, to mine the soil, to offer those rights for sale, etc., but not to have the right to throw soil at a passerby, to use it to change the course of a stream, or to force someone to buy it. What are owned are socially recognized rights of action. (Emphasis added).

Literally, everything we own comes with a range of limitations on our use rights. Literally. Everything. So starting from a position that limitations on use mean something is not, in fact, owned, is absurd.

Moreover, in defining what we buy when we buy digital goods by reference to analog goods, P&H are comparing apples and oranges, without acknowledging that both apples and oranges are bought.

There has been a fair amount of discussion about the nature of digital content transactions (including by the USPTO and NTIA), and whether they are analogous to traditional sales of objects or more properly characterized as licenses. But this is largely a distinction without a difference, and the nature of the transaction is unnecessary in understanding that P&H’s assertion of deception is unwarranted.

Quite simply, we are accustomed to buying licenses as well as products. Whenever we buy a ticket — e.g., an airline ticket or a ticket to the movies — we are buying the right to use something or gain some temporary privilege. These transactions are governed by the terms of the license. But we certainly buy tickets, no? Alchian and Demsetz again:

The domain of demarcated uses of a resource can be partitioned among several people. More than one party can claim some ownership interest in the same resource. One party may own the right to till the land, while another, perhaps the state, may own an easement to traverse or otherwise use the land for specific purposes. It is not the resource itself which is owned; it is a bundle, or a portion, of rights to use a resource that is owned. In its original meaning, property referred solely to a right, title, or interest, and resources could not be identified as property any more than they could be identified as right, title, or interest. (Emphasis added).

P&H essentially assert that restrictions on the use of property are so inconsistent with the notion of property that it would be deceptive to describe the acquisition transaction as a purchase. But such a claim completely overlooks the fact that there are restrictions on any use of property in general, and on ownership of copies of copyright-protected materials in particular.

Take analog copies of copyright-protected works. While the lawful owner of a copy is able to lend that copy to a friend, sell it, or even use it as a hammer or paperweight, he or she can not offer it for rental (for certain kinds of works), cannot reproduce it, may not publicly perform or broadcast it, and may not use it to bludgeon a neighbor. In short, there are all kinds of restrictions on the use of said object — yet P&H have little problem with defining the relationship of person to object as “ownership.”

Consumers’ understanding of all the terms of exchange is a poor metric for determining the nature of property interests

P&H make much of the assertion that most users don’t “know” the precise terms that govern the allocation of rights in digital copies; this is the source of the “deception” they assert. But there is a cost to marking out the precise terms of use with perfect specificity (no contract specifies every eventuality), a cost to knowing the terms perfectly, and a cost to caring about them.

When we buy digital goods, we probably care a great deal about a few terms. For a digital music file, for example, we care first and foremost about whether it will play on our device(s). Other terms are of diminishing importance. Users certainly care whether they can play a song when offline, for example, but whether their children will be able to play it after they die? Not so much. That eventuality may, in fact, be specified in the license, but the nature of this particular ownership relationship includes a degree of rational ignorance on the users’ part: The typical consumer simply doesn’t care. In other words, she is, in Nobel-winning economist Herbert Simon’s term, “boundedly rational.” That isn’t deception; it’s a feature of life without which we would be overwhelmed by “information overload” and unable to operate. We have every incentive and ability to know the terms we care most about, and to ignore the ones about which we care little.

Relatedly, P&H also fail to understand the relationship between price and ownership. A digital song that is purchased from Amazon for $.99 comes with a set of potentially valuable attributes. For example:

  • It may be purchased on its own, without the other contents of an album;
  • It never degrades in quality, and it’s extremely difficult to misplace;
  • It may be purchased from one’s living room and be instantaneously available;
  • It can be easily copied or transferred onto multiple devices; and
  • It can be stored in Amazon’s cloud without taking up any of the consumer’s physical memory resources.

In many ways that matter to consumers, digital copies are superior to analog or physical ones. And yet, compared to physical media, on a per-song basis (assuming one could even purchase a physical copy of a single song without purchasing an entire album), $.99 may represent a considerable discount. Moreover, in 1982 when CDs were first released, they cost an average of $15. In 2017 dollars, that would be $38. Yet today most digital album downloads can be found for $10 or less.
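The inflation comparison above is simple arithmetic. A quick sketch, assuming a cumulative CPI factor of roughly 2.53 between 1982 and 2017 (the precise factor depends on which price index and month one uses):

```javascript
// Rough 1982 -> 2017 inflation adjustment for the average CD price.
// The ~2.53 cumulative CPI factor is an assumption; exact published
// figures vary slightly by index and by the months chosen.
const CPI_FACTOR_1982_TO_2017 = 2.53;

function in2017Dollars(price1982) {
  return price1982 * CPI_FACTOR_1982_TO_2017;
}

console.log(Math.round(in2017Dollars(15.0))); // ≈ 38
```

On that assumption, the $15 average CD of 1982 costs about $38 in 2017 dollars, against roughly $10 for a typical digital album download.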

Of course, songs purchased on CD or vinyl offer other benefits that a digital copy can’t provide. But the main thing — the ability to listen to the music — is approximately equal, and yet the digital copy offers greater convenience at (often) lower price. It is impossible to conclude that a consumer is duped by such a purchase, even if it doesn’t come with the ability to resell the song.

In fact, given the price-to-value ratio, it is perhaps reasonable to think that consumers know full well (or at least suspect) that there might be some corresponding limitations on use — the inability to resell, for example — that would explain the discount. For some people, those limitations might matter, and those people, presumably, figure out whether such limitations are present before buying a digital album or song. For everyone else, however, the ability to buy a digital song for $.99 — including all of the benefits of digital ownership, but minus the ability to resell — is a good deal, just as it is worth it to a home buyer to purchase a house, regardless of whether it is subject to various easements.

Consumers are, in fact, familiar with “buying” property with all sorts of restrictions

The inability to resell digital goods looms inordinately large for P&H: According to them, by virtue of the fact that digital copies may not be resold, “ownership” is no longer an appropriate characterization of the relationship between the consumer and her digital copy. P&H believe that digital copies of works are sufficiently similar to analog versions that traditional doctrines of exhaustion (which would permit a lawful owner of a copy of a work to dispose of that copy as he or she deems appropriate) should apply equally to digital copies, and thus that the inability to alienate the copy as the consumer wants means that there is no ownership interest per se.

But, as discussed above, even ownership of a physical copy doesn’t convey to the purchaser the right to make or allow any use of that copy. So why should we treat the ability to alienate a copy as the determining factor in whether it is appropriate to refer to the acquisition as a purchase? P&H arrive at this conclusion only through the illogical assertion that

Consumers operate in the marketplace based on their prior experience. We suggest that consumers’ “default” behavior is based on the experiences of buying physical media, and the assumptions from that context have carried over into the digital domain.

P&H want us to believe that consumers can’t distinguish between the physical and virtual worlds, and that their ability to use media doesn’t differentiate between these realms. But consumers do understand (to the extent that they care) that they are buying a different product, with different attributes. Does anyone try to play a vinyl record on his or her phone? There are perceived advantages and disadvantages to different kinds of media purchases. The ability to resell is only one of these — and for many (most?) consumers not likely the most important.

And, furthermore, the notion that consumers better understood their rights — and the limitations on ownership — in the physical world and that they carried these well-informed expectations into the digital realm is fantasy. Are we to believe that the consumers of yore understood that when they bought a physical record they could sell it, but not rent it out? That if they played that record in a public place they would need to pay performance royalties to the songwriter and publisher? Not likely.

Simply put, there is a wide variety of goods and services that we clearly buy, but that have all kinds of attributes that do not fit P&H’s crabbed definition of ownership. For example:

  • We buy tickets to events and membership in clubs (which, depending upon club rules, may not be alienated, and which always lapse for non-payment).
  • We buy houses notwithstanding the fact that in most cases all we own is the right to inhabit the premises for as long as we pay the bank (which actually retains more of the incidents of “ownership”).
  • In fact, we buy real property encumbered by a series of restrictive covenants: Depending upon where we live, we may not be able to build above a certain height, we may not paint the house certain colors, we may not be able to leave certain objects in the driveway, and we may not be able to resell without approval of a board.

We may or may not know (or care) about all of the restrictions on our use of such property. But surely we may accurately say that we bought the property and that we “own” it, nonetheless.

The reality is that we are comfortable with the notion of buying any number of limited property interests — including the purchasing of a license — regardless of the contours of the purchase agreement. The fact that some ownership interests may properly be understood as licenses rather than as some form of exclusive and permanent dominion doesn’t suggest that a consumer is not involved in a transaction properly characterized as a sale, or that a consumer is somehow deceived when the transaction is characterized as a sale — and P&H are surely aware of this.

Conclusion: The real issue for P&H is “digital first sale,” not deception

At root, P&H are not truly concerned about consumer deception; they are concerned about what they view as unreasonable constraints on the “rights” of consumers imposed by copyright law in the digital realm. Resale looms so large in their analysis not because consumers care about it (or are deceived about it), but because the real object of their enmity is the lack of a “digital first sale doctrine” that exactly mirrors the law regarding physical goods.

But Congress has already determined that there are sufficient distinctions between ownership of digital copies and ownership of analog ones to justify treating them differently, notwithstanding ownership of the particular copy. And for good reason: Trade in “used” digital copies is not a secondary market. Such copies are identical to those traded in the primary market and would compete directly with “pristine” digital copies. It makes perfect sense to treat ownership differently in these cases — and still to say that both digital and analog copies are “bought” and “owned.”

P&H’s deep-seated opposition to current law colors and infects their analysis — and, arguably, their failure to be upfront about it is the real deception. When one starts an analysis with an already-identified conclusion, the path from hypothesis to result is unlikely to withstand scrutiny, and that is certainly the case here.

I recently became aware of a decision from the High Court in South Africa that examines an interesting intersection of freedom of expression, copyright and contract. It addresses the issue of how to define the public interest in an environment of relatively unguarded rhetoric about the role of copyright in society that is worth exploring. But first, a quick recap of the relevant facts, none of which were in issue.

A well-known filmmaker, Ms. SE Vollenhoven, was hired by the South African broadcaster SABC to produce a documentary film exposing certain governmental improprieties. In her contract with SABC, Vollenhoven transferred all copyright interests to SABC in exchange for compensation. SABC ultimately decided that it was uncomfortable with the product and decided against releasing it. Vollenhoven initiated a discussion with SABC in an effort to buy back the rights to the film, but SABC refused, and ultimately sought an injunction preventing Vollenhoven from engaging in any acts that would infringe its rights in the film.

For the purposes of this analysis, let’s assume that all equities are with Vollenhoven, and that the public would gain from the release of the film. I am not in a position to make such a judgment personally, but certainly my sympathies would be with a filmmaker whose own expressive work is relegated to the dustbins due to a decision by a business partner to keep the film out of the public eye. Her frustration is clearly understandable. Let’s further assume that the government pressured SABC into not releasing the film—not because in fact I assume this, but because it is certainly possible, and I want to examine the copyright questions in a light least hospitable to the assertion of copyright. There is an axiom in legal circles that “bad facts make bad law,” but sometimes bad facts allow us to observe legal principles without artifice or obstruction in ways that are useful for our understanding of fundamental principles of law and justice.

This is just such a case. Much as we might sympathize with Vollenhoven, the arguments presented by her counsel would require us to believe that the rejection of the free will that undergirds freedom of contract and self-determination is a legitimate price in the quest for perceived freedom. I believe that is a fundamentally flawed proposition, and that a willingness to constrain the free will that allows a person to determine the scope of her consent undermines rather than advances the public good. The ends, even assuming that they are noble and just, do not justify a means that eliminates consent while seeking to improve the human condition. Vollenhoven and amici (we will get to them later) ask us to reject free will to achieve freedom. But there is no freedom at the end of that road. As the Court brilliantly and succinctly observed: “a limitation of freedom is irreconcilable with the right of choice.”

There are a number of equitable doctrines under which contracts may be vitiated, for example when they are the result of duress or where the consent required for formation of a contract is found to be absent. But here, no such equitable doctrines would apply. Vollenhoven was an accomplished filmmaker who freely negotiated a contract with SABC for her services. There is no suggestion from any party that the contract was somehow unfair, nor are we talking about the application of a non-negotiated provision of law vesting copyright in an employer or commissioning party. Vollenhoven herself does not assert anything different. Her unhappiness with the result of the contract is understandable, but doesn’t justify the attempt to circumvent it through a novel and dangerous mischaracterization of copyright laws and exceptions thereto.

This is where things get interesting. Since the contract under which SABC obtained the copyright in the documentary was unassailable, Vollenhoven and her supporters determined to “free the film” by asserting an implied exception to copyright laws to permit dissemination of information in the public interest. This took a variety of forms, all of which eventually defaulted to the proposition that the public’s interest in access superseded the copyright owner’s interest in protection. I take particular note of the participation of the Freedom of Expression Institute (FXI) on behalf of Vollenhoven since they most perfectly articulate the position that copyright is a form of censorship, having written in their 2015 copyright reform submission to DTI that: “FXI believes that copyright law and free speech are fundamentally in conflict. It should come as no surprise, at all, that both governments and the private sector use copyright law to suppress speech and dissent.” Vollenhoven’s counsel, as summarized by the Court, argued that the Copyright law exists, inter alia, “to promote the free spread of art, ideas and information, not to hinder it and to regulate copyright so as to enhance a vibrant culture in South Africa. Thus on a purposeful interpretation of the Act, so it is argued, it is not just to protect owners of copyright but to advance the public good.”

The Court was unimpressed, finding that: “There is nothing…to support the meaning of public good relied on by the Respondents. Their construction of public good or welfare is equated to dissemination of ideas and this is nowhere to be found or implied….The view that copyright aims to promote public disclosure and dissemination of works cannot be regarded as a true reflection of the purpose or intent of the Act and is not part of our copyright law. The Respondents’ conception of the purposes of the Copyright Act is overbroad. The Act by no means purports to regulate or promote the free spread of ideas although it undoubtedly is a mechanism by which this result may be effected. It is straining the proper limits of the Act to find some kind of implied condition of dissemination in the conferral of copyright.”

And of course, the Court is absolutely correct–enjoining the distribution of the film doesn’t prevent the distribution of the information/ideas contained therein, only the specific original expression of said ideas. Vollenhoven, or anyone else, remains free to tell the story through separate vehicles. As the Court explained: “[Vollenhoven] concedes readily that the respondents have right to tell the story in a different work and have not attempted to stifle this form of expression. In truth the respondents’ freedom of speech is not impinged at all. What is impinged is the use of the work which the respondents sold to the applicant and were substantially rewarded monetarily. The copyrights are vested by law in the applicants. This cannot be conflated with an infringement of freedom of speech. Vollenhoven shows that she is alive to the distinction between the work and the underlying story or idea and does not shirk from asserting her rights to exploit the story as she is well entitled to do.”

The contrary rule argued by her counsel and by FXI is untenable, and would require embracing the perverse logic that the protection of expression is itself a restriction on freedom of expression, a proposition worthy of Wonderland’s Red Queen. If the right of access enjoyed by the public always supersedes the individual’s right to control the uses of her property, then copyright is truly meaningless. FXI’s position essentially acknowledges this. While I think that FXI is mistaken, and fails to capture how copyright serves to democratize the production of original cultural materials for the benefit of society, I will at least give them credit for their directness. Perhaps they believe that state support for the arts is a better tool for sustaining creators. Perhaps they believe in private patronage. But unlike many of their copyright-skeptic peers in the west, they at least own their narrative and don’t feel the need to say that they believe in copyright while rejecting any modality for its protection. It’s a flawed vision that fails to reflect that the interests of the public are served by sustaining creators, and by protecting fundamental human rights in connection with the creation of original works. But it is a vision. Hopefully one that will evolve through an increased recognition that ensuring consent in a technological universe that celebrates lack of permission is central to advancing our humanity and retaining and celebrating our cultural differences.

My colleague, Neil Turkewitz, begins his fine post for Fair Use Week (read: crashing Fair Use Week) by noting that

Many of the organizations celebrating fair use would have you believe, because it suits their analysis, that copyright protection and the public interest are diametrically opposed. This is merely a rhetorical device, and is a complete fallacy.

If I weren’t a recovering law professor, I would just end there: that about sums it up, and “the rest is commentary,” as they say. Alas….  

All else equal, creators would like as many people to license their works as possible; there’s no inherent incompatibility between “incentives and access” (which is just another version of the fallacious “copyright protection versus the public interest” trope). Everybody wants as much access as possible. Sure, consumers want to pay as little as possible for it, and creators want to be paid as much as possible. That’s a conflict, and at the margin it can seem like a conflict between access and incentives. But it’s not a fundamental, philosophical, and irreconcilable difference — it’s the last 15 minutes of negotiation before the contract is signed.

Reframing what amounts to a fundamental agreement into a pitched battle for society’s soul is indeed a purely rhetorical device — and a mendacious one, at that.

The devil is in the details, of course, and there are still disputes on the margin, as I said. But it helps to know what they’re really about, and why they are so far from the fanciful debates the copyright scolds wish we were having.

First, price is, in fact, a big deal. For the creative industries it can be the difference between, say, making one movie or a hundred, and for artists it can be the difference between earning a livelihood writing songs or packing it in for a desk job.

But despite their occasional lip service to the existence of trade-offs, many “fair-users” see price — i.e., licensing agreements — as nothing less than a threat to social welfare. After all, the logic runs, if copies can be made at (essentially) zero marginal cost, a positive price is just extortion. They say, “more access!,” but they don’t mean, “more access at an agreed-upon price;” they mean “zero-price access, and nothing less.” These aren’t the same thing, and when “fair use” is a stand-in for “zero-price use,” fair-users are moving the goalposts — and being disingenuous about it.

The other, related problem, of course, is piracy. Sometimes rightsholders’ objections to the expansion of fair use are about limiting access. But typically that’s true only where fine-tuned contracting isn’t feasible, and where the only realistic choice they’re given is between no access for some people, and pervasive (and often unstoppable) piracy. There are any number of instances where rightsholders have no realistic prospect of efficiently negotiating licensing terms and receiving compensation, and would welcome greater access to their works even without a license — as long as the result isn’t also (or only) excessive piracy. The key thing is that, in such cases, opposition to fair use isn’t opposition to reasonable access, even free access. It’s opposition to piracy.

Time-shifting with VCRs and space-shifting with portable mp3 players (to take two contentious historical examples) fall into this category (even if they are held up — as they often are — by the fair-users as totems of their fanciful battle). At least at the time of the Sony and Diamond Rio cases, when there was really no feasible way to enforce licenses or charge differential prices for such uses, the choice rightsholders faced was effectively all-or-nothing, and they had to pick one. I’m pretty sure, all else equal, they would have supported such uses, even without licenses and differential compensation — except that the piracy risk was so significant that it swamped the likely benefits, tilting the scale toward “nothing” instead of “all.”

Again, the reality is that creators and rightsholders were confronted with a choice between two imperfect options; neither was likely “right,” and they went with the lesser evil. But one can’t infer from that constrained decision an inherent antipathy to fair use. Sadly, such decisions have to be made in the real world, not law reviews and EFF blog posts. As economists Benjamin Klein, Andres Lerner and Kevin Murphy put it regarding the Diamond Rio case:

[R]ather than representing an attempt by copyright-holders to increase their profits by controlling legally established “fair uses,”… the obvious record-company motivation is to reduce the illegal piracy that is encouraged by the technology. Eliminating a “fair use” [more accurately, “opposing an expansion of fair use” -ed.] is not a benefit to the record companies; it is an unfortunate cost they have to bear to solve the much larger problem of infringing uses. The record companies face competitive pressure to avoid these costs by developing technologies that distinguish infringing from non-infringing copying.

This last point is important, too. Fair-users don’t like technological protection measures, either, even if they actually facilitate licensing and broader access to copyrighted content. But that really just helps to reveal the poverty of their position. They should welcome technology that expands access, even if it also means that it enables rightsholders to fine-tune their licenses and charge a positive price. Put differently: Why do they hate Spotify!?

I’m just hazarding a guess here, but I suspect that the antipathy to technological solutions goes well beyond the short-term limits on some current use of content that copyright minimalists think shouldn’t be limited. If technology, instead of fair use, is truly determinative of the extent of zero-price access, then their ability to seriously influence (read: rein in) the scope of copyright is diminished. Fair use is amorphous. They can bring cases, they can lobby Congress, they can pen strongly worded blog posts, and they can stage protests. But they can’t do much to stop technological progress. Of course, technology does at least as much to limit the enforceability of licenses and create new situations where zero-price access is the norm. But still, R&D is a lot harder than PR.

What’s more, if technology were truly determinative, it would frequently mean that former fair uses could become infringing at some point (or vice versa, of course). Frankly, there’s no reason for time-shifting of TV content to continue to be considered a fair use today. We now have the technology to both enable time shifting and to efficiently license content for the purpose, charge a differential price for it, and enforce the terms. In fact, all of that is so pervasive today that most users do pay for time-shifting technologies, under license terms that presumably define the scope of their right to do so; they just may not have read the contract. Where time-shifting as a fair use rears its ugly head today is in debates over new, infringing technology where, in truth, the fair use argument is really a malleable pretext to advocate for a restriction on the scope of copyright (e.g., Aereo).

In any case, as the success of business models like Spotify and Netflix (to say nothing of Comcast’s X1 interface and new Xfinity Stream app) attest, technology has enabled users to legitimately engage in what once seemed conceivable only under fair use. Yes, at a price — one that millions of people are willing to pay. It is surely the case that rightsholders’ licensing of technologies like these has made content more accessible, to more people, and with higher-quality service, than a regime of expansive unlicensed use could ever have done.

At the same time, let’s not forget that, often, even when they could efficiently distribute content only at a positive price, creators offer up scads of content for free, in myriad ways. Sure, the objective is to maximize revenue overall by increasing exposure, price discriminating, or enhancing the quality of paid-for content in some way — but so what? More content is more content, and easier access is easier access. All of that uncompensated distribution isn’t rightsholders nodding toward the copyright scolds’ arguments; it’s perfectly consistent with licensing. Obviously, the vast majority of music, for example, is listened-to subject to license agreements, not because of fair use exceptions or rightsholders’ largesse.

For the vast majority of creators, users and uses, licensed access works, and gets us massive amounts of content and near ubiquitous access. The fair use disputes we do have aren’t really about ensuring broad access; that’s already happening. Rather, those disputes are either niggling over the relatively few ambiguous margins on the one hand, or, on the other, fighting the fair-users’ manufactured, existential fight over whether copyright exceptions will subsume the rule. The former is to be expected: Copyright boundaries will always be imperfect, and courts will always be asked to make the close calls. The latter, however, is simply a drain on resources that could be used to create more content, improve its quality, distribute it more broadly, or lower prices.

Copyright law has always been, and always will be, operating in the shadow of technology — technology both for distribution and novel uses, as well as for pirating content. The irony is that, as digital distribution expands, it has dramatically increased the risk of piracy, even as copyright minimalists argue that the low costs of digital access justify a more expansive interpretation of fair use — which would, in turn, further increase the risk of piracy.

Creators’ opposition to this expansion has nothing to do with opposition to broad access to content, and everything to do with ensuring that piracy doesn’t overwhelm their ability to get paid, and to produce content in the first place.

Even were fair use to somehow disappear tomorrow, there would be more and higher-quality content, available to more people in more places, than ever before. But creators have no interest in seeing fair use disappear. What they do have is an interest in licensing their content as broadly as possible when doing so is efficient, and in minimizing piracy. Sometimes legitimate fair-use questions get caught in the middle. We could and should have a reasonable debate over the precise contours of fair use in such cases. But the false dichotomy of creators against users makes that extremely difficult. Until the disingenuous rhetoric is clawed back, we’re stuck with needless fights that don’t benefit either users or creators — although they do benefit the policy scolds, academics, wonks and businesses that foment them.

In a recent article for the San Francisco Daily Journal I examine Google v. Equustek: a case currently before the Canadian Supreme Court involving the scope of jurisdiction of Canadian courts to enjoin conduct on the internet.

In the piece I argue that

a globally interconnected system of free enterprise must operationalize the rule of law through continuous evolution, as technology, culture and the law itself evolve. And while voluntary actions are welcome, conflicts between competing, fundamental interests persist. It is at these edges that the over-simplifications and pseudo-populism of the SOPA/PIPA uprising are particularly counterproductive.

The article highlights the problems associated with a school of internet exceptionalism that would treat the internet as largely outside the reach of laws and regulations — not by affirmative legislative decision, but by virtue of jurisdictional default:

The direct implication of the “internet exceptionalist” position is that governments lack the ability to impose orders that protect their citizens against illegal conduct when such conduct takes place via the internet. But simply because the internet might be everywhere and nowhere doesn’t mean that it isn’t still susceptible to the application of national laws. Governments neither will nor should accept the notion that their authority is limited to conduct of the last century. The internet isn’t that exceptional.

Read the whole thing!