Archives For Copyright

Various states have recently enacted legislation that requires authors, publishers, and other copyright holders to license digital texts, including e-books and audiobooks, to lending libraries. These laws violate the Constitution’s conferral on Congress of the exclusive authority to set national copyright law. Furthermore, as a policy matter, they offend free-market principles.

The laws interfere with the right of copyright holders to set the price for the fruit of their intellectual labor. The laws lower incentives for the production of new creative digital works in the future, thereby reducing consumers’ and producers’ surplus. Furthermore, the claim that “unfair” pricing prevents libraries from stocking “sufficient” numbers of e-books to satisfy public demand is belied by the reality that libraries have substantially grown their digital collections in recent years.

Finally, proponents of legislation ignore the fact that libraries actually pay far less than consumers do when they purchase an e-book license for personal use.

A more detailed exploration of this important topic is found in the Federalist Society Regulatory Transparency Project’s just-released paper, “State Mandates for Digital Book Licenses to Libraries are Unconstitutional and Undermine the Free Market.” Read and enjoy!

Output of the LG Research AI to the prompt: “a system of copyright for artificial intelligence”

Not only have digital-image generators like Stable Diffusion, DALL-E, and Midjourney—which make use of deep-learning models and other artificial-intelligence (AI) systems—created some incredible (and sometimes creepy – see above) visual art, but they’ve engendered a good deal of controversy, as well. Human artists have banded together as part of a fledgling anti-AI campaign; lawsuits have been filed; and policy experts have been trying to think through how these machine-learning systems interact with various facets of the law.

Debates about the future of AI have particular salience for intellectual-property rights. Copyright is notoriously difficult to protect online, and these expert systems add an additional wrinkle: it can at least be argued that their outputs can be unique creations. There are also, of course, moral and philosophical objections to those arguments, with many grounded in the supposition that only a human (or something with a brain, like humans) can be creative.

Leaving aside for the moment a potentially pitched battle over the definition of “creation,” we should be able to find consensus that at least some of these systems produce unique outputs and are not merely cutting and pasting other pieces of visual imagery into a new whole. That is, at some level, the machines are engaging in a rudimentary sort of “learning” about how humans arrange colors and lines when generating images of certain subjects. The machines then reconstruct this process and produce a new set of lines and colors that conform to the patterns they found in the human art.

But that isn’t the end of the story. Even if some of these systems’ outputs are unique and noninfringing, the way the machines learn—by ingesting existing artwork—can raise a number of thorny issues. Indeed, these systems are arguably infringing copyright during the learning phase, and such use may not survive a fair-use analysis.

We are still in the early days of thinking through how this new technology maps onto the law. Answers will inevitably come, but for now, there are some very interesting questions about the intellectual-property implications of AI-generated art, which I consider below.

The Points of Collision Between Intellectual Property Law and AI-Generated Art

AI-generated art is not a single thing. It is, rather, a collection of differing processes, each with different implications for the law. For the purposes of this post, I am going to deal with image-generation systems that use “generative adversarial networks” (GANs) and diffusion models. The various implementations of each will differ in some respects, but from what I understand, the ways that these techniques can be used to generate all sorts of media are sufficiently similar that we can begin to sketch out some of their legal implications.

A (very) brief technical description

This is a very high-level overview of how these systems work; for a more detailed (but very readable) description, see here.

A GAN is a type of machine-learning model that consists of two parts: a generator and a discriminator. The generator is trained to create new images that look like they come from a particular dataset, while the discriminator is trained to distinguish the generated images from real images in the dataset. The two parts are trained together in an adversarial manner, with the generator trying to produce images that can fool the discriminator and the discriminator trying to correctly identify the generated images.
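To make the adversarial dynamic concrete, here is a deliberately minimal sketch of that two-part training loop, using a one-dimensional “dataset” and only NumPy. The architecture, learning rate, and step count are arbitrary illustrative choices, not any real system’s design:

```python
# Toy GAN-style loop: a generator learns to imitate a 1-D "real" distribution
# while a logistic discriminator learns to tell real from generated samples.
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: samples from the target distribution the generator must imitate.
def sample_real(n):
    return rng.normal(loc=4.0, scale=0.5, size=(n, 1))

# Generator: a single affine map from noise to data space (weights w, b).
# Discriminator: logistic regression scoring "realness" (weights v, c).
w, b = rng.normal(size=(1, 1)), np.zeros(1)
v, c = rng.normal(size=(1, 1)), np.zeros(1)

def generate(n):
    z = rng.normal(size=(n, 1))                 # input noise
    return z @ w + b

def discriminate(x):
    return 1.0 / (1.0 + np.exp(-(x @ v + c)))   # estimated P(x is real)

lr = 0.05
for step in range(2000):
    real, fake = sample_real(64), generate(64)
    # Discriminator ascends: push D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = discriminate(real), discriminate(fake)
    v += lr * (real.T @ (1 - d_real) - fake.T @ d_fake) / 64
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))
    # Generator ascends: nudge fakes in the direction that raises D(fake),
    # i.e., tries to fool the discriminator.
    z = rng.normal(size=(64, 1))
    fake = z @ w + b
    d_fake = discriminate(fake)
    g_signal = (d_fake * (1 - d_fake)) @ v.T    # dD/dfake
    w += lr * (z.T @ g_signal) / 64
    b += lr * np.mean(g_signal)

print(float(np.mean(generate(1000))))  # should drift toward the real mean (~4)
```

Even at this toy scale, the core dynamic described above is visible: the generator’s outputs migrate toward the real data precisely because the discriminator keeps penalizing anything that looks fake.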

A diffusion model, by contrast, analyzes the distribution of information in an image, as noise is progressively added to it. This kind of algorithm analyzes characteristics of sample images—like the distribution of colors or lines—in order to “understand” what counts as an accurate representation of a subject (i.e., what makes a picture of a cat look like a cat and not like a dog).

For example, in the generation phase, systems like Stable Diffusion start with randomly generated noise, and work backward in “denoising” steps to essentially “see” shapes:

The sampled noise is predicted so that if we subtract it from the image, we get an image that’s closer to the images the model was trained on (not the exact images themselves, but the distribution – the world of pixel arrangements where the sky is usually blue and above the ground, people have two eyes, cats look a certain way – pointy ears and clearly unimpressed).

It is relevant here that, once networks using these techniques are trained, they do not need to rely on saved copies of the training images in order to generate new images. Of course, it’s possible that some implementations might be designed in a way that does save copies of those images, but for the purposes of this post, I will assume we are talking about systems that save known works only during the training phase. The models that are produced during training are, in essence, instructions to a different piece of software about how to start with a text prompt from a user and a palette of pure noise, and progressively “discover” signal in that image until some new image emerges.
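The “denoising” arithmetic described above can be sketched in a few lines. In this toy example, the noise “predictor” is handed the true noise, in order to isolate the core identity that subtracting scaled predicted noise recovers a cleaner image; a real system like Stable Diffusion instead estimates the noise with a trained network, and the variance schedule here is an arbitrary illustrative choice:

```python
# Toy forward-noising and reverse-denoising steps from a diffusion model.
import numpy as np

rng = np.random.default_rng(1)
image = rng.uniform(0, 1, size=(8, 8))   # stand-in for a training image

# Simple variance schedule: alpha_bar_t shrinks as t grows.
T = 50
alpha_bar = np.cumprod(1.0 - np.linspace(1e-3, 0.05, T))

def add_noise(x0, t, eps):
    # Forward process: x_t = sqrt(a_bar) * x0 + sqrt(1 - a_bar) * eps
    a = alpha_bar[t]
    return np.sqrt(a) * x0 + np.sqrt(1 - a) * eps

def denoise(xt, t, predicted_eps):
    # Invert the forward step, given a (here: perfect) noise prediction.
    a = alpha_bar[t]
    return (xt - np.sqrt(1 - a) * predicted_eps) / np.sqrt(a)

eps = rng.normal(size=image.shape)
noisy = add_noise(image, T - 1, eps)      # heavily noised version
recovered = denoise(noisy, T - 1, eps)    # "seeing" the shape again

print(np.abs(recovered - image).max())    # ~0: exact recovery with true noise
```

The hard part a production model learns, and the part this sketch cheats on, is predicting `eps` from the noisy image alone; that prediction is where the patterns absorbed from training data come in.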

Input-stage use of intellectual property

OpenAI, the creator of some of the most popular AI tools, is not shy about its use of protected works in the training phase of AI algorithms. In comments to the U.S. Patent and Trademark Office (PTO), it notes that:

…[m]odern AI systems require large amounts of data. For certain tasks, that data is derived from existing publicly accessible “corpora”… of data that include copyrighted works. By analyzing large corpora (which necessarily involves first making copies of the data to be analyzed), AI systems can learn patterns inherent in human-generated data and then use those patterns to synthesize similar data which yield increasingly compelling novel media in modalities as diverse as text, image, and audio. (emphasis added).

Thus, at the training stage, the most popular forms of machine-learning systems require making copies of existing works. And where the material being used is either not in the public domain or is not licensed, an infringement occurs (as Getty Images notes in a suit it recently filed against Stability AI). Accordingly, some affirmative defense is needed to excuse the infringement.

Toward this end, OpenAI believes that its algorithmic training should qualify as a fair use. Other major services that use these AI techniques to “learn” from existing media would likely make similar arguments. But, at least in the way that OpenAI has framed the fair-use analysis (that these uses are sufficiently “transformative”), it’s not clear that they should qualify.

The purpose and character of the use

In brief, fair use—found in 17 USC § 107—provides an affirmative defense against infringement when the use is “for purposes such as criticism, comment, news reporting, teaching…, scholarship, or research.” When weighing a fair-use defense, a court must balance a number of factors:

  1. the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;
  2. the nature of the copyrighted work;
  3. the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and
  4. the effect of the use upon the potential market for or value of the copyrighted work.

OpenAI’s fair-use claim is rooted in the first factor: the nature and character of the use. I should note, then, that what follows is solely a consideration of Factor 1, with special attention paid to whether these uses are “transformative.” But it is important to stipulate that fair-use analysis is a multi-factor test and that, even within the first factor, it’s not mandatory that a use be “transformative.” It is entirely possible that a court balancing all of the factors could, indeed, find that OpenAI is engaged in fair use, even if it does not agree that the use is “transformative.”

Whether the use of copyrighted works to train an AI is “transformative” is certainly a novel question, but it is likely answered through an observation that the U.S. Supreme Court made in Campbell v. Acuff-Rose Music:

[W]hat Sony said simply makes common sense: when a commercial use amounts to mere duplication of the entirety of an original, it clearly “supersede[s] the objects,”… of the original and serves as a market replacement for it, making it likely that cognizable market harm to the original will occur… But when, on the contrary, the second use is transformative, market substitution is at least less certain, and market harm may not be so readily inferred.

A key question, then, is whether training an AI on copyrighted works amounts to mere “duplication of the entirety of an original” or is sufficiently “transformative” to support a fair-use finding. OpenAI, as noted above, believes its use is highly transformative. According to its comments:

Training of AI systems is clearly highly transformative. Works in training corpora were meant primarily for human consumption for their standalone entertainment value. The “object of the original creation,” in other words, is direct human consumption of the author’s expression. Intermediate copying of works in training AI systems is, by contrast, “non-expressive”: the copying helps computer programs learn the patterns inherent in human-generated media. The aim of this process—creation of a useful generative AI system—is quite different than the original object of human consumption. The output is different too: nobody looking to read a specific webpage contained in the corpus used to train an AI system can do so by studying the AI system or its outputs. The new purpose and expression are thus both highly transformative.

But the way that OpenAI frames its system works against its interests in this argument. As noted above, and reinforced in the immediately preceding quote, an AI system like DALL-E or Stable Diffusion is actually made of at least two distinct pieces. The first is a piece of software that ingests existing works and creates a file that can serve as instructions to the second piece of software. The second piece of software then takes the output of the first part and can produce independent results. Thus, there is a clear discontinuity in the process, whereby the ultimate work created by the system is disconnected from the creative inputs used to train the software.

Therefore, contrary to what OpenAI asserts, the protected works are indeed ingested into the first part of the system “for their standalone entertainment value.” That is to say, the software is learning what counts as “standalone entertainment value” and, therefore, the works must be used in those terms.

Surely, a computer is not sitting on a couch and surfing for its own entertainment. But it is solely for that very “standalone entertainment value” that the first piece of software is being shown copyrighted material. By contrast, parody or “remixing” uses incorporate the work into some secondary expression that transforms the input. The way these systems work is to learn what makes a piece entertaining and then to discard that piece altogether. Moreover, this use of art qua art most certainly interferes with the existing market insofar as it occurs in lieu of reaching a licensing agreement with rightsholders.

The 2nd U.S. Circuit Court of Appeals dealt with an analogous case. In American Geophysical Union v. Texaco, the 2nd Circuit considered whether Texaco’s photocopying of scientific articles produced by the plaintiffs qualified for a fair-use defense. Texaco employed between 400 and 500 research scientists and, as part of supporting their work, maintained subscriptions to a number of scientific journals. It was common practice for Texaco’s scientists to photocopy entire articles and save them in a file.

The plaintiffs sued for copyright infringement. Texaco asserted that photocopying by its scientists for the purposes of furthering scientific research—that is to train the scientists on the content of the journal articles—should count as a fair use, at least in part because it was sufficiently “transformative.” The 2nd Circuit disagreed:

The “transformative use” concept is pertinent to a court’s investigation under the first factor because it assesses the value generated by the secondary use and the means by which such value is generated. To the extent that the secondary use involves merely an untransformed duplication, the value generated by the secondary use is little or nothing more than the value that inheres in the original. Rather than making some contribution of new intellectual value and thereby fostering the advancement of the arts and sciences, an untransformed copy is likely to be used simply for the same intrinsic purpose as the original, thereby providing limited justification for a finding of fair use… (emphasis added).

As in the case at hand, the 2nd Circuit observed that making full copies of the scientific articles was solely for the consumption of the material itself. A rejoinder, of course, is that training these AI systems surely advances scientific research and, thus, does foster the “advancement of the arts and sciences.” But in American Geophysical Union, where the secondary use was explicitly for the creation of new and different scientific outputs, the court still held that making copies of one scientific article in order to learn and produce new scientific innovations did not count as “transformative.”

What this case demonstrates is that one cannot merely assert that some social goal will be advanced in the future by permitting an exception to copyright protection today. As the 2nd Circuit put it:

…the dominant purpose of the use is a systematic institutional policy of multiplying the available number of copies of pertinent copyrighted articles by circulating the journals among employed scientists for them to make copies, thereby serving the same purpose for which additional subscriptions are normally sold, or… for which photocopying licenses may be obtained.

The secondary use itself must be transformative and different. Where an AI system ingests copyrighted works, that use is simply not transformative; it is using the works in their original sense in order to train a system to be able to make other original works. As in American Geophysical Union, the AI creators are completely free to seek licenses from rightsholders in order to train their systems.

Finally, there is a sense in which this machine learning might not infringe on copyrights at all. To my knowledge, the technology does not itself exist, but if it were possible for a machine to somehow “see” in the way that humans do—without using stored copies of copyrighted works—then merely “learning” from those works, if we can call it that, probably would not violate copyright laws.

Do the outputs of these systems violate intellectual property laws?

The outputs of GANs and diffusion models may or may not violate IP laws, but there is nothing inherent in the processes described above to dictate that they must. As noted, the most common AI systems do not save copies of existing works, but merely “instructions” (more or less) on how to create new works that conform to patterns they found by examining existing work. If we assume that a system isn’t violating copyright at the input stage, it’s entirely possible that it can produce completely new pieces of art that have never before existed and do not violate copyright.

They can, however, be made to violate IP rights. For example, trademark violations appear to be one of the most popular uses of these AI systems by end users. To take but one example, a quick search of Google Images for “midjourney iron man” returns a slew of images that almost certainly violate trademarks for the character Iron Man. Similarly, these systems can be instructed to generate art that is not just “in the style” of a particular artist, but that very closely resembles existing pieces. In this sense, the system would be making a copy that theoretically infringes. 

There is a common bug in such systems that leads to outputs that are more likely to violate copyright in this way. Known as “overfitting,” it occurs when the training phase of these AI systems is presented with too many instances of a particular image. The resulting model then contains too much information about that specific image, such that when the AI generates a new image, it is constrained to producing something very close to the original.
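One way providers might guard against this failure mode is to screen outputs against the training set before release. The following sketch uses plain cosine similarity and an arbitrary threshold; a real system would need perceptual hashing or learned embeddings at scale, and nothing here reflects any actual provider’s practice:

```python
# Hypothetical near-duplicate screen for generated images (as flat arrays).
import numpy as np

def cosine_similarity(a, b):
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def flag_near_duplicates(generated, training_set, threshold=0.98):
    """Return indices of training images the output is suspiciously close to."""
    return [i for i, img in enumerate(training_set)
            if cosine_similarity(generated, img) >= threshold]

rng = np.random.default_rng(2)
training = [rng.uniform(0, 1, (16, 16)) for _ in range(100)]

# An overfit model might emit something nearly identical to a training image:
memorized = training[42] + rng.normal(0, 0.01, (16, 16))
print(flag_near_duplicates(memorized, training))   # → [42]

# A genuinely novel output should match nothing:
novel = rng.uniform(0, 1, (16, 16))
print(flag_near_duplicates(novel, training))       # → []
```

A filter of this sort addresses only the output side; it does nothing, of course, about the input-stage copying discussed earlier.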

An argument can also be made that generating art “in the style of” a famous artist violates moral rights (in jurisdictions where such rights exist).

At least in the copyright space, cases like Sony are going to become crucial. Does the user side of these AI systems have substantial noninfringing uses? If so, the firms that host software for end users could avoid secondary-infringement liability, and the onus would fall on users to avoid violating copyright laws. At the same time, it seems plausible that legislatures could place some obligation on these providers to implement filters to mitigate infringement by end users.

Opportunities for New IP Commercialization with AI

There are a number of ways that AI systems may inexcusably infringe on intellectual-property rights. As a best practice, I would encourage the firms that operate these services to seek licenses from rightsholders. While this would surely be an expense, it also opens new opportunities for both sides to generate revenue.

For example, an AI firm could develop its own version of YouTube’s ContentID that allows creators to opt their work into training. For some well-known artists this could be negotiated with an upfront licensing fee. On the user-side, any artist who has opted in could then be selected as a “style” for the AI to emulate. When users generate an image, a royalty payment to the artist would be created. Creators would also have the option to remove their influence from the system if they so desired. 
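The opt-in mechanism sketched above might look something like the following. Everything here (the class name, the flat per-generation royalty, the opt-out behavior) is invented purely for illustration:

```python
# Hypothetical opt-in style registry that accrues royalties per generation.
from dataclasses import dataclass, field

@dataclass
class StyleRegistry:
    royalty_per_generation: float = 0.05           # illustrative rate, in dollars
    opted_in: dict = field(default_factory=dict)   # artist -> accrued royalties

    def opt_in(self, artist: str) -> None:
        self.opted_in.setdefault(artist, 0.0)

    def opt_out(self, artist: str) -> None:
        # Creators can remove their influence from the system on request.
        self.opted_in.pop(artist, None)

    def generate(self, prompt: str, style: str) -> str:
        if style not in self.opted_in:
            raise PermissionError(f"{style} has not licensed their style")
        self.opted_in[style] += self.royalty_per_generation
        return f"<image: '{prompt}' in the style of {style}>"

registry = StyleRegistry()
registry.opt_in("Artist A")
registry.generate("a lighthouse at dusk", style="Artist A")
registry.generate("a lighthouse at dawn", style="Artist A")
print(registry.opted_in["Artist A"])   # accrued royalties so far
```

The interesting design questions live outside this sketch: how to meter generations honestly, how to price well-known styles, and how to unwind a model’s training once an artist opts out.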

Undoubtedly, there are other ways to monetize the relationship between creators and the use of their work in AI systems. Ultimately, the firms that run these systems will not be able to simply wish away IP laws. There are going to be opportunities for creators and AI firms to both succeed, and the law should help to generate that result.

[This post is the first in our FTC UMC Rulemaking symposium. You can find other posts at the symposium page here. Truth on the Market also invites academics, practitioners, and other antitrust/regulation commentators to send us 1500-4000 word responses for potential inclusion in the symposium.]

There is widespread interest in the potential tools that the Biden administration’s Federal Trade Commission (FTC) may use to address a range of competition-related and competition-adjacent concerns. A focal point for this interest is the potential that the FTC may use its broad authority to regulate unfair methods of competition (UMC) under Section 5 of the FTC Act to make rules that address a wide range of conduct. This “potential” is expected to become a “likelihood” with the confirmation of Alvaro Bedoya as a third Democratic commissioner, which could occur any day.

This post marks the start of a Truth on the Market symposium that brings together academics, practitioners, and other commentators to discuss issues relating to potential UMC-related rulemaking. Contributions to this symposium will cover a range of topics, including:

  • Constitutional and administrative-law limits on UMC rulemaking: does such rulemaking potentially present “major question” or delegation issues, or other issues under the Administrative Procedure Act (APA)? If so, what is the scope of permissible rulemaking?
  • Substantive issues in UMC rulemaking: costs and benefits to be considered in developing rules, prudential concerns, and similar concerns.
  • Using UMC to address competition-adjacent issues: consideration of how or whether the FTC can use its UMC authority to address firm conduct that is governed by other statutory or regulatory regimes. For instance, firms using copyright law and the Digital Millennium Copyright Act (DMCA) to limit competitors’ ability to alter or repair products, or labor or entry issues that might be governed by licensure or similar laws.

Timing and Structure of the Symposium

Starting tomorrow, one or two contributions to this symposium will be posted each morning. During the first two weeks of the symposium, we will generally try to group posts on similar topics together. When multiple contributions are posted on the same day, they will generally be implicitly or explicitly in dialogue with each other. The first week’s contributions will generally focus on constitutional and administrative law issues relating to UMC rulemaking, while the second week’s contributions will focus on more specific substantive topics. 

Readers are encouraged to engage with these posts through comments. In addition, academics, practitioners, and other antitrust and regulatory commentators are invited to submit additional contributions for inclusion in this symposium. Such contributions may include responses to posts published by others or newly developed ideas. Interested authors should submit pieces for consideration to Gus Hurwitz and Keith Fierro Benson.

This symposium will run through at least Friday, May 6. We do not, however, anticipate ending or closing it at that time. To the contrary, it is very likely that topics relating to FTC UMC rulemaking will continue to be timely and of interest to our community—we anticipate keeping the symposium running for the foreseeable future, and welcome submissions on an ongoing basis. Readers interested in these topics are encouraged to check in regularly for new posts, including by following the symposium page, the FTC UMC Rulemaking tag, or by subscribing to Truth on the Market for notifications of new posts.

All too frequently, vocal advocates for “Internet Freedom” imagine it exists along just a single dimension: the extent to which it permits individuals and firms to interact in new and unusual ways.

But that is not the sum of the Internet’s social value. The technologies that underlie our digital media remain a relatively new means to distribute content. It is not just the distributive technology that matters, but also the content that is distributed. Thus, the norms and laws that facilitate this interaction of content production and distribution are critical.

Sens. Patrick Leahy (D-Vt.) and Thom Tillis (R-N.C.)—the chair and ranking member, respectively, of the Senate Judiciary Committee’s Subcommittee on Intellectual Property—recently introduced legislation that would require online service providers (OSPs) to comply with a slightly heightened set of obligations to deter copyright piracy on their platforms. This couldn’t come at a better time.

S. 3880, the SMART Copyright Act, would amend Section 512 of the Copyright Act, originally enacted as part of the Digital Millennium Copyright Act of 1998. Section 512, among other things, provides safe harbor for OSPs for copyright infringements by their users. The expectation at the time was that OSPs would work voluntarily with rights holders to develop industry best practices to deal with pirated content, while also allowing the continued growth of the commercial Internet.

Alas, it has become increasingly apparent in the nearly quarter-century since the DMCA was passed that the law has not adequately kept pace with the technological capabilities of digital piracy. In April 2020 alone, U.S. consumers logged 725 million visits to pirate sites for movies and television programming. Close to 90% of those visits were attributable to illegal streaming services that use internet protocol television to distribute pirated content. Such services now serve more than 9 million U.S. subscribers and generate more than $1 billion in annual revenue.

Globally, there are more than 26.6 billion annual illicit views of U.S.-produced movies and 126.7 billion views of U.S.-produced television episodes. A report produced for the U.S. Chamber of Commerce by NERA Economic Consulting estimates the annual impact to the United States to be $30 to $70 billion in lost revenue, 230,000 to 560,000 lost jobs, and between $45 and $115 billion in lower GDP.

Thus far, the most effective preventive measures have been filtering solutions adopted by YouTube, Facebook, and Audible Magic, but neither filtering nor any other solution has been adopted industrywide. As the U.S. Copyright Office has observed:

Throughout the Study, the Office heard from participants that Congress’ intent to have multi-stakeholder consensus drive improvements to the system has not been borne out in practice. By way of example, more than twenty years after passage of the DMCA, although some individual OSPs have deployed DMCA+ systems that are primarily open to larger content owners, not a single technology has been designated a “standard technical measure” under section 512(i). While numerous potential reasons were cited for this failure— from a lack of incentives for ISPs to participate in standards to the inappropriateness of one-size-fits-all technologies—the end result is that few widely-available tools have been created and consistently implemented across the internet ecosystem. Similarly, while various voluntary initiatives have been undertaken by different market participants to address the volume of true piracy within the system, these initiatives, although initially promising, likewise have suffered from various shortcomings, from limited participation to ultimate ineffectiveness.

Given the lack of standard technical measures (STMs), the Leahy-Tillis bill would grant the Office of the Librarian of Congress (LOC) broad latitude to recommend STMs for everything from off-the-shelf software to open-source software to general technical strategies that can be applied to a wide variety of systems. This would include the power to initiate public rulemakings in which it could either propose new STMs or revise or rescind existing STMs. The STMs could be as broad or as narrow as the LOC deems appropriate, including being tailored to specific types of content and specific types of providers. Following rulemaking, subject firms would have at least one year to adopt a given STM.

Critically, the SMART Copyright Act would not hold OSPs liable for the infringing content itself, but for failure to make reasonable efforts to accommodate the STM (or for interference with the STM). Courts finding an OSP to have violated its obligation of good-faith compliance could award an injunction, damages, and costs.

The SMART Copyright Act is a directionally correct piece of legislation with two important caveats: it all depends on the kinds of STMs that the LOC recommends and on how a “violation” is determined for the purposes of awarding damages.

The law would magnify the incentive for private firms to work together with rights holders to develop STMs that more reasonably recruit OSPs into the fight against online piracy. In this sense, the LOC would be best situated as a convener, encouraging STMs to emerge from the broad group of OSPs and rights holders. The fact that the LOC would be able to adopt STMs with or without stakeholders’ participation should provide more incentive for collaboration among the relevant parties.

Short of a voluntary set of STMs, the LOC could nonetheless rely on the technical suggestions and concerns of the multistakeholder community to discern a minimum viable set of practices that constitute best efforts to control piracy. The least desirable outcome—and, I suspect, the one most susceptible to failure—would be for the LOC to examine and select specific technologies. If implemented sensibly, the SMART Copyright Act would create a mechanism to enforce the original goals of Section 512.

The damages provisions are likewise directionally correct but need more clarity. Repeat “violations” allow courts to multiply damages awards. But there is no definition of what counts as a “violation,” nor is there adequate clarity about how a “violation” interacts with damages. For example, is a single infringement on a platform a “violation” such that if three occur, the platform faces treble damages for all the infringements in a single case? That seems unlikely.

More reasonable would be to interpret the provision as saying that a final adjudication that the platform behaved unreasonably is what counts for the purposes of calculating whether damages are multiplied. Then, within each adjudication, damages are calculated for all infringements, up to the statutory damages cap. This interpretation would put teeth in the law, but it’s just one possible interpretation. Congress would need to ensure the final language is clear.
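To make that interpretation concrete, consider a toy calculation in which prior final adjudications multiply damages, while per-infringement awards within a single case are summed up to a cap. The dollar figures, the cap, and the multiplier below are invented for illustration; the bill itself leaves these terms undefined:

```python
# Illustrative damages arithmetic under the proposed interpretation.
def case_damages(per_infringement_awards, prior_adjudications,
                 statutory_cap=150_000, multiplier_per_adjudication=1):
    # Within a case: sum the awards for all infringements, up to the cap.
    base = min(sum(per_infringement_awards), statutory_cap)
    # Across cases: two prior adjudications yield a 3x (treble) multiplier.
    multiplier = 1 + multiplier_per_adjudication * prior_adjudications
    return base * multiplier

# First case against the platform: three infringements, no multiplier.
print(case_damages([10_000, 20_000, 5_000], prior_adjudications=0))  # 35000

# Third case: the same infringements, but damages are trebled.
print(case_damages([10_000, 20_000, 5_000], prior_adjudications=2))  # 105000
```

On this reading, the multiplier punishes repeated platform-level failures rather than compounding with every individual infringement, which avoids the runaway-damages problem noted above.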

An even better approach would be to make Section 512’s safe harbor contingent on an OSP’s reasonable compliance. Unreasonable behavior, in that case, provides a much more straightforward way to assess damages, without needing to leave it up to court interpretations about what counts as a “violation.” Particularly since courts have historically tended to interpret the DMCA in ways that are unfavorable to rights holders (e.g., “red flag” knowledge), it would be much better to create a simple standard here.

This is not to say there are no potential problems. Among the concerns that surround promulgating new STMs are potentially creating cybersecurity vulnerabilities, sources for privacy leaks, or accidentally chilling speech. Of course, it’s possible that there will be costs to implementing an STM, just as there are costs when private firms operate their own content-protection mechanisms. But just because harms can happen doesn’t mean they will happen, or that they are insurmountable when they do. The criticisms that have emerged have so far taken on the breathless quality of the empirically unfounded claims that 2012’s SOPA/PIPA legislation would spell doom for the Internet. If Section 512 reforms are well-calibrated and sufficiently flexible to adapt to the market realities, I think we can reasonably expect them to be, on net, beneficial.

Toward this end, the SMART Copyright Act contemplates, for each proposed STM, a public comment period and at least one meeting with relevant stakeholders, to allow time to understand its likely costs and benefits. This process would provide ample opportunities to alert the LOC to potential shortcomings.

But the criticisms do suggest a potentially valuable change to the bill’s structure. If a firm does indeed discover that a particular STM, in practice, leads to unacceptable security or privacy risks, or is systematically biased against lawful content, there should be a legal mechanism that would allow for good-faith compliance while also mitigating STMs’ unforeseen flaws. Ideally, this would involve working with the LOC in an iterative process to refine relevant compliance obligations.

Congress will soon be wrapped up in the volatile midterm elections, which could make it difficult for relatively low-salience issues like copyright to gain traction. Nonetheless, the Leahy-Tillis bill marks an important step toward addressing online piracy, and Congress should move deliberatively toward that goal.

In Fleites v. MindGeek—currently before the U.S. District Court for the Central District of California, Southern Division—plaintiffs seek to hold MindGeek subsidiary PornHub liable for alleged instances of human trafficking under the Racketeer Influenced and Corrupt Organizations (RICO) Act and the Trafficking Victims Protection Reauthorization Act (TVPRA). Writing for the International Center for Law & Economics (ICLE), we have filed a motion for leave to submit an amicus brief regarding whether it is valid to treat co-defendant Visa Inc. as a proper party under principles of collateral liability.

The proposed brief draws on our previous work on the law & economics of collateral liability, and argues that holding Visa liable as a participant under RICO or TVPRA would amount to stretching collateral liability far beyond what is reasonable. Such a move, we posit, would “generate a massive amount of social cost that would outweigh the potential deterrent or compensatory gains sought.”

Collateral liability can make sense when intermediaries are in a position to effectively monitor and control potential harms. That is, it can be appropriate to apply collateral liability to parties who are what is often referred to as a “least cost avoider.” As we write:

In some circumstances it is indeed proper to hold third parties liable even though they are not primary actors directly implicated in wrongdoing. Most significantly, such liability may be appropriate when a collateral actor stands in a relationship to the wrongdoing (or wrongdoers or victims) such that the threat of liability can incentivize it to take action (or refrain from taking action) to prevent or mitigate the wrongdoing. That is to say, collateral liability may be appropriate when the third party has a significant enough degree of control over the primary actors such that its actions can cause them to reduce the risk of harm at reasonable cost. Importantly, however, such liability is appropriate only when direct deterrence is insufficient and/or the third party can prevent harm at lower cost or more effectively than direct enforcement… From an economic perspective, liability should be imposed upon the party or parties best positioned to deter the harms in question, such that the costs of enforcement do not exceed the social gains realized.

The law of negligence under the common law, as well as contributory infringement under copyright law, both help illustrate this principle. Under the common law, collateral actors have a duty in only limited circumstances, when the harms are “reasonably foreseeable” and the actor has special access to particularized information about the victims or the perpetrators, as well as a special ability to control harmful conditions. Under copyright law, collateral liability is similarly limited to circumstances where collateral actors are best positioned to prevent the harm, and the benefits of holding such actors liable exceed the harms. 

Neither of these conditions holds in Fleites v. MindGeek: Visa is not the type of collateral actor that has access to specialized information or the ability to control actual bad actors. Visa, as a card-payment network, simply processes payments. The only tool at Visa’s disposal is a giant sledgehammer: it can foreclose all transactions to particular sites that run over its network. There is no dispute that the vast majority of content hosted on sites like MindGeek’s is lawful, however awful one may believe pornography to be. Holding card networks liable here would create incentives to avoid processing payments for such sites altogether in order to avoid legal consequences.

The potential costs of the theory of liability asserted here stretch far beyond Visa or this particular case. The plaintiffs’ theory would hold anyone liable who provides services that “allow[] the alleged principal actors to continue to do business.” This would mean that Federal Express, for example, would be liable for continuing to deliver packages to MindGeek’s address or that a waste-management company could be liable for providing custodial services to the building where MindGeek has an office. 

According to the plaintiffs, even the mere existence of a newspaper article alleging a company is doing something illegal is sufficient to find that professionals who have provided services to that company “participate” in a conspiracy. This would have ripple effects for professionals from many other industries—from accountants to bankers to insurance—who all would see significantly increased risk of liability.

To read the rest of the brief, see here.

Activists who railed against the Stop Online Piracy Act (SOPA) and the PROTECT IP Act (PIPA) a decade ago today celebrate the 10th anniversary of their day of protest, which they credit with sending the bills down to defeat.

Much of the anti-SOPA/PIPA campaign was based on a gauzy notion of “realizing [the] democratizing potential” of the Internet. Which is fine, until it isn’t.

But despite the activists’ temporary legislative victory, the methods of combating digital piracy that SOPA/PIPA contemplated have been employed successfully around the world. It may, indeed, be time for the United States to revisit that approach, as the very real problems the legislation sought to combat haven’t gone away.

From the perspective of rightsholders, the bill’s most important feature was also its most contentious: the ability to enforce judicial “site-blocking orders.” A site-blocking order is a type of remedy sometimes referred to as a no-fault injunction. Under SOPA/PIPA, a court would have been permitted to issue orders that could be used to force a range of firms—from financial providers to ISPs—to cease doing business with or suspend the service of a website that hosted infringing content.

Under current U.S. law, even when a court finds that a site has willfully engaged in infringement, stopping the infringement can be difficult, especially when the parties and their facilities are located outside the country. While Section 512 of the Digital Millennium Copyright Act does allow courts to issue injunctions, there is ambiguity as to whether it allows courts to issue injunctions that obligate online service providers (“OSPs”) not directly party to a case to remove infringing material.

Section 512(j), for instance, provides for issuing injunctions “against a service provider that is not subject to monetary remedies under this section.” The “not subject to monetary remedies under this section” language could be construed to mean that such injunctions may be obtained even against OSPs that have not been found at fault for the underlying infringement. But as Motion Picture Association President Stanford K. McCoy testified in 2020:

In more than twenty years … these provisions of the DMCA have never been deployed, presumably because of uncertainty about whether it is necessary to find fault against the service provider before an injunction could issue, unlike the clear no-fault injunctive remedies available in other countries.

But while no-fault injunctions for copyright infringement have not materialized in the United States, this remedy has been used widely around the world. In fact, more than 40 countries—including Denmark, Finland, France, India, and England and Wales—have enacted or are under some obligation to enact rules allowing for no-fault injunctions that direct ISPs to disable access to websites that predominantly promote copyright infringement.

In short, precisely the approach to controlling piracy that SOPA/PIPA envisioned has been in force around the world over the last decade. This demonstrates that, if properly tailored, no-fault injunctions are an ideal tool for courts to use in the fight to combat piracy.

If anything, we should be using the anniversary of SOPA/PIPA as an opportunity to reflect on a missed opportunity. Congress should take this opportunity to amend Section 512 to grant U.S. courts authority to issue no-fault injunctions that require OSPs to block access to sites that willfully engage in mass infringement.

Over the past decade and a half, virtually every branch of the federal government has taken steps to weaken the patent system. As reflected in President Joe Biden’s July 2021 executive order, these restraints on patent enforcement are now being coupled with antitrust policies that, in large part, adopt a “big is bad” approach in place of decades of economically grounded case law and agency guidelines.

This policy bundle is nothing new. It largely replicates the innovation policies pursued during the late New Deal and the postwar decades. That historical experience suggests that a “weak-patent/strong-antitrust” approach is likely to encourage neither innovation nor competition.

The Overlooked Shortfalls of New Deal Innovation Policy

Starting in the early 1930s, the U.S. Supreme Court issued a sequence of decisions that raised obstacles to patent enforcement. The Franklin Roosevelt administration sought to take this policy a step further, advocating compulsory licensing for all patents. While Congress did not adopt this proposal, it was partially implemented as a de facto matter through antitrust enforcement. Starting in the early 1940s and continuing throughout the postwar decades, the antitrust agencies secured judicial precedents that treated a broad range of licensing practices as per se illegal. Perhaps most dramatically, the U.S. Justice Department (DOJ) secured more than 100 compulsory licensing orders against some of the nation’s largest companies. 

The rationale behind these policies was straightforward. By compelling access to incumbents’ patented technologies, courts and regulators would lower barriers to entry and competition would intensify. The postwar economy declined to comply with policymakers’ expectations. Implementation of a weak-IP/strong-antitrust innovation policy over the course of four decades yielded the opposite of its intended outcome. 

Market concentration did not diminish, turnover in market leadership was slow, and private research and development (R&D) was confined mostly to the research labs of the largest corporations (who often relied on generous infusions of federal defense funding). These tendencies are illustrated by the dramatically unequal allocation of innovation capital in the postwar economy.  As of the late 1950s, small firms represented approximately 7% of all private U.S. R&D expenditures.  Two decades later, that figure had fallen even further. By the late 1970s, patenting rates had plunged, and entrepreneurship and innovation were in a state of widely lamented decline.

Why Weak IP Raises Entry Costs and Promotes Concentration

The decline in entrepreneurial innovation under a weak-IP regime was not accidental. Rather, this outcome can be derived logically from the economics of information markets.

Without secure IP rights to establish exclusivity, engage securely with business partners, and deter imitators, potential innovator-entrepreneurs had little hope to obtain funding from investors. In contrast, incumbents could fund R&D internally (or with federal funds that flowed mostly to the largest computing, communications, and aerospace firms) and, even under a weak-IP regime, were protected by difficult-to-match production and distribution efficiencies. As a result, R&D mostly took place inside the closed ecosystems maintained by incumbents such as AT&T, IBM, and GE.

Paradoxically, the antitrust campaign against patent “monopolies” most likely raised entry barriers and promoted industry concentration by removing a critical tool that smaller firms might have used to challenge incumbents that could outperform on every competitive parameter except innovation. While the large corporate labs of the postwar era are rightly credited with technological breakthroughs, incumbents such as AT&T were often slow in transforming breakthroughs in basic research into commercially viable products and services for consumers. Without an immediate competitive threat, there was no rush to do so. 

Back to the Future: Innovation Policy in the New New Deal

Policymakers are now at work reassembling almost the exact same policy bundle that ended in the innovation malaise of the 1970s, accompanied by a similar reliance on public R&D funding disbursed through administrative processes. However well-intentioned, these processes are inherently exposed to political distortions that are absent in an innovation environment that relies mostly on private R&D funding governed by price signals. 

This policy bundle has emerged incrementally since approximately the mid-2000s, through a sequence of complementary actions by every branch of the federal government.

  • In 2011, Congress enacted the America Invents Act, which enables any party to challenge the validity of an issued patent through the U.S. Patent and Trademark Office’s (USPTO) Patent Trial and Appeals Board (PTAB). Since PTAB’s establishment, large information-technology companies that advocated for the act have been among the leading challengers.
  • In May 2021, the Office of the U.S. Trade Representative (USTR) declared its support for a worldwide suspension of IP protections over Covid-19-related innovations (rather than adopting the more nuanced approach of preserving patent protections and expanding funding to accelerate vaccine distribution).  
  • President Biden’s July 2021 executive order states that “the Attorney General and the Secretary of Commerce are encouraged to consider whether to revise their position on the intersection of the intellectual property and antitrust laws, including by considering whether to revise the Policy Statement on Remedies for Standard-Essential Patents Subject to Voluntary F/RAND Commitments.” This suggests that the administration has already determined to retract or significantly modify the 2019 joint policy statement in which the DOJ, USPTO, and the National Institute of Standards and Technology (NIST) had rejected the view that standard-essential patent owners posed a high risk of patent holdup, which would therefore justify special limitations on enforcement and licensing activities.

The history of U.S. technology markets and policies casts great doubt on the wisdom of this weak-IP policy trajectory. The repeated devaluation of IP rights is likely to be a “lose-lose” approach that does little to promote competition, while endangering the incentive and transactional structures that sustain robust innovation ecosystems. A weak-IP regime is particularly likely to disadvantage smaller firms in biotech, medical devices, and certain information-technology segments that rely on patents to secure funding from venture capital and to partner with larger firms that can accelerate progress toward market release. The BioNTech/Pfizer alliance in the production and distribution of a Covid-19 vaccine illustrates how patents can enable such partnerships to accelerate market release.  

The innovative contribution of BioNTech is hardly a one-off occurrence. The restoration of robust patent protection in the early 1980s was followed by a sharp increase in the percentage of private R&D expenditures attributable to small firms, which jumped from about 5% as of 1980 to 21% by 1992. This contrasts sharply with the unequal allocation of R&D activities during the postwar period.

Remarkably, the resurgence of small-firm innovation following the strong-IP policy shift, starting in the late 20th century, mimics tendencies observed during the late 19th and early-20th centuries, when U.S. courts provided a hospitable venue for patent enforcement; there were few antitrust constraints on licensing activities; and innovation was often led by small firms in partnership with outside investors. This historical pattern, encompassing more than a century of U.S. technology markets, strongly suggests that strengthening IP rights tends to yield a policy “win-win” that bolsters both innovative and competitive intensity. 

An Alternate Path: ‘Bottom-Up’ Innovation Policy

To be clear, the alternative to the policy bundle of weak-IP/strong antitrust does not consist of a simple reversion to blind enforcement of patents and lax administration of the antitrust laws. A nuanced innovation policy would couple modern antitrust’s commitment to evidence-based enforcement—which, in particular cases, supports vigorous intervention—with a renewed commitment to protecting IP rights for innovator-entrepreneurs. That would promote competition from the “bottom up” by bolstering maverick innovators who are well-positioned to challenge (or sometimes partner with) incumbents and maintaining the self-starting engine of creative disruption that has repeatedly driven entrepreneurial innovation environments. Tellingly, technology incumbents have often been among the leading advocates for limiting patent and copyright protections.  

Advocates of a weak-patent/strong-antitrust policy believe it will enhance competitive and innovative intensity in technology markets. History suggests that this combination is likely to produce the opposite outcome.  

Jonathan M. Barnett is the Torrey H. Webb Professor of Law at the University of Southern California, Gould School of Law. This post is based on the author’s recent publications, Innovators, Firms, and Markets: The Organizational Logic of Intellectual Property (Oxford University Press 2021) and “The Great Patent Grab,” in Battles Over Patents: History and the Politics of Innovation (eds. Stephen H. Haber and Naomi R. Lamoreaux, Oxford University Press 2021).

In recent years, a diverse cross-section of advocates and politicians have leveled criticisms at Section 230 of the Communications Decency Act and its grant of legal immunity to interactive computer services. Proposed legislative changes to the law have been put forward by both Republicans and Democrats.

It remains unclear whether Congress (or the courts) will amend Section 230, but any changes are bound to expand the scope, uncertainty, and expense of content risks. That’s why it’s important that such changes be developed and implemented in ways that minimize their potential to significantly disrupt and harm online activity. This piece focuses on those insurable content risks that most frequently result in litigation and considers the effect of the direct and indirect costs caused by frivolous suits and lawfare, not just the ultimate potential for a court to find liability. The experience of the 1980s asbestos-litigation crisis offers a warning of what could go wrong.

Enacted in 1996, Section 230 was intended to promote the Internet as a diverse medium for discourse, cultural development, and intellectual activity by shielding interactive computer services from legal liability when blocking or filtering access to obscene, harassing, or otherwise objectionable content. Absent such immunity, a platform hosting content produced by third parties could be held equally responsible as the creator for claims alleging defamation or invasion of privacy.

In the current legislative debates, Section 230’s critics on the left argue that the law does not go far enough to combat hate speech and misinformation. Critics on the right claim the law protects censorship of dissenting opinions. Legal challenges to the current wording of Section 230 arise primarily from what constitutes an “interactive computer service,” “good faith” restriction of content, and the grant of legal immunity, regardless of whether the restricted material is constitutionally protected. 

While Congress and various stakeholders debate alternative statutory frameworks, several test cases are simultaneously working their way through the judicial system, and some states have either passed or are considering legislation to address complaints with Section 230. Some have suggested passing new federal legislation classifying online platforms as common carriers as an alternate approach that does not involve amending or repealing Section 230. Regardless of the form it may take, any change to the status quo is likely to increase the risk of litigation and liability for those hosting or publishing third-party content.

The Nature of Content Risk

The class of individuals and organizations exposed to content risk has never been broader. Any information, content, or communication that is created, gathered, compiled, or amended can be considered “material” which, when disseminated to third parties, may be deemed “publishing.” Liability can arise from any step in that process. Those who republish material are generally held to the same standard of liability as if they were the original publisher. (See, e.g., Rest. (2d) of Torts § 578 with respect to defamation.)

Digitization has simultaneously reduced the cost and expertise required to publish material and increased the potential reach of that material. Where it was once limited to books, newspapers, and periodicals, “publishing” now encompasses such activities as creating and updating a website; creating a podcast or blog post; or even posting to social media. Much of this activity is performed by individuals and businesses who have only limited experience with the legal risks associated with publishing.

This is especially true regarding the use of third-party material, which both sophisticated and unsophisticated platforms rely on extensively. Platforms that host third-party-generated content—e.g., social media or websites with comment sections—have historically engaged in only limited vetting of that content, although this is changing. Add the potential to reach consumers far beyond the original platform and target audience; lasting digital traces that are difficult to identify and remove; and the need to comply with privacy and other statutory requirements, and the potential for all manner of “publishers” to incur legal liability has never been higher.

Even sophisticated legacy publishers struggle with managing the litigation that arises from these risks. There are a limited number of specialist counsel, which results in higher hourly rates. Oversight of legal bills is not always effective, as internal counsel often have limited resources to manage their daily responsibilities and litigation. As a result, legal fees often make up as much as two-thirds of the average claims cost. Accordingly, defense spending and litigation management are indirect, but important, risks associated with content claims.

Effective risk management is any publisher’s first line of defense. The type and complexity of content risk management varies significantly by organization, based on its size, resources, activities, risk appetite, and sophistication. Traditional publishers typically have a formal set of editorial guidelines specifying policies governing the creation of content, pre-publication review, editorial-approval authority, and referral to internal and external legal counsel. They often maintain a library of standardized contracts, along with processes to periodically review and update those wordings and to verify the validity of a potential licensor’s rights. Most have formal controls to respond to complaints and to retraction/takedown requests.

Insuring Content Risks

Insurance is integral to most publishers’ risk-management plans. Content coverage is present, to some degree, in most general liability policies (i.e., for “advertising liability”). Specialized coverage—commonly referred to as “media” or “media E&O”—is available on a standalone basis or may be packaged with cyber-liability coverage. Terms of specialized coverage can vary significantly, but generally provide at least basic coverage for the three primary content risks of defamation, copyright infringement, and invasion of privacy.

Insureds typically retain the first dollar of loss up to a specific threshold. They may also retain a coinsurance percentage of every dollar thereafter in partnership with their insurer. For example, an insured may be responsible for the first $25,000 of loss and for 10% of loss above that threshold. Such coinsurance structures often are used by insurers as a non-monetary tool to help control legal spending and to incentivize an organization to employ effective oversight of counsel’s billing practices.

The type and amount of loss retained will depend on the insured’s size, resources, risk profile, risk appetite, and insurance budget. Generally, but not always, increases in an insured’s retention or an insurer’s attachment (e.g., raising the threshold to $50,000, or raising the insured’s coinsurance to 15%) will result in lower premiums. Most insureds will seek the smallest retention feasible within their budget. 
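The retention-plus-coinsurance arithmetic described above can be sketched in a few lines. This is a simplified illustration using the hypothetical figures from the example (a $25,000 retention and 10% coinsurance, plus an assumed $1 million policy limit); real policy terms vary considerably.

```python
# Hypothetical sketch of a retention-plus-coinsurance structure.
# The dollar figures are illustrative examples, not market terms.

def split_loss(loss, retention=25_000, coinsurance=0.10, limit=1_000_000):
    """Return (insured_share, insurer_share) for a single covered loss.

    The insured pays the full retention first, then the coinsurance
    percentage of every dollar above it; the insurer pays the rest,
    capped at the policy limit.
    """
    excess = max(loss - retention, 0)
    insurer_share = min(excess * (1 - coinsurance), limit)
    insured_share = loss - insurer_share
    return insured_share, insurer_share

# A $100,000 claim: the insured pays $25,000 plus 10% of the remaining
# $75,000 ($7,500), for $32,500 total; the insurer pays $67,500.
print(split_loss(100_000))  # (32500.0, 67500.0)
```

Note how raising the retention or the coinsurance percentage shifts loss back onto the insured, which is why those changes generally translate into lower premiums.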

Contract limits (the maximum coverage payout available) will vary based on the same factors. Larger policyholders often build a “tower” of insurance made up of multiple layers of the same or similar coverage issued by different insurers. Two or more insurers may partner on the same “quota share” layer and split any loss incurred within that layer on a pre-agreed proportional basis.  
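A tower with a quota-share layer can be illustrated the same way. The layer boundaries and insurer shares below are hypothetical, and real programs are considerably more complex:

```python
# Hypothetical sketch of an insurance "tower": stacked layers of coverage,
# one of which is shared by two insurers on a quota-share basis.

def allocate_to_tower(loss, layers):
    """Allocate a loss up a tower of coverage layers.

    `layers` is a list of (attachment, limit, {insurer: share}) tuples,
    ordered from the ground up. Each insurer in a quota-share layer pays
    its pre-agreed proportion of whatever loss falls within that layer.
    """
    payouts = {}
    for attachment, limit, shares in layers:
        in_layer = min(max(loss - attachment, 0), limit)
        for insurer, share in shares.items():
            payouts[insurer] = payouts.get(insurer, 0) + in_layer * share
    return payouts

tower = [
    (0, 1_000_000, {"Insurer A": 1.0}),  # primary layer
    (1_000_000, 2_000_000, {"Insurer B": 0.6, "Insurer C": 0.4}),  # quota-share excess layer
]

# A $2.5M loss: A pays its full $1M primary limit; B and C split the
# $1.5M that falls in the excess layer 60/40.
print(allocate_to_tower(2_500_000, tower))
```

A gap between layers (say, an excess layer attaching above where the layer below exhausts) would leave the policyholder self-insured for the loss falling in between, which is the coverage-gap risk the surrounding text describes.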

Navigating the strategic choices involved in developing an insurance program can be complex, depending on an organization’s risks. Policyholders often use commercial brokers to aid them in developing an appropriate risk-management and insurance strategy that maximizes coverage within their budget and to assist with claims recoveries. This is particularly important for small and mid-sized insureds who may lack the sophistication or budget of larger organizations. Policyholders and brokers try to minimize the gaps in coverage between layers and among quota-share participants, but such gaps can occur, leaving a policyholder partially self-insured.

An organization’s options to insure its content risk may also be influenced by the dynamics of the overall insurance market or within specific content lines. Underwriting is a challenging responsibility that requires a degree of prediction, and not all underwriters are equally adept; some may fail to adequately identify and account for certain risks. It can also be challenging to accurately measure risk aggregation and set appropriate reserves. An insurer’s appetite for certain lines and the availability of supporting reinsurance can fluctuate based on trends in the general capital markets. Specialty media/content coverage is a small niche within the global commercial insurance market, which makes insurers in this line more sensitive to these general trends.

Litigation Risks from Changes to Section 230

A full repeal or judicial invalidation of Section 230 generally would make every platform responsible for all the content it disseminates, regardless of who created the material, requiring at least some additional editorial review. This would significantly disadvantage those platforms that host a significant volume of third-party content. Internet service providers, cable companies, social media, and product/service review companies would be put under tremendous strain, given the daily volume of content produced. To reduce the risk that they serve as a “deep pocket” target for plaintiffs, they would likely adopt more robust pre-publication screening of content and authorized third parties; limit public interfaces; require registration before a user may publish content; employ more reactive complaint response/takedown policies; and ban problem users more frequently. Small and mid-sized enterprises (SMEs), as well as those not focused primarily on the business of publishing, would likely avoid many interactive functions altogether.

A full repeal would be, in many ways, a blunderbuss approach to dealing with criticisms of Section 230, and would cause as many problems as it solves, or more. In the current polarized environment, it also appears unlikely that Congress will reach bipartisan agreement on amended language for Section 230, or to classify interactive computer services as common carriers, given that the changes desired by the political left and right are so divergent. What may be more likely is that courts encounter a test case that prompts them to clarify the application of the existing statutory language—i.e., whether an entity was acting as a neutral platform or a content creator, whether its conduct was in “good faith,” and whether the material is “objectionable” within the meaning of the statute.

A relatively greater frequency of litigation is almost inevitable in the wake of any changes to the status quo, whether made by Congress or the courts. Major litigation would likely focus on those social-media platforms at the center of the Section 230 controversy, such as Facebook and Twitter, given their active role in these issues, deep pockets and, potentially, various admissions against interest helpful to plaintiffs regarding their level of editorial judgment. SMEs could also be affected in the immediate wake of a change to the statute or its interpretation. While SMEs are likely to be implicated on a smaller scale, the impact of litigation could be even more damaging to their viability if they are not adequately insured.

Over time, the boundaries of an amended Section 230’s application and any consequential effects should become clearer as courts develop application criteria and precedent is established for different fact patterns. Exposed platforms will likely make changes to their activities and risk-management strategies consistent with such developments. Operationally, some interactive features—such as comment sections or product and service reviews—may become less common.

In the short and medium term, however, a period of increased and unforeseen litigation to resolve these issues is likely to prove expensive and damaging. Insurers of content risks are likely to bear the brunt of any changes to Section 230, because these risks and their financial costs would be new, uncertain, and not incorporated into historical pricing of content risk. 

Remembering the Asbestos Crisis

The introduction of a new exposure or legal risk can have significant financial effects on commercial insurance carriers. New and revised risks must be accounted for in the assumptions, probabilities, and load factors used in insurance pricing and reserving models. Even small changes in those values can have large aggregate effects, which may undermine confidence in those models, complicate obtaining reinsurance, or harm an insurer’s overall financial health.

For example, in the 1980s, certain courts adopted the triple-trigger and continuous-trigger methods[1] of determining when a policyholder could access coverage under an “occurrence” policy for asbestos claims. As a result, insurers paid claims under policies dating back to the early 1900s and, in some cases, under all policies from that date until the date of the claim. Such policies were written when mesothelioma related to asbestos was unknown and not incorporated into the policy pricing.

Insurers had long since released reserves from the decades-old policy years, so those resources were not available to pay claims. Nor could underwriters retroactively increase premiums for the intervening years and smooth out the cost of these claims. This created extreme financial stress for impacted insurers and reinsurers, with some ultimately rendered insolvent. Surviving carriers responded by drastically reducing coverage and increasing prices, which resulted in a major capacity shortage that resolved only after the creation of the Bermuda insurance and reinsurance market.

The asbestos-related liability crisis represented a perfect storm that is unlikely to be replicated. Given the ubiquitous nature of digital content, however, any drastic or misconceived changes to Section 230 protections could still cause significant disruption to the commercial insurance market. 

Content risk is covered, at least in part, by general liability and many cyber policies, but it is not currently a primary focus for underwriters. Specialty media underwriters are more likely to be monitoring Section 230 risk, but the highly competitive market will make it difficult for them to respond to any changes with significant price increases. In addition, the U.S. property and casualty insurance market generally is in the midst of correcting for years of inadequate pricing, expanding coverage, developing exposures, and claims inflation. It would be extremely difficult to charge an adequate premium increase if the potential severity of content risk were to increase suddenly.

In the face of such risk uncertainty and challenges to adequately increasing premiums, underwriters would likely seek to reduce their exposure to online content risks, e.g., by reducing the scope of coverage, reducing limits, and increasing retentions. How these changes would play out, and the pain they would cause for all involved, would likely depend on how quickly policyholders’ risk profiles change. 

Small or specialty carriers caught unprepared could be forced to exit the market if they experienced a sharp spike in claims or unexpected increase in needed reserves. Larger, multiline carriers may respond by voluntarily reducing or withdrawing their participation in this space. Insurers exposed to ancillary content risk may simply exclude it from cover if adequate price increases are impractical. Such reactions could result in content coverage becoming harder to obtain or unavailable altogether. This, in turn, would incentivize organizations to limit or avoid certain digital activities.

Finding a More Thoughtful Approach

The tension between calls for reform of Section 230 and the potential for disrupting online activity does not mean that political leaders and courts should ignore these issues. Rather, it means that any changes require a thoughtful, clear, and predictable approach, with the goal of maximizing the clarity of the changes and their application and minimizing any resulting litigation. Regardless of whether accomplished through legislation or the judicial process, addressing the following issues could minimize the duration and severity of any period of harmful disruption regarding content risk:

  1. Presumptive immunity – Including an express statement in the definition of “interactive computer service,” or inferring one judicially, to clarify that platforms hosting third-party content enjoy a rebuttable presumption that statutory immunity applies would discourage frivolous litigation as courts establish precedent defining the applicability of any other revisions. 
  2. Specify the grounds for losing immunity – Clarify, at a minimum, what constitutes “good faith” with respect to content restrictions and further clarify what material is or is not “objectionable,” as it relates to newsworthy content or actions that trigger loss of immunity.
  3. Specify the scope and duration of any loss of immunity – Clarify whether the loss of immunity is total, categorical, or specific to the situation under review, and the duration of that loss of immunity, if applicable.
  4. Reinstatement of immunity, subject to burden-shifting – Clarify what a platform must do to reinstate statutory immunity on a go-forward basis, and clarify that the platform bears the burden of proving its go-forward conduct entitles it to statutory protection.
  5. Address associated issues – Any clarification or interpretation should address other issues likely to arise, such as the effect and weight to be given to a platform’s application of its community standards, adherence to neutral takedown/complaint procedures, etc. Care should be taken to avoid overcorrecting and creating a “heckler’s veto.” 
  6. Deferred effect – If change is made legislatively, the effective date should be deferred for a reasonable time to allow platforms sufficient opportunity to adjust their current risk-management policies, contractual arrangements, content publishing and storage practices, and insurance arrangements in a thoughtful, orderly fashion that accounts for the new rules.

Ultimately, legislative and judicial stakeholders will chart their own course to address the widespread dissatisfaction with Section 230. More important than any of these specific policy suggestions is the principle that underpins them: that any changes incorporate due consideration for the potential direct and downstream harm that can be caused if policy is not clear, comprehensive, and designed to minimize unnecessary litigation. 

It is no surprise that, in the years since Section 230 of the Communications Decency Act was passed, the environment and risks associated with digital platforms have evolved, or that those changes have created a certain amount of friction in the law’s application. Policymakers should employ a holistic approach when evaluating their legislative and judicial options to revise or clarify the application of Section 230. Doing so in a targeted, predictable fashion should help to mitigate or avoid the risk of increased litigation and other unintended consequences that might otherwise prove harmful to online platforms and the commercial insurance market.

Aaron Tilley is a senior insurance executive with more than 16 years of commercial insurance experience in executive management, underwriting, legal, and claims, working in or with the U.S., Bermuda, and London markets. He has served as chief underwriting officer of a specialty media E&O and cyber-liability insurer and as coverage counsel representing international insurers with respect to a variety of E&O and advertising liability claims.

[1] The triple-trigger method allowed a policy to be accessed based on the date of the injury-in-fact, manifestation of injury, or exposure to substances known to cause injury. The continuous trigger allowed all policies issued by an insurer, not just one, to be accessed if a triggering event could be established during the policy period.

Policy discussions about the use of personal data often have “less is more” as a background assumption: that data is overconsumed relative to some hypothetical optimal baseline. This overriding skepticism has been the backdrop for sweeping new privacy regulations, such as the California Consumer Privacy Act (CCPA) and the EU’s General Data Protection Regulation (GDPR).

More recently, as part of the broad pushback against data collection by online firms, some have begun to call for creating property rights in consumers’ personal data or for data to be treated as labor. Prominent backers of the idea include New York City mayoral candidate Andrew Yang and computer scientist Jaron Lanier.

The discussion has escaped the halls of academia and made its way into popular media. During a recent discussion with Tesla founder Elon Musk, comedian and podcast host Joe Rogan argued that Facebook is “one gigantic information-gathering business that’s decided to take all of the data that people didn’t know was valuable and sell it and make f***ing billions of dollars.” Musk appeared to agree.

The animosity exhibited toward data collection might come as a surprise to anyone who has taken Econ 101. Goods ideally end up with those who value them most. A firm finding profitable ways to repurpose unwanted scraps is just the efficient reallocation of resources. This applies as much to personal data as to literal trash.

Unfortunately, in the policy sphere, few are willing to recognize the inherent trade-off between the value of privacy, on the one hand, and the value of various goods and services that rely on consumer data, on the other. Ideally, policymakers would look to markets to find the right balance, which they often can. When the transfer of data is hardwired into an underlying transaction, parties have ample room to bargain.

But this is not always possible. In some cases, transaction costs will prevent parties from bargaining over the use of data. The question is whether such situations are so widespread as to justify the creation of data property rights, with all of the allocative inefficiencies they entail. Critics wrongly assume the solution is both to create data property rights and to allocate them to consumers. But there is no evidence to suggest that, at the margin, heightened user privacy necessarily outweighs the social benefits that new data-reliant goods and services would generate. Recent experience in the worlds of personalized medicine and the fight against COVID-19 help to illustrate this point.

Data Property Rights and Personalized Medicine

The world is on the cusp of a revolution in personalized medicine. Advances such as the improved identification of biomarkers, CRISPR genome editing, and machine learning could usher in a new wave of treatments that markedly improve health outcomes.

Personalized medicine uses information about a person’s own genes or proteins to prevent, diagnose, or treat disease. Genetic-testing companies like 23andMe or Family Tree DNA, with the large troves of genetic information they collect, could play a significant role in helping the scientific community to further medical progress in this area.

However, despite the obvious potential of personalized medicine, many of its real-world applications are still very much hypothetical. While governments could act in any number of ways to accelerate the movement’s progress, recent policy debates have instead focused more on whether to create a system of property rights covering personal genetic data.

Some raise concerns that it is pharmaceutical companies, not consumers, who will reap the monetary benefits of the personalized medicine revolution, and that advances are achieved at the expense of consumers’ and patients’ privacy. They contend that data property rights would ensure that patients earn their “fair” share of personalized medicine’s future profits.

But it’s worth examining the other side of the coin. There are few things people value more than their health. U.S. government agencies place the value of a statistical life at somewhere between $1 million and $10 million. The commonly used quality-adjusted life year (QALY) metric offers valuations that range from $50,000 to upward of $300,000 per incremental year of life.

It therefore follows that the trivial sums users of genetic-testing kits might derive from a system of data property rights would likely be dwarfed by the value they would enjoy from improved medical treatments. A strong case can be made that policymakers should prioritize advancing the emergence of new treatments, rather than attempting to ensure that consumers share in the profits generated by those potential advances.
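A back-of-the-envelope comparison makes the asymmetry vivid. The QALY valuation range is taken from the figures cited above; the per-user royalty and the size of the health improvement are hypothetical numbers chosen purely for illustration:

```python
# Back-of-the-envelope comparison (hypothetical figures): a plausible per-user
# data royalty vs. the cited value of even a modest health improvement.

# Dollar value per quality-adjusted life year (range cited above).
QALY_VALUE_LOW, QALY_VALUE_HIGH = 50_000, 300_000

# Suppose a data-rights scheme paid each test-kit user a royalty of $20/year
# (a hypothetical figure for illustration).
annual_data_royalty = 20

# Suppose pooled genetic data enabled treatments that add even one extra
# quality-adjusted month of life over a user's lifetime.
extra_qaly = 1 / 12

benefit_low = QALY_VALUE_LOW * extra_qaly    # value at the low end of the range
benefit_high = QALY_VALUE_HIGH * extra_qaly  # value at the high end

print(f"Hypothetical data royalty: ${annual_data_royalty}/year")
print(f"Value of one extra quality-adjusted month: "
      f"${benefit_low:,.0f} to ${benefit_high:,.0f}")
```

Under these assumptions, a single extra quality-adjusted month is worth on the order of a hundred times the annual royalty, which is the intuition behind prioritizing new treatments over profit-sharing.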

These debates drew increased attention last year, when 23andMe signed a strategic agreement with the pharmaceutical company Almirall to license the rights related to an antibody Almirall had developed. Critics pointed out that 23andMe’s customers, whose data had presumably been used to discover the potential treatment, received no monetary benefits from the deal. Journalist Laura Spinney wrote in The Guardian newspaper:

23andMe, for example, asks its customers to waive all claims to a share of the profits arising from such research. But given those profits could be substantial—as evidenced by the interest of big pharma—shouldn’t the company be paying us for our data, rather than charging us to be tested?

In the deal’s wake, some argued that personal health data should be covered by property rights. A cardiologist quoted in Fortune magazine opined: “I strongly believe that everyone should own their medical data—and they have a right to that.” But this strong belief, however widely shared, ignores important lessons that law and economics has to teach about property rights and the role of contractual freedom.

Why Do We Have Property Rights?

Among the many important features of property rights is that they create “excludability,” the ability of economic agents to prevent third parties from using a given item. In the words of law professor Richard Epstein:

[P]roperty is not an individual conception, but is at root a social conception. The social conception is fairly and accurately portrayed, not by what it is I can do with the thing in question, but by who it is that I am entitled to exclude by virtue of my right. Possession becomes exclusive possession against the rest of the world…

Excludability helps to facilitate the trade of goods, offers incentives to create those goods in the first place, and promotes specialization throughout the economy. In short, property rights create a system of exclusion that supports creating and maintaining valuable goods, services, and ideas.

But property rights are not without drawbacks. Physical or intellectual property can lead to a suboptimal allocation of resources, namely market power (though this effect is often outweighed by increased ex ante incentives to create and innovate). Similarly, property rights can give rise to thickets that significantly increase the cost of amassing complementary pieces of property. Often cited are the historic (but contested) examples of tolling on the Rhine River or the airplane patent thicket of the early 20th century. Finally, strong property rights might also lead to holdout behavior, which can be addressed through top-down tools, like eminent domain, or private mechanisms, like contingent contracts.

In short, though property rights—whether they cover physical or information goods—can offer vast benefits, there are cases where they might be counterproductive. This is probably why, throughout history, property laws have evolved to achieve a reasonable balance between incentives to create goods and to ensure their efficient allocation and use.

Personal Health Data: What Are We Trying to Incentivize?

There are at least three critical questions we should ask about proposals to create property rights over personal health data.

  1. What goods or behaviors would these rights incentivize or disincentivize that are currently over- or undersupplied by the market?
  2. Are goods over- or undersupplied because of insufficient excludability?
  3. Could these rights undermine the efficient use of personal health data?

Much of the current debate centers on data obtained from direct-to-consumer genetic-testing kits. In this context, almost by definition, firms only obtain consumers’ genetic data with their consent. In western democracies, the rights to bodily integrity and to privacy generally make it illegal to administer genetic tests against a consumer or patient’s will. This makes genetic information naturally excludable, so consumers already benefit from what is effectively a property right.

When consumers decide to use a genetic-testing kit, the terms set by the testing firm generally stipulate how their personal data will be used. 23andMe has a detailed policy to this effect, as does Family Tree DNA. In the case of 23andMe, consumers can decide whether their personal information can be used for the purpose of scientific research:

You have the choice to participate in 23andMe Research by providing your consent. … 23andMe Research may study a specific group or population, identify potential areas or targets for therapeutics development, conduct or support the development of drugs, diagnostics or devices to diagnose, predict or treat medical or other health conditions, work with public, private and/or nonprofit entities on genetic research initiatives, or otherwise create, commercialize, and apply this new knowledge to improve health care.

Because this transfer of personal information is hardwired into the provision of genetic-testing services, there is space for contractual bargaining over the allocation of this information. The right to use personal health data will go toward the party that values it most, especially if information asymmetries are weeded out by existing regulations or business practices.

Regardless of data property rights, consumers have a choice: they can purchase genetic-testing services and agree to the provider’s data policy, or they can forgo the services. The service provider cannot obtain the data without entering into an agreement with the consumer. While competition between providers will affect parties’ bargaining positions, and thus the price and terms on which these services are provided, data property rights likely will not.

So, why do consumers transfer control over their genetic data? The main reason is that genetic information is inaccessible and worthless without the addition of genetic-testing services. Consumers must pass through the bottleneck of genetic testing for their genetic data to be revealed and transformed into usable information. It therefore makes sense to transfer the information to the service provider, who is in a much stronger position to draw insights from it. From the consumer’s perspective, the data is not even truly “transferred,” as the consumer had no access to it before the genetic-testing service revealed it. The value of this genetic information is then netted out in the price consumers pay for testing kits.

If personal health data were undersupplied by consumers and patients, testing firms could sweeten the deal and offer them more in return for their data. U.S. copyright law covers original compilations of data, while EU law gives 15 years of exclusive protection to the creators of original databases. Legal protections for trade secrets could also play some role. Thus, firms have some incentives to amass valuable health datasets.

But some critics argue that health data is, in fact, oversupplied. Generally, such arguments assert that agents do not account for the negative privacy externalities suffered by third parties, such as adverse-selection problems in insurance markets. For example, Jay Pil Choi, Doh Shin Jeon, and Byung Cheol Kim argue:

Genetic tests are another example of privacy concerns due to informational externalities. Researchers have found that some subjects’ genetic information can be used to make predictions of others’ genetic disposition among the same racial or ethnic category.  … Because of practical concerns about privacy and/or invidious discrimination based on genetic information, the U.S. federal government has prohibited insurance companies and employers from any misuse of information from genetic tests under the Genetic Information Nondiscrimination Act (GINA).

But if these externalities exist (most of the examples cited by scholars are hypothetical), they are likely dwarfed by the tremendous benefits that could flow from the use of personal health data. Put differently, the assertion that “excessive” data collection may create privacy harms should be weighed against the possibility that the same collection may also lead to socially valuable goods and services that produce positive externalities.

In any case, data property rights would do little to limit these potential negative externalities. Consumers and patients are already free to agree to terms that allow or prevent their data from being resold to insurers. It is not clear how data property rights would alter the picture.

Proponents of data property rights often claim they should be associated with some form of collective bargaining. The idea is that consumers might otherwise fail to receive their “fair share” of genetic-testing firms’ revenue. But what critics portray as asymmetric bargaining power might simply be the market signaling that genetic-testing services are in high demand, with room for competitors to enter the market. Shifting rents from genetic-testing services to consumers would undermine this valuable price signal and, ultimately, diminish the quality of the services.

Perhaps more importantly, to the extent that they limit the supply of genetic information—for example, because firms are forced to pay higher prices for data and thus acquire less of it—data property rights might hinder the emergence of new treatments. If genetic data is a key input to develop personalized medicines, adopting policies that, in effect, ration the supply of that data is likely misguided.

Even if policymakers do not directly put their thumb on the scale, data property rights could still harm pharmaceutical innovation. If existing privacy regulations are any guide—notably, the previously mentioned GDPR and CCPA, as well as the federal Health Insurance Portability and Accountability Act (HIPAA)—such rights might increase red tape for pharmaceutical innovators. Privacy regulations routinely limit firms’ ability to put collected data to new and previously unforeseen uses. They also limit parties’ contractual freedom when it comes to gathering consumers’ consent.

At the margin, data property rights would make it more costly for firms to amass socially valuable datasets. This would effectively move the personalized medicine space further away from a world of permissionless innovation, thus slowing down medical progress.

In short, there is little reason to believe health-care data is misallocated. Proposals to reallocate rights to such data based on idiosyncratic distributional preferences threaten to stifle innovation in the name of privacy harms that remain mostly hypothetical.

Data Property Rights and COVID-19

The trade-off between users’ privacy and the efficient use of data also has important implications for the fight against COVID-19. Since the beginning of the pandemic, several promising initiatives have been thwarted by privacy regulations and concerns about the use of personal data. This has potentially prevented policymakers, firms, and consumers from putting information to its optimal social use. High-profile examples include contact-tracing apps and vaccine “green passes.”

Each of these cases may involve genuine privacy risks. But to the extent that they do, those risks must be balanced against the potential benefits to society. If privacy concerns prevent us from deploying contact tracing or green passes at scale, we should question whether the privacy benefits are worth the cost. The same is true for rules that prohibit amassing more data than is strictly necessary, as is required by data-minimization obligations included in regulations such as the GDPR.

If our initial question was instead whether the benefits of a given data-collection scheme outweighed its potential costs to privacy, incentives could be set such that competition between firms would reduce the amount of data collected—at least, where minimized data collection is, indeed, valuable to users. Yet these considerations are almost completely absent in the COVID-19-related privacy debates, as they are in the broader privacy debate. Against this backdrop, the case for personal data property rights is dubious.


The key question is whether policymakers should make it easier or harder for firms and public bodies to amass large sets of personal data. This requires asking whether personal data is currently under- or over-provided, and whether the additional excludability that would be created by data property rights would offset their detrimental effect on innovation.

Swaths of personal data currently lie untapped. With the proper incentive mechanisms in place, this idle data could be mobilized to develop personalized medicines and to fight the COVID-19 outbreak, among many other valuable uses. By making such data more onerous to acquire, property rights in personal data might stifle the assembly of novel datasets that could be used to build innovative products and services.

On the other hand, when dealing with diffuse and complementary data sources, transaction costs become a real issue and the initial allocation of rights can matter a great deal. In such cases, unlike the genetic-testing kits example, it is not certain that users will be able to bargain with firms, especially where their personal information is exchanged by third parties.

If optimal reallocation is unlikely, should property rights go to the person covered by the data or to the collectors (potentially subject to user opt-outs)? Proponents of data property rights assume the first option is superior. But if the goal is to produce groundbreaking new goods and services, granting rights to data collectors might be a superior solution. Ultimately, this is an empirical question.

As Richard Epstein puts it, the goal is to “minimize the sum of errors that arise from expropriation and undercompensation, where the two are inversely related.” Rather than approach the problem with the preconceived notion that initial rights should go to users, policymakers should ensure that data flows to those economic agents who can best extract information and knowledge from it.

As things stand, there is little to suggest that the trade-offs favor creating data property rights. This is not an argument for requisitioning personal information or preventing parties from transferring data as they see fit, but simply for letting markets function, unfettered by misguided public policies.

Critics of big tech companies like Google and Amazon are increasingly focused on the supposed evils of “self-preferencing.” This refers to when digital platforms like Amazon Marketplace or Google Search, which connect competing services with potential customers or users, also offer (and sometimes prioritize) their own in-house products and services. 

The objection, raised by several members and witnesses during a Feb. 25 hearing of the House Judiciary Committee’s antitrust subcommittee, is that it is unfair to the third-party sellers that use those sites for the site’s owner to enjoy special competitive advantages. Is it fair, for example, for Amazon to use the data it gathers from its service to design new products if third-party merchants can’t access the same data? This seemingly intuitive complaint was the basis for the European Commission’s landmark case against Google. 

But we cannot assume that something is bad for competition just because it is bad for certain competitors. A lot of unambiguously procompetitive behavior, like cutting prices, also tends to make life difficult for competitors. The same is true when a digital platform provides a service that is better than alternatives provided by the site’s third-party sellers. 

It’s probably true that Amazon’s access to customer search and purchase data can help it spot products it can undercut with its own versions, driving down prices. But that’s not unusual; most retailers do this, many to a much greater extent than Amazon. For example, you can buy AmazonBasics batteries for less than half the price of branded alternatives, and they’re pretty good.

There’s no doubt this is unpleasant for merchants that have to compete with these offerings. But it is also no different from having to compete with more efficient rivals who have lower costs or better insight into consumer demand. Copying products and seeking ways to offer them with better features or at a lower price, which critics of self-preferencing highlight as a particular concern, has always been a fundamental part of market competition—indeed, it is the primary way competition occurs in most markets. 

Store-branded versions of iPhone cables and Nespresso pods are certainly inconvenient for those companies, but they offer consumers cheaper alternatives. Where such copying may be problematic (say, by deterring investments in product innovation), the law awards and enforces patents and copyrights to reward novel discoveries and creative works, and trademarks to protect brand identity. Absent such intellectual property, this is simply how competition works. 

The fundamental question is “what benefits consumers?” Services like Yelp object that they cannot compete with Google when Google embeds its Google Maps box in Google Search results, while Yelp cannot do the same. But for users, the Maps box adds valuable information to the results page, making it easier to get what they want. Google is not making Yelp worse by making its own product better. Should it have to refrain from offering services that benefit its users because doing so might make competing products comparatively less attractive?

Self-preferencing also enables platforms to promote their offerings in other markets, which is often how large tech companies compete with each other. Amazon has a photo-hosting app that competes with Google Photos and Apple’s iCloud. It recently emailed its customers to promote it. That is undoubtedly self-preferencing, since other services cannot market themselves to Amazon’s customers like this, but if it makes customers aware of an alternative they might not have otherwise considered, that is good for competition. 

This kind of behavior also allows companies to invest in offering services inexpensively, or for free, that they intend to monetize by preferencing their other, more profitable products. For example, Google invests in Android’s operating system and gives much of it away for free precisely because it can encourage Android customers to use the profitable Google Search service. Despite claims to the contrary, it is difficult to see this sort of cross-subsidy as harmful to consumers.

Self-preferencing can even be good for competing services, including third-party merchants. In many cases, it expands the size of their potential customer base. For example, blockbuster video games released by Sony and Microsoft increase demand for games by other publishers because they increase the total number of people who buy PlayStations and Xboxes. This effect is clear on Amazon’s Marketplace, which has grown enormously for third-party merchants even as Amazon has increased the number of its own store-brand products on the site. When Amazon makes its Marketplace more attractive, third-party sellers benefit as well.

All platforms are open or closed to varying degrees. Retail “platforms,” for example, exist on a spectrum on which Craigslist is more open and neutral than eBay, which is in turn more open than Amazon. Each position on this spectrum offers its own benefits and trade-offs for consumers. Indeed, some customers’ biggest complaint against Amazon is that it is too open, filled with third parties who leave fake reviews, offer counterfeit products, or have shoddy returns policies. Part of the role of the site is to try to correct those problems by making better rules, excluding certain sellers, or just by offering similar options directly. 

Regulators and legislators often act as if the more open and neutral, the better, but customers have repeatedly shown that they often prefer less open, less neutral options. And critics of self-preferencing frequently find themselves arguing against behavior that improves consumer outcomes, because it hurts competitors. But that is the nature of competition: what’s good for consumers is frequently bad for competitors. If we have to choose, it’s consumers who should always come first.

The European Court of Justice issued its long-awaited ruling Dec. 9 in the Groupe Canal+ case. The case centered on licensing agreements in which Paramount Pictures granted absolute territorial exclusivity to several European broadcasters, including Canal+.

Back in 2015, the European Commission charged six U.S. film studios, including Paramount, as well as British broadcaster Sky UK Ltd., with illegally limiting access to content. The crux of the EC’s complaint was that the contractual agreements to limit cross-border competition for content distribution ran afoul of European Union competition law. Paramount ultimately settled its case with the commission and agreed to remove the problematic clauses from its contracts. This affected third parties like Canal+, who lost valuable contractual protections. 

While the ECJ ultimately upheld the agreements on what amounts to procedural grounds (Canal+ was unduly affected by a decision to which it was not a party), the case provides yet another example of the European Commission’s misguided stance on absolute territorial licensing, sometimes referred to as “geo-blocking.”

The EC’s long-running efforts to restrict geo-blocking emerge from its attempts to harmonize trade across the EU. Notably, in its Digital Single Market initiative, the Commission envisioned

[A] Digital Single Market is one in which the free movement of goods, persons, services and capital is ensured and where individuals and businesses can s​eamlessly access and exercise online activities under conditions of f​air competition,​ and a high level of consumer and personal data protection, irrespective of their nationality or place of residence.

This policy stance has been endorsed consistently by the European Court of Justice. In the 2011 Murphy decision, for example, the court held that agreements between rights holders and broadcasters infringe European competition law when they categorically prevent the latter from supplying “decoding devices” to consumers located in other member states. More precisely, while rights holders can license their content on a territorial basis, they cannot restrict so-called “passive sales”: broadcasters can be prevented from actively chasing consumers in other member states, but not from serving them altogether. If this sounds Kafkaesque, it’s because it is.

The problem with the ECJ’s vision is that it elides the complex factors that underlie a healthy free-trade zone. Geo-blocking frequently is misunderstood or derided by consumers as an unwarranted restriction on their consumption preferences. It doesn’t feel “fair” or “seamless” when a rights holder can decide who can access their content and on what terms. But that doesn’t mean geo-blocking is a nefarious or socially harmful practice. Quite the contrary: allowing creators to offer different sets of distribution options generates both a return to the creators and more choice for consumers.

In economic terms, geo-blocking allows rights holders to engage in third-degree price discrimination; that is, it gives them the ability to charge different prices to different sets of consumers. This type of pricing will increase total welfare so long as it increases output. As Hal Varian puts it:

If a new market is opened up because of price discrimination—a market that was not previously being served under the ordinary monopoly—then we will typically have a Pareto improving welfare enhancement.

Another benefit of third-degree price discrimination is that, by shifting some economic surplus from consumers to firms, it can stimulate investment in much the same way copyright and patents do. Put simply, the prospect of greater economic rents increases the maximum investment firms will be willing to make in content creation and distribution.
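Varian’s output condition can be illustrated with a toy model. In the sketch below, every number is my own illustrative assumption (nothing here comes from the paper or from any case): a seller of digital content at zero marginal cost faces one high-willingness-to-pay territory and one low-willingness-to-pay territory. A single uniform price leaves the low-value territory unserved; separate per-territory prices open that market, raising total output and, by Varian’s condition, total welfare.

```python
# Toy model of third-degree price discrimination (illustrative numbers only).
# Two territories with linear demand and zero marginal cost:
#   Territory A (high willingness to pay): qA(p) = 100 - p
#   Territory B (low willingness to pay):  qB(p) = 60 - 2p  (zero above p = 30)

def qA(p): return max(0.0, 100 - p)
def qB(p): return max(0.0, 60 - 2 * p)

def best_price(demand):
    # Brute-force search over a fine price grid for the profit-maximizing price.
    prices = [i / 100 for i in range(0, 10001)]
    return max(prices, key=lambda p: p * demand(p))

# Uniform pricing (no geo-blocking): one price must serve both territories.
p_uniform = best_price(lambda p: qA(p) + qB(p))
out_uniform = qA(p_uniform) + qB(p_uniform)

# Geo-blocked pricing: a separate profit-maximizing price in each territory.
pA, pB = best_price(qA), best_price(qB)
out_discrim = qA(pA) + qB(pB)

print(p_uniform, qB(p_uniform))  # the uniform price leaves territory B unserved
print(pA, pB, out_discrim)       # discrimination opens territory B, raising output
```

With these numbers, the uniform monopoly price is high enough that territory B buys nothing; under discrimination, territory A faces the same price as before while territory B is newly served. The firm earns more, B’s consumers gain surplus, and A’s consumers are no worse off, which is exactly the Pareto-improving case Varian describes.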

For these reasons, respecting parties’ freedom to license content as they see fit is likely to produce much more efficient outcomes than annulling those agreements through government-imposed “seamless access” and “fair competition” rules. Part of the value of copyright law is in creating space to contract by protecting creators’ property rights. Without geo-blocking, the enforcement of licensing agreements would become much more difficult. Laws restricting copyright owners’ ability to contract freely reduce allocational efficiency, as well as the incentives to create in the first place. Further, when individual creators have commercial and creative autonomy, they gain a degree of predictability that can ensure they will continue to produce content in the future. 

The European Union would do well to adopt a more nuanced understanding of the contractual relationships between producers and distributors. 

More than two decades after Congress sought to strike a balance between the interests of creators and service providers with the Digital Millennium Copyright Act (DMCA), it is clear that Section 512 of the Copyright Act has failed to create the right incentives to curb online copyright infringement. Indeed, as a May report from the U.S. Copyright Office concluded, the “original intended balance has been tilted askew.”

As laid out in the DMCA, Section 512’s goal was to “preserve strong incentives for service providers and copyright owners to cooperate to detect and deal with copyright infringements” while simultaneously providing “greater certainty to service providers concerning their legal exposure for infringements.” While the law has certainly accomplished the latter, it has been at the expense of the former.

The good news is that Congress has taken notice. Sens. Thom Tillis (R-N.C.) and Chris Coons (D-Del.)—the chair and ranking member, respectively, of the Senate Judiciary Subcommittee on Intellectual Property—have held a series of hearings on potential reforms to the Copyright Act, with another scheduled for Dec. 15. Tillis also recently solicited feedback to guide a discussion draft of reform legislation he intends to make public shortly after the hearing. (Our answers to Tillis’ questionnaire can be found here.) 

The problem

Back in 1998, there were reasons for lawmakers to believe Section 512 would help Internet users, copyright holders and online service providers (OSPs) alike. Holding OSPs culpable for any misuse of copyrighted material in the vast amount of user-generated content they carry would create unreasonable litigation risk and hinder development of online distribution services. That would be bad for Internet users, for copyright holders who benefit from the lawful dissemination of their content and for the OSPs themselves. In that sense, providing OSPs limited liability protection for collaborating to curb piracy was seen as a way to create a healthier online ecosystem to everyone’s advantage.

But as Section 512 has been applied by the courts, OSPs need do little more than respond to takedown notices from copyright holders. At that point, the copyrighted content has already been unlawfully disseminated and damage has already been done. Moreover, in the interim, service providers can continue to monetize the infringing content through ad placement or other mechanisms. In essence, Section 512 has in practice given OSPs an economic incentive to do as little as possible to prevent infringement for as long as possible so that they can avoid costs and continue to generate revenue. That is antithetical to the copyright system, which is supposed to give copyright holders the ability to determine how their content is disseminated and to negotiate compensation.

Such concerns are compounded by the fact that a single unauthorized version of a copyrighted work on one Internet site can quickly be replicated into hundreds of versions at hundreds of sites across the globe. Copyright holders must scour the entire Internet for unauthorized versions of their content in a constant state of notice-sending, only to have the content continue to pop up. That is a costly and time-consuming burden for any copyright holder. The burden is even greater for independent creators who do not have their own content-protection departments. The hours and days they lose policing the Internet for their copyrighted material are hours and days they could be spending on their craft.

Potential solutions

Proper safe harbors should encourage OSPs to help prevent copyrighted content from being improperly disseminated. Ideally, such rules could also encourage OSPs to license content. That would enable them and their users to benefit from the content without litigation risk, but while respecting copyright holders’ rights. One of the benefits of intermediaries is that they can more efficiently negotiate such agreements with copyright holders than the copyright holders could with each of the service providers’ many users.

But the near-complete absence of intermediary liability means OSPs have little incentive to curb piracy or license content. As a condition of receiving safe harbor protection, OSPs should be required to take reasonable steps: 1) to prevent infringement and 2) to stop, upon notice, infringement that has already occurred. Such steps would include:

  • Authentication of Identities. Ensuring online service providers know their users’ true identities would discourage those users from engaging in piracy, while also making it harder for users to simply change account names once caught. It would also help copyright holders to seek redress, including in cases where all they want is to ask users to cease unintentional infringement. Identities could generally remain confidential, disclosed to third parties only when needed to resolve a case of infringement.
  • Education Measures. Unintentional infringement might be avoided if OSPs briefly explained to users the principles of copyright and fair use and asked whether they were transmitting content that contained someone else’s copyrighted work. Such explanations and inquiries should be provided at the point a user seeks to disseminate content. Links could be included pointing to more detailed information on the Copyright Office’s site.
  • Revisions to the Knowledge Standard. According to the text of Section 512, to be protected by the safe harbors, OSPs must not have either “actual knowledge” of infringement or be “aware of facts or circumstances from which infringing activity is apparent.” This awareness of facts or circumstances is often referred to as “red flag” knowledge. But courts have all but read this standard out of the statute. The statute should be revised to make clear that OSPs are required to act when infringement is apparent, even if they have not been alerted to a specific instance of infringement by a copyright holder.
  • Preservation of Rights Management Information. Digital works often have embedded data indicating who the copyright holders are and how the content may be used. OSPs should be held culpable if they negligently, recklessly or knowingly remove that data. Copyright holders should not be required, as is the case today, to demonstrate that the online service provider acted with an intent to facilitate infringement. The lack of accurate rights-management information makes it harder for copyright holders to enforce their rights, as well as for individuals willing to license content to determine who to approach to do so. OSPs should thus have an obligation to ensure that rights-management information included by a copyright holder remains intact, especially since OSPs often monetize that content through advertising or other means.
  • Filtering and Staydown. Allowing all copyright holders to provide “fingerprints” of their content would enable OSPs to prevent copyrighted content from being unlawfully uploaded or otherwise disseminated. It could also help ensure that any copyrighted content that slips through and is subsequently taken down stays down. Preventing unauthorized dissemination through filtering could also reduce the number of takedown notices copyright holders would need to send and OSPs would need to process—saving everyone time, hassle and money. Filtering technologies, such as Google’s Content ID, already exist, although Google does not make Content ID available to all copyright holders. The EU has recently adopted filtering requirements. A U.S. filtering requirement would help to foster a market for the creation of additional filtering solutions.
  • Adoption of Standard Technical Measures. Section 512(i) requires OSPs to accommodate standard technical measures for preventing piracy that have been developed through a voluntary, consensus process. The immunity from liability that the safe harbors provide, however, reduces OSPs’ incentive to collaborate to develop standard technical measures. The Copyright Office should be authorized to certify certain solutions as standard technical measures, and even to commission the creation of additional ones. This would help foster a market for such measures.
  • Improving the Takedown Process. The statute allows copyright holders to provide representative lists in their notices for takedown, rather than require them to itemize every URL for takedown. Yet OSPs often impose technicalities before they will act on a representative list. The Copyright Office should be authorized to create model forms deemed to provide adequate notice, as well as to specify what kind of information is both necessary and sufficient to require takedown.
  • Effective Repeat Infringer Policies. The statute already requires OSPs to have policies to terminate service to repeat infringers, and to reasonably implement those policies. Courts have historically interpreted those requirements rather laxly. The Copyright Office should be authorized to create a model repeat-infringer policy deemed to comply with the requirement.
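To make the filtering-and-staydown idea in the list above concrete, here is a minimal sketch of fingerprint matching at upload time; all class names and sample data are invented for illustration. Production systems such as Content ID rely on robust perceptual fingerprints that survive re-encoding and clipping; the plain cryptographic hash used here, which matches only byte-identical copies, is a simplification chosen for brevity.

```python
import hashlib

# Minimal, illustrative sketch of hash-based "staydown" filtering.
# Copyright holders register fingerprints of their works; the OSP checks
# every upload against the registry before allowing dissemination.

class FingerprintFilter:
    def __init__(self):
        self.known = set()  # fingerprints supplied by copyright holders

    def register(self, content: bytes) -> None:
        # A real system would compute a perceptual fingerprint here;
        # SHA-256 stands in as a stand-alone, deterministic example.
        self.known.add(hashlib.sha256(content).hexdigest())

    def allow_upload(self, content: bytes) -> bool:
        # Block any upload whose fingerprint matches a registered work.
        return hashlib.sha256(content).hexdigest() not in self.known

f = FingerprintFilter()
f.register(b"licensed film master")
print(f.allow_upload(b"licensed film master"))  # blocked: matches a fingerprint
print(f.allow_upload(b"original home video"))   # allowed: no match on file
```

The same registry that blocks the initial upload also enforces staydown: once a work’s fingerprint is on file, re-uploads of that copy are rejected without any further takedown notice from the copyright holder.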

In addition to creating baseline requirements such as the ones listed above, the Copyright Act should be revised to provide additional tools to resolve disputes. Creating a small claims process, as provided in the CASE Act, could alleviate the burdens of litigation for smaller copyright holders, smaller OSPs and individual users. Also, courts ordinarily have authority to issue no-fault injunctions to third parties when doing so is necessary to effectuate their rulings. In the copyright context, even when U.S. courts have ruled that websites have willfully engaged in infringement, ceasing the infringement can be difficult, especially when the parties and their facilities are located outside the United States. Courts should be clearly authorized to issue no-fault injunctions requiring OSPs to block access to sites that the courts have ruled are willfully engaged in mass infringement. Such orders are already available to courts in many other countries and have not, as some hyperbolically predict, “broken the Internet.”

Revising the Copyright Act as described above would encourage OSPs both to prevent the initial infringement and to more effectively curtail continued infringement that has slipped through. OSPs could decline to implement these content-protection requirements, but they would lose the safe harbors and be subject to the ordinary standards of copyright liability. OSPs also might more widely choose to license copyrighted works that are likely to appear on their platforms. That would benefit copyright holders and Internet consumers alike. The providers themselves might even find it leads to increased use of their service—as well as increased profits.