
Output of the LG Research AI to the prompt: “a system of copyright for artificial intelligence”

Not only have digital-image generators like Stable Diffusion, DALL-E, and Midjourney—which make use of deep-learning models and other artificial-intelligence (AI) systems—created some incredible (and sometimes creepy – see above) visual art, but they’ve engendered a good deal of controversy, as well. Human artists have banded together as part of a fledgling anti-AI campaign; lawsuits have been filed; and policy experts have been trying to think through how these machine-learning systems interact with various facets of the law.

Debates about the future of AI have particular salience for intellectual-property rights. Copyright is notoriously difficult to protect online, and these AI systems add an additional wrinkle: it can at least be argued that their outputs can be unique creations. There are also, of course, moral and philosophical objections to those arguments, with many grounded in the supposition that only a human (or something with a brain, like humans) can be creative.

Leaving aside for the moment a potentially pitched battle over the definition of “creation,” we should be able to find consensus that at least some of these systems produce unique outputs and are not merely cutting and pasting other pieces of visual imagery into a new whole. That is, at some level, the machines are engaging in a rudimentary sort of “learning” about how humans arrange colors and lines when generating images of certain subjects. The machines then reconstruct this process and produce a new set of lines and colors that conform to the patterns they found in the human art.

But that isn’t the end of the story. Even if some of these systems’ outputs are unique and noninfringing, the way the machines learn—by ingesting existing artwork—can raise a number of thorny issues. Indeed, these systems are arguably infringing copyright during the learning phase, and such use may not survive a fair-use analysis.

We are still in the early days of thinking through how this new technology maps onto the law. Answers will inevitably come, but for now, there are some very interesting questions about the intellectual-property implications of AI-generated art, which I consider below.

The Points of Collision Between Intellectual Property Law and AI-Generated Art

AI-generated art is not a single thing. It is, rather, a collection of differing processes, each with different implications for the law. For the purposes of this post, I am going to deal with image-generation systems that use "generative adversarial networks" (GANs) and diffusion models. The various implementations of each will differ in some respects, but from what I understand, the ways that these techniques can be used to generate all sorts of media are sufficiently similar that we can begin to sketch out some of their legal implications.

A (very) brief technical description

This is a very high-level overview of how these systems work; for a more detailed (but very readable) description, see here.

A GAN is a type of machine-learning model that consists of two parts: a generator and a discriminator. The generator is trained to create new images that look like they come from a particular dataset, while the discriminator is trained to distinguish the generated images from real images in the dataset. The two parts are trained together in an adversarial manner, with the generator trying to produce images that can fool the discriminator and the discriminator trying to correctly identify the generated images.
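
To make that two-part structure concrete, here is a minimal, hypothetical sketch of a GAN training loop in PyTorch. It is illustrative only: the tiny fully connected networks, dimensions, and learning rates are my own assumptions for the example, not a description of any production system.

```python
# Minimal GAN training sketch (illustrative only; not any production system).
# Assumes batches of flattened 28x28 grayscale images are passed to train_step().
import torch
import torch.nn as nn

IMG_DIM, NOISE_DIM = 28 * 28, 64

generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),        # outputs a "fake" image
)
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),           # probability the input is "real"
)

bce = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the discriminator to tell real images from generated ones.
    fake_images = generator(torch.randn(batch, NOISE_DIM)).detach()
    d_loss = bce(discriminator(real_images), real_labels) + \
             bce(discriminator(fake_images), fake_labels)
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the generator to produce images the discriminator labels "real".
    g_loss = bce(discriminator(generator(torch.randn(batch, NOISE_DIM))), real_labels)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```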

A diffusion model, by contrast, analyzes the distribution of information in an image, as noise is progressively added to it. This kind of algorithm analyzes characteristics of sample images—like the distribution of colors or lines—in order to “understand” what counts as an accurate representation of a subject (i.e., what makes a picture of a cat look like a cat and not like a dog).

For example, in the generation phase, systems like Stable Diffusion start with randomly generated noise, and work backward in “denoising” steps to essentially “see” shapes:

The sampled noise is predicted so that if we subtract it from the image, we get an image that’s closer to the images the model was trained on (not the exact images themselves, but the distribution – the world of pixel arrangements where the sky is usually blue and above the ground, people have two eyes, cats look a certain way – pointy ears and clearly unimpressed).
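
The sketch below shows the shape of that denoising loop in plain Python. It is a hedged toy, not Stable Diffusion's actual sampler: `predict_noise` stands in for a trained noise-prediction model, the fixed step size and dummy model are assumptions made purely so the example runs, and real systems add learned noise schedules, latent spaces, and text conditioning.

```python
# Toy denoising loop in the spirit of diffusion sampling (illustrative only).
# predict_noise(x, t) stands in for a trained model that estimates the noise
# present in image x at timestep t; real systems also condition on a text prompt.
import torch

def sample(predict_noise, shape=(1, 3, 64, 64), steps=50, step_size=0.1):
    x = torch.randn(shape)                      # start from pure noise
    for t in reversed(range(steps)):
        predicted_noise = predict_noise(x, t)
        x = x - step_size * predicted_noise     # move toward the learned distribution
    return x

# Dummy "model" so the sketch runs end to end; a trained network would go here.
image = sample(lambda x, t: 0.1 * x)
```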

It is relevant here that, once networks using these techniques are trained, they do not need to rely on saved copies of the training images in order to generate new images. Of course, it's possible that some implementations might be designed in a way that does save copies of those images, but for the purposes of this post, I will assume we are talking about systems that retain copies of existing works only during the training phase. The models produced during training are, in essence, instructions to a different piece of software about how to start from a user's text prompt and a palette of pure noise, and progressively "discover" signal in that noise until some new image emerges.

Input-stage use of intellectual property

OpenAI, the creator of some of the most popular AI tools, is not shy about its use of protected works in the training phase of its algorithms. In comments to the U.S. Patent and Trademark Office (PTO), it notes that:

…[m]odern AI systems require large amounts of data. For certain tasks, that data is derived from existing publicly accessible “corpora”… of data that include copyrighted works. By analyzing large corpora (which necessarily involves first making copies of the data to be analyzed), AI systems can learn patterns inherent in human-generated data and then use those patterns to synthesize similar data which yield increasingly compelling novel media in modalities as diverse as text, image, and audio. (emphasis added).

Thus, at the training stage, the most popular forms of machine-learning systems require making copies of existing works. And where the material being used is neither in the public domain nor licensed, an infringement occurs (as Getty Images notes in the suit it recently filed against Stability AI). Thus, some affirmative defense is needed to excuse the infringement.

Toward this end, OpenAI believes that its algorithmic training should qualify as a fair use. Other major services that use these AI techniques to “learn” from existing media would likely make similar arguments. But, at least in the way that OpenAI has framed the fair-use analysis (that these uses are sufficiently “transformative”), it’s not clear that they should qualify.

The purpose and character of the use

In brief, fair use—found in 17 USC § 107—provides for an affirmative defense against infringement when the use is  “for purposes such as criticism, comment, news reporting, teaching…, scholarship, or research.” When weighing a fair-use defense, a court must balance a number of factors:

  1. the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;
  2. the nature of the copyrighted work;
  3. the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and
  4. the effect of the use upon the potential market for or value of the copyrighted work.

OpenAI's fair-use claim is rooted in the first factor: the purpose and character of the use. I should note, then, that what follows is solely a consideration of Factor 1, with special attention paid to whether these uses are "transformative." But it is important to stipulate that fair-use analysis is a multi-factor test and that, even within the first factor, it's not mandatory that a use be "transformative." It is entirely possible that a court balancing all of the factors could, indeed, find that OpenAI is engaged in fair use, even if it does not agree that it is "transformative."

Whether the use of copyrighted works to train an AI is "transformative" is certainly a novel question, but it is likely answered through an observation that the U.S. Supreme Court made in Campbell v. Acuff-Rose Music:

[W]hat Sony said simply makes common sense: when a commercial use amounts to mere duplication of the entirety of an original, it clearly “supersede[s] the objects,”… of the original and serves as a market replacement for it, making it likely that cognizable market harm to the original will occur… But when, on the contrary, the second use is transformative, market substitution is at least less certain, and market harm may not be so readily inferred.

A key question, then, is whether training an AI on copyrighted works amounts to mere "duplication of the entirety of an original" or is sufficiently "transformative" to support a fair-use finding. OpenAI, as noted above, believes its use is highly transformative. According to its comments:

Training of AI systems is clearly highly transformative. Works in training corpora were meant primarily for human consumption for their standalone entertainment value. The "object of the original creation," in other words, is direct human consumption of the author's ​expression.​ Intermediate copying of works in training AI systems is, by contrast, "non-expressive": the copying helps computer programs learn the patterns inherent in human-generated media. The aim of this process—creation of a useful generative AI system—is quite different than the original object of human consumption. The output is different too: nobody looking to read a specific webpage contained in the corpus used to train an AI system can do so by studying the AI system or its outputs. The new purpose and expression are thus both highly transformative.

But the way that OpenAI frames its system works against its interests in this argument. As noted above, and reinforced in the immediately preceding quote, an AI system like DALL-E or Stable Diffusion is actually made of at least two distinct pieces. The first is a piece of software that ingests existing works and creates a file that can serve as instructions to the second piece of software. The second piece of software then takes the output of the first part and can produce independent results. Thus, there is a clear discontinuity in the process, whereby the ultimate work created by the system is disconnected from the creative inputs used to train the software.

Therefore, contrary to what OpenAI asserts, the protected works are indeed ingested into the first part of the system "for their standalone entertainment value." That is to say, the software is learning what counts as "standalone entertainment value" and, therefore, the works must be used in those terms.

Surely, a computer is not sitting on a couch and surfing for its own entertainment. But it is precisely for that "standalone entertainment value" that the first piece of software is being shown copyrighted material. By contrast, parody or "remixing" uses incorporate the work into some secondary expression that transforms the input. The way these systems work is to learn what makes a piece entertaining and then to discard that piece altogether. Moreover, this use of art qua art most certainly interferes with the existing market insofar as it occurs in lieu of reaching a licensing agreement with rightsholders.

The 2nd U.S. Circuit Court of Appeals dealt with an analogous case. In American Geophysical Union v. Texaco, the 2nd Circuit considered whether Texaco’s photocopying of scientific articles produced by the plaintiffs qualified for a fair-use defense. Texaco employed between 400 and 500 research scientists and, as part of supporting their work, maintained subscriptions to a number of scientific journals. It was common practice for Texaco’s scientists to photocopy entire articles and save them in a file.

The plaintiffs sued for copyright infringement. Texaco asserted that photocopying by its scientists for the purposes of furthering scientific research—that is to train the scientists on the content of the journal articles—should count as a fair use, at least in part because it was sufficiently “transformative.” The 2nd Circuit disagreed:

The “transformative use” concept is pertinent to a court’s investigation under the first factor because it assesses the value generated by the secondary use and the means by which such value is generated. To the extent that the secondary use involves merely an untransformed duplication, the value generated by the secondary use is little or nothing more than the value that inheres in the original. Rather than making some contribution of new intellectual value and thereby fostering the advancement of the arts and sciences, an untransformed copy is likely to be used simply for the same intrinsic purpose as the original, thereby providing limited justification for a finding of fair use… (emphasis added).

As in the case at hand, the 2nd Circuit observed that making full copies of the scientific articles was solely for the consumption of the material itself. A rejoinder, of course, is that training these AI systems surely advances scientific research and, thus, does foster the “advancement of the arts and sciences.” But in American Geophysical Union, where the secondary use was explicitly for the creation of new and different scientific outputs, the court still held that making copies of one scientific article in order to learn and produce new scientific innovations did not count as “transformative.”

What this case demonstrates is that one cannot merely assert that some social goal will be advanced in the future by permitting an exception to copyright protection today. As the 2nd Circuit put it:

…the dominant purpose of the use is a systematic institutional policy of multiplying the available number of copies of pertinent copyrighted articles by circulating the journals among employed scientists for them to make copies, thereby serving the same purpose for which additional subscriptions are normally sold, or… for which photocopying licenses may be obtained.

The secondary use itself must be transformative and different. Where an AI system ingests copyrighted works, that use is simply not transformative; it is using the works in their original sense in order to train a system to be able to make other original works. As in American Geophysical Union, the AI creators are completely free to seek licenses from rightsholders in order to train their systems.

Finally, there is a sense in which this machine learning might not infringe copyright at all. To my knowledge, the technology does not yet exist, but if it were possible for a machine to somehow "see" in the way that humans do—without using stored copies of copyrighted works—then merely "learning" from those works, insofar as we can call it learning, probably would not violate copyright law.

Do the outputs of these systems violate intellectual property laws?

The outputs of GANs and diffusion models may or may not violate IP laws, but there is nothing inherent in the processes described above to dictate that they must. As noted, the most common AI systems do not save copies of existing works, but merely “instructions” (more or less) on how to create new works that conform to patterns they found by examining existing work. If we assume that a system isn’t violating copyright at the input stage, it’s entirely possible that it can produce completely new pieces of art that have never before existed and do not violate copyright.

They can, however, be made to violate IP rights. For example, trademark violations appear to be one of the most popular uses of these AI systems by end users. To take but one example, a quick search of Google Images for “midjourney iron man” returns a slew of images that almost certainly violate trademarks for the character Iron Man. Similarly, these systems can be instructed to generate art that is not just “in the style” of a particular artist, but that very closely resembles existing pieces. In this sense, the system would be making a copy that theoretically infringes. 

There is a common bug in such systems that leads to outputs that are more likely to violate copyright in this way. Known as "overfitting," it occurs when the training phase of these AI systems is presented with samples that contain too many instances of a particular image. This yields a model that encodes too much information about that specific image, such that when the AI generates a new image, it is constrained to producing something very close to the original.
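
A toy numerical sketch of why this happens: if a single work is heavily over-represented in the training data, whatever the model "averages" over that data ends up sitting very close to that work. The stand-in "model" below just learns the mean of its training set, which is a deliberate oversimplification for illustration, not how diffusion models are actually trained.

```python
# Toy illustration of overfitting via duplicated training samples.
# A "model" that only learns the average of its training data will land very
# close to any image that appears many times in the training set.
import numpy as np

rng = np.random.default_rng(0)
unique_images = rng.random((100, 8))      # 100 distinct toy "images" (8 pixels each)
famous_image = rng.random(8)              # one work that gets duplicated heavily

balanced = unique_images
skewed = np.vstack([unique_images, np.tile(famous_image, (900, 1))])

for name, dataset in [("balanced", balanced), ("skewed", skewed)]:
    model_output = dataset.mean(axis=0)   # stand-in for "what the model generates"
    distance = np.linalg.norm(model_output - famous_image)
    print(f"{name}: distance from the duplicated work = {distance:.3f}")
# The skewed dataset produces an output far closer to the duplicated work.
```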

An argument can also be made that generating art “in the style of” a famous artist violates moral rights (in jurisdictions where such rights exist).

At least in the copyright space, cases like Sony are going to become crucial. Does the user side of these AI systems have substantial noninfringing uses? If so, the firms that host software for end users could avoid secondary-infringement liability, and the onus would fall on users to avoid violating copyright laws. At the same time, it seems plausible that legislatures could place some obligation on these providers to implement filters to mitigate infringement by end users.

Opportunities for New IP Commercialization with AI

There are a number of ways that AI systems may inexcusably infringe on intellectual-property rights. As a best practice, I would encourage the firms that operate these services to seek licenses from rightsholders. While this would surely be an expense, it also opens new opportunities for both sides to generate revenue.

For example, an AI firm could develop its own version of YouTube's Content ID that allows creators to opt their work into training. For some well-known artists, this could be negotiated with an upfront licensing fee. On the user side, any artist who has opted in could then be selected as a "style" for the AI to emulate. When users generate an image, a royalty payment to the artist would be generated. Creators would also have the option to remove their influence from the system if they so desired.
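
As a purely hypothetical sketch of how the bookkeeping for such a scheme might look (every name, fee, and rate below is invented for illustration), an opt-in registry plus a per-generation royalty ledger could be as simple as:

```python
# Hypothetical opt-in registry with per-generation royalties (all values invented).
from dataclasses import dataclass, field

@dataclass
class ArtistLicense:
    artist: str
    opted_in: bool = True
    per_image_royalty: float = 0.05     # hypothetical per-generation payment

@dataclass
class RoyaltyLedger:
    licenses: dict[str, ArtistLicense] = field(default_factory=dict)
    balances: dict[str, float] = field(default_factory=dict)

    def register(self, lic: ArtistLicense) -> None:
        self.licenses[lic.artist] = lic

    def opt_out(self, artist: str) -> None:
        # Creators can withdraw their style from future generations.
        if artist in self.licenses:
            self.licenses[artist].opted_in = False

    def record_generation(self, artist: str) -> bool:
        lic = self.licenses.get(artist)
        if lic is None or not lic.opted_in:
            return False                # style unavailable to users
        self.balances[artist] = self.balances.get(artist, 0.0) + lic.per_image_royalty
        return True

ledger = RoyaltyLedger()
ledger.register(ArtistLicense("example_artist"))
ledger.record_generation("example_artist")
print(ledger.balances)                  # {'example_artist': 0.05}
```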

Undoubtedly, there are other ways to monetize the relationship between creators and the use of their work in AI systems. Ultimately, the firms that run these systems will not be able to simply wish away IP laws. There are going to be opportunities for creators and AI firms to both succeed, and the law should help to generate that result.

States seeking broadband-deployment grants under the federal Broadband Equity, Access, and Deployment (BEAD) program created by last year’s infrastructure bill now have some guidance as to what will be required of them, with the National Telecommunications and Information Administration (NTIA) issuing details last week in a new notice of funding opportunity (NOFO).

All things considered, the NOFO could be worse. It is broadly in line with congressional intent, insofar as the requirements aim to direct the bulk of the funding toward connecting the unconnected. It declares that the BEAD program’s principal focus will be to deploy service to “unserved” areas that lack any broadband service or that can only access service with download speeds of less than 25 Mbps and upload speeds of less than 3 Mbps, as well as to “underserved” areas with speeds of less than 100/20 Mbps. One may quibble with the definition of “underserved,” but these guidelines are within the reasonable range of deployment benchmarks.
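
Reading those thresholds as simple rules, the classification might look like the sketch below. Note that the reading in which failing either the download or the upload benchmark drops a location into the lower tier is my assumption about how the tiers are applied; the function name is purely illustrative.

```python
# Classify a location by the BEAD speed tiers described above (25/3 and 100/20 Mbps).
def bead_service_tier(download_mbps: float, upload_mbps: float) -> str:
    if download_mbps < 25 or upload_mbps < 3:
        return "unserved"
    if download_mbps < 100 or upload_mbps < 20:
        return "underserved"
    return "served"

print(bead_service_tier(10, 1))     # unserved
print(bead_service_tier(50, 10))    # underserved
print(bead_service_tier(300, 30))   # served
```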

There are, however, also some subtle (and not-so-subtle) mandates the NTIA would introduce that could work at cross-purposes with the BEAD program’s larger goals and create damaging precedent that could harm deployment over the long term.

Some NOFO Requirements May Impinge on Broadband Deployment

The infrastructure bill’s statutory text declares that:

Access to affordable, reliable, high-speed broadband is essential to full participation in modern life in the United States.

In keeping with that commitment, the bill established the BEAD program to finance the buildout of as much high-speed broadband access as possible for as many people as possible. This is necessarily an exercise in economizing and managing tradeoffs. There are many unserved consumers who need to be connected or underserved consumers who need access to faster connections, but resources are finite.

It is a relevant background fact to note that broadband speeds have grown consistently faster in recent decades, while quality-adjusted prices for broadband service have fallen. This context is important to consider given the prevailing inflationary environment into which BEAD funds will be deployed. The broadband industry is healthy, but it is certainly subject to distortion by well-intentioned but poorly directed federal funds.

This is particularly important given that Congress exempted the BEAD program from review under the Administrative Procedure Act (APA), which otherwise would have required NTIA to undertake much more stringent processes to demonstrate that implementation is effective and aligned with congressional intent.

Which is why it is disconcerting that some of the requirements put forward by NTIA could serve to deplete BEAD funding without producing an appropriate return. In particular, some elements of the NOFO suggest that NTIA may be interested in using BEAD funding as a means to achieve de facto rate regulation on broadband.

The Infrastructure Act requires that each recipient of BEAD funding must offer at least one low-cost broadband service option for eligible low-income consumers. For those low-cost plans, the NOFO bars the use of data caps, also known as “usage-based billing” or UBB. As Geoff Manne and Ian Adams have noted:

In simple terms, UBB allows networks to charge heavy users more, thereby enabling them to recover more costs from these users and to keep prices lower for everyone else. In effect, UBB ensures that the few heaviest users subsidize the vast majority of other users, rather than the other way around.

Thus, data caps enable providers to optimize revenue by tailoring plans to relatively high-usage or low-usage consumers and to build out networks in ways that meet patterns of actual user demand.
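
A toy numerical illustration of that cost-recovery logic (all figures below are invented): with a flat price, light users cross-subsidize the heaviest user; with a data cap and overage charge, the heavy user contributes more and the base price for everyone else can fall.

```python
# Invented numbers illustrating the cost-recovery logic behind usage-based billing (UBB).
users_gb = [50, 80, 100, 1200]          # monthly usage of four hypothetical subscribers
network_cost = 200.0                    # hypothetical monthly cost to recover

# Flat pricing: everyone pays the same share, so light users subsidize the heavy user.
flat_price = network_cost / len(users_gb)

# UBB: a lower base price plus an overage charge above a data cap.
base_price, cap_gb, overage_per_gb = 30.0, 300, 0.10
ubb_bills = [base_price + max(0, gb - cap_gb) * overage_per_gb for gb in users_gb]

print(f"flat price per user: ${flat_price:.2f}")              # $50.00 each
print("UBB bills:", [f"${bill:.2f}" for bill in ubb_bills])   # $30, $30, $30, $120
print(f"UBB revenue: ${sum(ubb_bills):.2f} vs cost ${network_cost:.2f}")
```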

While not explicitly a regime to regulate rates, using the inducement of BEAD funds to dictate that providers may not impose data caps would have some of the same substantive effects. Of course, this would apply only to low-cost plans, so one might expect relatively limited impact. The larger concern is the precedent it would establish, whereby regulators could deem it appropriate to impose their preferences on broadband pricing, notwithstanding market forces.

But the actual impact of these de facto price caps could potentially be much larger. In one section, the NOFO notes that each “eligible entity” for BEAD funding (states, U.S. territories, and the District of Columbia) also must include in its initial and final proposals “a middle-class affordability plan to ensure that all consumers have access to affordable high-speed internet.”

The requirement to ensure “all consumers” have access to “affordable high-speed internet” is separate and apart from the requirement that BEAD recipients offer at least one low-cost plan. The NOFO is vague about how such “middle-class affordability plans” will be defined, suggesting that the states will have flexibility to “adopt diverse strategies to achieve this objective.”

For example, some Eligible Entities might require providers receiving BEAD funds to offer low-cost, high-speed plans to all middle-class households using the BEAD-funded network. Others might provide consumer subsidies to defray subscription costs for households not eligible for the Affordable Connectivity Benefit or other federal subsidies. Others may use their regulatory authority to promote structural competition. Some might assign especially high weights to selection criteria relating to affordability and/or open access in selecting BEAD subgrantees. And others might employ a combination of these methods, or other methods not mentioned here.

The concern is that, coupled with the prohibition on data caps for low-cost plans, states are being given a clear instruction: put as many controls on providers as you can get away with. It would not be surprising if many, if not all, state authorities simply imported the data-cap prohibition and other restrictions from the low-cost option onto plans meant to satisfy the “middle-class affordability plan” requirements.

Focusing on the Truly Unserved and Underserved

The “middle-class affordability” requirements underscore another deficiency of the NOFO, which is the extent to which its focus drifts away from the unserved. Given widely available high-speed broadband access and the acknowledged pressing need to connect the roughly 5% of the country (mostly in rural areas) who currently lack that access, it is a complete waste of scarce resources to direct BEAD funds to the middle class.

Some of the document’s other problems, while less dramatic, are deficient in a similar respect. For example, the NOFO requires that states consider government-owned networks (GON) and open-access models on the same terms as private providers; it also encourages states to waive existing laws that bar GONs. The problem, of course, is that GONs are best thought of as a last resort to be deployed only where no other provider is available. By and large, GONs have tended to become utter failures that require constant cross-subsidization from taxpayers and that crowd out private providers.

Similarly, the NOFO heavily prioritizes fiber, both in terms of funding priorities and in the definitions it sets forth to deem a location “unserved.” For instance, it lays out:

For the purposes of the BEAD Program, locations served exclusively by satellite, services using entirely unlicensed spectrum, or a technology not specified by the Commission of the Broadband DATA Maps, do not meet the criteria for Reliable Broadband Service and so will be considered “unserved.”

In many rural locations, wireless internet service providers (WISPs) use unlicensed spectrum to provide fast and reliable broadband. The NOFO could be interpreted as deeming homes served by such WISPs as unserved or underserved, while preferencing the deployment of less cost-efficient fiber. This would be another example of wasteful priorities.

Finally, the BEAD program requires states to forbid “unjust or unreasonable network management practices.” This is obviously a nod to the “Internet conduct standard” and other network-management rules promulgated by the Federal Communications Commission’s since-withdrawn 2015 Open Internet Order. As such, it would serve to provide cover for states to impose costly and inappropriate net-neutrality obligations on providers.

Conclusion

The BEAD program represents a straightforward opportunity to narrow, if not close, the digital divide. If NTIA can restrain itself, these funds could go quite a long way toward solving the hard problem of connecting more Americans to the internet. Unfortunately, as it stands, some of the NOFO’s provisions threaten to lose that proper focus.

Congress opted not to include these potentially onerous requirements in the original infrastructure bill, yet NTIA now seeks to impose them without an APA rulemaking. It would be best if the agency returned to the NOFO with clarifications that would fix these deficiencies.

Though details remain scant (and thus, any final judgment would be premature),  initial word on the new Trans-Atlantic Data Privacy Framework agreed to, in principle, by the White House and the European Commission suggests that it could be a workable successor to the Privacy Shield agreement that was invalidated by the Court of Justice of the European Union (CJEU) in 2020.

This new framework agreement marks the third attempt to create a lasting and stable legal regime to permit the transfer of EU citizens’ data to the United States. In the wake of the 2013 revelations by former National Security Agency contractor Edward Snowden about the extent of the United States’ surveillance of foreign nationals, the CJEU struck down (in its 2015 Schrems decision) the then-extant “safe harbor” agreement that had permitted transatlantic data flows. 

In the 2020 Schrems II decision (both cases were brought by Austrian privacy activist Max Schrems), the CJEU similarly invalidated the Privacy Shield, which had served as the safe harbor’s successor agreement. In Schrems II, the court found that U.S. foreign surveillance laws were not strictly proportional to the intelligence community’s needs and that those laws also did not give EU citizens adequate judicial redress.  

This new “Privacy Shield 2.0” agreement, announced during President Joe Biden’s recent trip to Brussels, is intended to address the issues raised in the Schrems II decision. In relevant part, the joint statement from the White House and European Commission asserts that the new framework will: “[s]trengthen the privacy and civil liberties safeguards governing U.S. signals intelligence activities; Establish a new redress mechanism with independent and binding authority; and Enhance its existing rigorous and layered oversight of signals intelligence activities.”

In short, the parties believe that the new framework will ensure that U.S. intelligence gathering is proportional and that there is an effective forum for EU citizens caught up in U.S. intelligence-gathering to vindicate their rights.

As I and my co-authors (my International Center for Law & Economics colleague Mikołaj Barczentewicz and Michael Mandel of the Progressive Policy Institute) detailed in an issue brief last fall, the stakes are huge. While the issue is often framed in terms of social-media use, transatlantic data transfers are implicated in an incredibly large swath of cross-border trade:

According to one estimate, transatlantic trade generates upward of $5.6 trillion in annual commercial sales, of which at least $333 billion is related to digitally enabled services. Some estimates suggest that moderate increases in data-localization requirements would result in a €116 billion reduction in exports from the EU.

The agreement will be implemented on this side of the Atlantic by a forthcoming executive order from the White House, at which point it will be up to EU courts to determine whether the agreement adequately restricts U.S. intelligence activities and protects EU citizens’ rights. For now, however, it appears at a minimum that the White House took the CJEU’s concerns seriously and made the right kind of concessions to reach agreement.

And now, once the framework is finalized, we just have to sit tight and wait for Mr. Schrems’ next case.

All too frequently, vocal advocates for “Internet Freedom” imagine it exists along just a single dimension: the extent to which it permits individuals and firms to interact in new and unusual ways.

But that is not the sum of the Internet’s social value. The technologies that underlie our digital media remain a relatively new means to distribute content. It is not just the distributive technology that matters, but also the content that is distributed. Thus, the norms and laws that facilitate this interaction of content production and distribution are critical.

Sens. Patrick Leahy (D-Vt.) and Thom Tillis (R-N.C.)—the chair and ranking member, respectively, of the Senate Judiciary Committee’s Subcommittee on Intellectual Property—recently introduced legislation that would require online service providers (OSPs) to comply with a slightly heightened set of obligations to deter copyright piracy on their platforms. This couldn’t come at a better time.

S. 3880, the SMART Copyright Act, would amend Section 512 of the Copyright Act, originally enacted as part of the Digital Millennium Copyright Act of 1998. Section 512, among other things, provides safe harbor for OSPs for copyright infringements by their users. The expectation at the time was that OSPs would work voluntarily with rights holders to develop industry best practices to deal with pirated content, while also allowing the continued growth of the commercial Internet.

Alas, it has become increasingly apparent in the nearly quarter-century since the DMCA was passed that the law has not adequately kept pace with the technological capabilities of digital piracy. In April 2020 alone, U.S. consumers logged 725 million visits to pirate sites for movies and television programming. Close to 90% of those visits were attributable to illegal streaming services that use internet protocol television to distribute pirated content. Such services now serve more than 9 million U.S. subscribers and generate more than $1 billion in annual revenue.

Globally, there are more than 26.6 billion annual illicit views of U.S.-produced movies and 126.7 billion views of U.S.-produced television episodes. A report produced for the U.S. Chamber of Commerce by NERA Economic Consulting estimates the annual impact to the United States to be $30 to $70 billion in lost revenue, 230,000 to 560,000 lost jobs, and between $45 and $115 billion in lower GDP.

Thus far, the most effective preventative measures produced have been filtering solutions adopted by YouTube, Facebook, and Audible Magic, but neither filtering nor other solutions have been adopted industrywide. As the U.S. Copyright Office has observed:

Throughout the Study, the Office heard from participants that Congress’ intent to have multi-stakeholder consensus drive improvements to the system has not been borne out in practice. By way of example, more than twenty years after passage of the DMCA, although some individual OSPs have deployed DMCA+ systems that are primarily open to larger content owners, not a single technology has been designated a “standard technical measure” under section 512(i). While numerous potential reasons were cited for this failure— from a lack of incentives for ISPs to participate in standards to the inappropriateness of one-size-fits-all technologies—the end result is that few widely-available tools have been created and consistently implemented across the internet ecosystem. Similarly, while various voluntary initiatives have been undertaken by different market participants to address the volume of true piracy within the system, these initiatives, although initially promising, likewise have suffered from various shortcomings, from limited participation to ultimate ineffectiveness.

Given the lack of standard technical measures (STMs), the Leahy-Tillis bill would grant the Office of the Librarian of Congress (LOC) broad latitude to recommend STMs for everything from off-the-shelf software to open-source software to general technical strategies that can be applied to a wide variety of systems. This would include the power to initiate public rulemakings in which it could either propose new STMs or revise or rescind existing STMs. The STMs could be as broad or as narrow as the LOC deems appropriate, including being tailored to specific types of content and specific types of providers. Following rulemaking, subject firms would have at least one year to adopt a given STM.

Critically, the SMART Copyright Act would not hold OSPs liable for the infringing content itself, but for failure to make reasonable efforts to accommodate the STM (or for interference with the STM). Courts finding an OSP to have violated their obligation for good-faith compliance could award an injunction, damages, and costs.

The SMART Copyright Act is a directionally correct piece of legislation with two important caveats: it all depends on the kinds of STMs that the LOC recommends and on how a “violation” is determined for the purposes of awarding damages.

The law would magnify the incentive for private firms to work together with rights holders to develop STMs that more reasonably recruit OSPs into the fight against online piracy. In this sense, the LOC would be best situated as a convener, encouraging STMs to emerge from the broad group of OSPs and rights holders. The fact that the LOC would be able to adopt STMs with or without stakeholders’ participation should provide more incentive for collaboration among the relevant parties.

Short of a voluntary set of STMs, the LOC could nonetheless rely on the technical suggestions and concerns of the multistakeholder community to discern a minimum viable set of practices that constitute best efforts to control piracy. The least desirable outcome—and, I suspect, the one most susceptible to failure—would be for the LOC to examine and select specific technologies. If implemented sensibly, the SMART Copyright Act would create a mechanism to enforce the original goals of Section 512.

The damages provisions are likewise directionally correct but need more clarity. Repeat “violations” allow courts to multiply damages awards. But there is no definition of what counts as a “violation,” nor is there adequate clarity about how a “violation” interacts with damages. For example, is a single infringement on a platform a “violation” such that if three occur, the platform faces treble damages for all the infringements in a single case? That seems unlikely.

More reasonable would be to interpret the provision as saying that a final adjudication that the platform behaved unreasonably is what counts for the purposes of calculating whether damages are multiplied. Then, within each adjudication, damages are calculated for all infringements, up to the statutory damages cap. This interpretation would put teeth in the law, but it’s just one possible interpretation. Congress would need to ensure the final language is clear.

An even better approach would be to make Section 512's safe harbor contingent on an OSP's reasonable compliance. Unreasonable behavior, in that case, provides a much more straightforward way to assess damages, without needing to leave it up to court interpretations about what counts as a "violation." Particularly since courts have historically tended to interpret the DMCA in ways that are unfavorable to rights holders (e.g., "red flag" knowledge), it would be much better to create a simple standard here.

This is not to say there are no potential problems. Among the concerns that surround promulgating new STMs are potentially creating cybersecurity vulnerabilities, sources for privacy leaks, or accidentally chilling speech. Of course, it’s possible that there will be costs to implementing an STM, just as there are costs when private firms operate their own content-protection mechanisms. But just because harms can happen doesn’t mean they will happen, or that they are insurmountable when they do. The criticisms that have emerged have so far taken on the breathless quality of the empirically unfounded claims that 2012’s SOPA/PIPA legislation would spell doom for the Internet. If Section 512 reforms are well-calibrated and sufficiently flexible to adapt to the market realities, I think we can reasonably expect them to be, on net, beneficial.

Toward this end, the SMART Copyright Act contemplates, for each proposed STM, a public comment period and at least one meeting with relevant stakeholders, to allow time to understand its likely costs and benefits. This process would provide ample opportunities to alert the LOC to potential shortcomings.

But the criticisms do suggest a potentially valuable change to the bill’s structure. If a firm does indeed discover that a particular STM, in practice, leads to unacceptable security or privacy risks, or is systematically biased against lawful content, there should be a legal mechanism that would allow for good-faith compliance while also mitigating STMs’ unforeseen flaws. Ideally, this would involve working with the LOC in an iterative process to refine relevant compliance obligations.

Congress will soon be wrapped up in the volatile midterm elections, which could make it difficult for relatively low-salience issues like copyright to gain traction. Nonetheless, the Leahy-Tillis bill marks an important step toward addressing online piracy, and Congress should move deliberatively toward that goal.

Activists who railed against the Stop Online Piracy Act (SOPA) and the PROTECT IP Act (PIPA) a decade ago today celebrate the 10th anniversary of their day of protest, which they credit with sending the bills down to defeat.

Much of the anti-SOPA/PIPA campaign was based on a gauzy notion of “realizing [the] democratizing potential” of the Internet. Which is fine, until it isn’t.

But despite the activists’ temporary legislative victory, the methods of combating digital piracy that SOPA/PIPA contemplated have been employed successfully around the world. It may, indeed, be time for the United States to revisit that approach, as the very real problems the legislation sought to combat haven’t gone away.

From the perspective of rightsholders, the bill’s most important feature was also its most contentious: the ability to enforce judicial “site-blocking orders.” A site-blocking order is a type of remedy sometimes referred to as a no-fault injunction. Under SOPA/PIPA, a court would have been permitted to issue orders that could be used to force a range of firms—from financial providers to ISPs—to cease doing business with or suspend the service of a website that hosted infringing content.

Under current U.S. law, even when a court finds that a site has willfully engaged in infringement, stopping the infringement can be difficult, especially when the parties and their facilities are located outside the country. While Section 512 of the Digital Millennium Copyright Act does allow courts to issue injunctions, there is ambiguity as to whether it allows courts to issue injunctions that obligate online service providers (“OSP”) not directly party to a case to remove infringing material.

Section 512(j), for instance, provides for issuing injunctions “against a service provider that is not subject to monetary remedies under this section.” The “not subject to monetary remedies under this section” language could be construed to mean that such injunctions may be obtained even against OSPs that have not been found at fault for the underlying infringement. But as Motion Picture Association President Stanford K. McCoy testified in 2020:

In more than twenty years … these provisions of the DMCA have never been deployed, presumably because of uncertainty about whether it is necessary to find fault against the service provider before an injunction could issue, unlike the clear no-fault injunctive remedies available in other countries.

But while no-fault injunctions for copyright infringement have not materialized in the United States, this remedy has been used widely around the world. In fact, more than 40 countries—including Denmark, Finland, France, India, England, and Wales—have enacted or are under some obligation to enact rules allowing for no-fault injunctions that direct ISPs to disable access to websites that predominantly promote copyright infringement. 

In short, precisely the approach to controlling piracy that SOPA/PIPA envisioned has been in force around the world over the last decade. This demonstrates that, if properly tailored, no-fault injunctions are an ideal tool for courts to use in the fight to combat piracy.

If anything, we should be using the anniversary of SOPA/PIPA as an opportunity to reflect on a missed opportunity. Congress should take this opportunity to amend Section 512 to grant U.S. courts authority to issue no-fault injunctions that require OSPs to block access to sites that willfully engage in mass infringement.

We can expect a decision very soon from the High Court of Ireland on last summer's Irish Data Protection Commission ("IDPC") decision that placed serious impediments in the way of transferring data across the Atlantic. That decision, coupled with the July 2020 Court of Justice of the European Union ("CJEU") decision to invalidate the Privacy Shield agreement between the European Union and the United States, has placed the future of transatlantic trade in jeopardy.

In 2015, the CJEU's Schrems decision invalidated the previously longstanding "safe harbor" agreement between the EU and U.S. that was meant to ensure data transfers between the two zones complied with EU privacy requirements. The CJEU later invalidated the Privacy Shield agreement that was created in response to Schrems. In its decision, the court reasoned that U.S. foreign intelligence laws like FISA Section 702 and Executive Order 12333—which give the U.S. government broad latitude to surveil data and offer foreign persons few rights to challenge such surveillance—rendered U.S. firms unable to guarantee the privacy protections of EU citizens' data.

The IDPC’s decision employed the same logic: if U.S. surveillance laws give the government unreviewable power to spy on foreign citizens’ data, then standard contractual clauses—an alternative mechanism for firms for transferring data—are incapable of satisfying the requirements of EU law.

The implications that flow from this are troubling, to say the least. In the worst case, laws like the CLOUD Act could leave a wide swath of U.S. firms practically incapable of doing business in the EU. In the slightly less bad case, firms could be forced to completely localize their data and disrupt the economies of scale that flow from being able to process global data in a unified manner. In any case, the costs of compliance will be massive.

But even if the Irish court upholds the IDPC’s decision, there could still be a path forward for the U.S. and EU to preserve transatlantic digital trade. EU Commissioner for Justice Didier Reynders and U.S. Commerce Secretary Gina Raimondo recently issued a joint statement asserting they are “intensifying” negotiations to develop an enhanced successor to the EU-US Privacy Shield agreement. One can hope the talks are both fast and intense.

It seems unlikely that the Irish High Court would simply overturn the IDPC's ruling. Instead, the IDPC's decision will likely be upheld, possibly with recommended modifications. But even in that case, there is a process that buys the U.S. and EU a bit more time before any transatlantic trade involving consumer data grinds to a halt.

After considering replies to its draft decision, the IDPC would issue final recommendations on the extent of the data-transfer suspensions it deems necessary. It would then need to harmonize its recommendations with the other EU data-protection authorities. Theoretically, that could occur in a matter of days, but practically speaking, it would more likely occur over weeks or months. Assuming we get a decision from the Irish High Court before the end of April, it puts the likely deadline for suspension of transatlantic data transfers somewhere between June and September.

That’s not great, but it is not an impossible hurdle to overcome and there are temporary fixes the Biden administration could put in place. Two major concerns need to be addressed.

  1. U.S. data collection on EU citizens needs to be proportional to the necessities of intelligence gathering. Currently, the U.S. intelligence agencies have wide latitude to collect a large amount of data.
  2. The ombudsperson created by the Privacy Shield agreement to administer foreign citizens' data requests was not sufficiently insulated from the political process, leaving EU citizens without an adequate redress mechanism.

As Alex Joel recently noted, the Biden administration has ample powers to effect many of these changes through executive action. After all, EO 12333 was itself a creation of the executive branch. Other changes necessary to shape foreign surveillance to be in accord with EU requirements could likewise arise from the executive branch.

Nonetheless, Congress should not take that as a cue for complacency. It is possible that even if the Biden administration acts, the CJEU could find some or all of the measures insufficient. As the Biden team works to put changes in place through executive order, Congress should pursue surveillance reform through legislation.

Theoretically, the above fixes should be possible; there is not much partisan rancor about transatlantic trade as a general matter. But time is short, and this should be a top priority on policymakers’ radars.

(note: edited to clarify that the Irish High Court is not reviewing SCCs directly and that the CLOUD Act would not impose legal barriers for firms, but practical ones).

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Kristian Stout is director of innovation policy for the International Center for Law & Economics.]

One of the themes that has run throughout this symposium has been that, throughout his tenure as both a commissioner and as chairman, Ajit Pai has brought consistency and careful analysis to the Federal Communications Commission (McDowell, Wright). The reflections offered by the various authors in this symposium make one thing clear: the next administration would do well to learn from the considered, bipartisan, and transparent approach to policy that characterized Chairman Pai’s tenure at the FCC.

The following are some of the more specific lessons that can be learned from Chairman Pai. In an important sense, he laid the groundwork for his successful chairmanship when he was still a minority commissioner. His thoughtful dissents were rooted in consistent, clear policy arguments—a practice that both charted how he would look at future issues as chairman and would help the public to understand exactly how he would approach new challenges before the FCC (McDowell, Wright).

One of the most public instances of Chairman Pai’s consistency (and, as it turns out, his bravery) was with respect to net neutrality. From his dissent in the Title II Order, through his commission’s Restoring Internet Freedom Order, Chairman Pai focused on the actual welfare of consumers and the factors that drive network growth and adoption. As Brent Skorup noted, “Chairman Pai and the Republican commissioners recognized the threat that Title II posed, not only to free speech, but to the FCC’s goals of expanding telecommunications services and competition.” The result of giving in to the Title II advocates would have been to draw the FCC into a quagmire of mass-media regulation that would ultimately harm free expression and broadband deployment in the United States.

Chairman Pai’s vision worked out (Skorup, May, Manne, Hazlett). Despite prognostications of the “death of the internet” because of the Restoring Internet Freedom Order, available evidence suggests that industry investment grew over Chairman Pai’s term. More Americans are connected to broadband than ever before.

Relatedly, Chairman Pai was a strong supporter of liberalizing media-ownership rules that long had been rooted in 20th century notions of competition (Manne). Such rules systematically make it harder for smaller media outlets to compete with large news aggregators and social-media platforms. As Geoffrey Manne notes: 

Consistent with his unwavering commitment to promote media competition… Chairman Pai put forward a proposal substantially updating the media-ownership rules to reflect the dramatically changed market realities facing traditional broadcasters and newspapers.

This was a bold move for Chairman Pai—in essence, he permitted more local concentration by, e.g., allowing the purchase of a newspaper by a local television station that previously would have been forbidden. By allowing such combinations, the FCC enabled failing local news outlets to shore up their losses and continue to compete against larger, better-resourced organizations. The rule changes are at issue in a case pending before the Supreme Court; should the court find for the FCC, the competitive outlook for local media would look much better thanks to Chairman Pai's vision.

Chairman Pai’s record on spectrum is likewise impressive (Cooper, Hazlett). The FCC’s auctions under Chairman Pai raised more money and freed more spectrum for higher value uses than any previous commission (Feld, Hazlett). But there is also a lesson in how subsequent administrations can continue what Chairman Pai started. Unlicensed use, for instance, is not free or costless in its maintenance, and Tom Hazlett believes that there is more work to be done in further liberalizing access to the related spectrum—liberalizing in the sense of allowing property rights and market processes to guide spectrum to its highest use:

The basic theme is that regulators do better when they seek to create new rights that enable social coordination and entrepreneurial innovation, rather than enacting rules that specify what they find to be the “best” technologies or business models.

And to a large extent this is the model that Chairman Pai set down, from the issuance of the 12 GHz NPRM to consider whether those spectrum bands could be opened up for wireless use, to the L-Band Order, where the commission worked hard to reallocate spectrum rights in ways that would facilitate more productive uses.

The controversial L-Band Order was another example of where Chairman Pai displayed both political acumen as well as an apolitical focus on improving spectrum policy (Cooper). Political opposition was sharp and focused after the commission finalized its order in April 2020. Nonetheless, Chairman Pai was deftly able to shepherd the L-Band Order and guarantee that important spectrum was made available for commercial wireless use.

As a native of Kansas, rural broadband rollout ranked highly in the list of priorities at the Pai FCC, and his work over the last four years is demonstrative of this pride of place (Hurwitz, Wright). As Gus Hurwitz notes, “the commission completed the Connect America Fund Phase II Auction. More importantly, it initiated the Rural Digital Opportunity Fund (RDOF) and the 5G Fund for Rural America, both expressly targeting rural connectivity.”

Further, other work, like the recently completed Rural Digital Opportunity Fund auction and the 5G fund provide the necessary policy framework with which to extend greater connectivity to rural America. As Josh Wright notes, “Ajit has also made sure to keep an eye out for the little guy, and communities that have been historically left behind.” This focus on closing the digital divide yielded gains in connectivity in places outside of traditional rural American settings, such as tribal lands, the U.S. Virgin Islands, and Puerto Rico (Wright).

But perhaps one of Chairman Pai’s best and (hopefully) most lasting contributions will be de-politicizing the FCC and increasing the transparency with which it operated. In contrast to previous administrations, the Pai FCC had an overwhelmingly bipartisan nature, with many bipartisan votes being regularly taken at monthly meetings (Jamison). In important respects, it was this bipartisan (or nonpartisan) nature that was directly implicated by Chairman Pai championing the Office of Economics and Analytics at the commission. As many of the commentators have noted (Jamison, Hazlett, Wright, Ellig) the OEA was a step forward in nonpolitical, careful cost-benefit analysis at the commission. As Wright notes, Chairman Pai was careful to not just hire a bunch of economists, but rather to learn from other agencies that have better integrated economics, and to establish a structure that would enable the commission’s economists to materially contribute to better policy.

We were honored to receive a post from Jerry Ellig just a day before he tragically passed away. As chief economist at the FCC from 2017-2018, he was in a unique position to evaluate past practice and participate in the creation of the OEA. According to Ellig, past practice tended to treat the work of the commission’s economists as a post-hoc gloss on the work of the agency’s attorneys. Once conclusions were reached, economics would often be backfilled in to support those conclusions. With the establishment of the OEA, economics took a front-seat role, with staff of that office becoming a primary source for information and policy analysis before conclusions were reached. As Wright noted, the Federal Trade Commission had adopted this approach. With the FCC moving to do this as well, communications policy in the United States is on much sounder footing thanks to Chairman Pai.

Not only did Chairman Pai push the commission in the direction of nonpolitical, sound economic analysis but, as many commentators note, he significantly improved the process at the commission (Cooper, Jamison, Lyons). Chief among his contributions was making it a practice to publish proposed orders weeks in advance, breaking with past traditions of secrecy around draft orders, and thereby giving the public an opportunity to see what the commission intended to do.

Critics of Chairman Pai’s approach to transparency feared that allowing more public view into the process would chill negotiations between the commissioners behind the scenes. But as Daniel Lyons notes, the chairman’s approach was a smashing success:

The Pai era proved to be the most productive in recent memory, averaging just over six items per month, which is double the average number under Pai’s immediate predecessors. Moreover, deliberations were more bipartisan than in years past: Nathan Leamer notes that 61.4% of the items adopted by the Pai FCC were unanimous and 92.1% were bipartisan compared to 33% and 69.9%, respectively, under Chairman Wheeler.

Other reforms from Chairman Pai helped open the FCC to greater scrutiny and a more transparent process, including limiting staff editorial privileges over an order’s text and introducing the use of a simple “fact sheet” to explain orders (Lyons).

One of the most interesting insights into the character of Chairman Pai, I found, was his willingness to reverse course and take risks to ensure that the FCC promoted innovation instead of obstructing it by relying on received wisdom (Nachbar). For instance, although he was initially skeptical of the prospects of SpaceX introducing broadband through its low-Earth-orbit satellite system, under Chairman Pai the Starlink beta program was included in the RDOF auction. It is not clear whether this was a good bet, Thomas Nachbar notes, but it was a statement both of the chairman’s willingness to change his mind and of his refusal to allow policy to remain in a comfortable zone that excludes potential innovation.

The next chair has an awfully big pair of shoes (or one oversized coffee mug) to fill. Chairman Pai established an important legacy of transparency and process improvement, as well as a commitment to careful economic analysis in the business of the agency. We will all be well-served if future commissions follow in his footsteps.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Kristian Stout is director of innovation policy for the International Center for Law & Economics.]

Ajit Pai will step down from his position as chairman of the Federal Communications Commission (FCC) effective Jan. 20. Beginning Jan. 15, Truth on the Market will host a symposium exploring Pai’s tenure, with contributions from a range of scholars and practitioners.

As we ponder the changes to FCC policy that may arise with the next administration, it’s also a timely opportunity to reflect on the chairman’s leadership at the agency and his influence on telecommunications policy more broadly. Indeed, the FCC has faced numerous challenges and opportunities over the past four years, with implications for a wide range of federal policy and law. Our symposium will offer insights into numerous legal, economic, and policy matters of ongoing importance.

Under Pai’s leadership, the FCC took on key telecommunications issues involving spectrum policy, net neutrality, 5G, broadband deployment, the digital divide, and media ownership and modernization. Broader issues faced by the commission include agency process reform, including a greater reliance on economic analysis; administrative law; federal preemption of state laws; national security; competition; consumer protection; and innovation, including the encouragement of burgeoning space industries.

This symposium asks contributors for their thoughts on these and related issues. We will explore a rich legacy, with many important improvements that will guide the FCC for some time to come.

Truth on the Market thanks all of these excellent authors for agreeing to participate in this interesting and timely symposium.

Look for the first posts starting Jan. 15.

We’re delighted to welcome Jonathan M. Barnett as our newest blogger at Truth on the Market.

Jonathan Barnett is director of the USC Gould School of Law Media, Entertainment and Technology Law Program. Barnett specializes in intellectual property, contracts, antitrust, and corporate law. He has published in the Harvard Law Review, Yale Law Journal, Journal of Legal Studies, Review of Law & Economics, Journal of Corporation Law and other scholarly journals.

He joined USC Law in fall 2006 and was a visiting professor at New York University School of Law in fall 2010. Prior to academia, Barnett practiced corporate law as a senior associate at Cleary Gottlieb Steen & Hamilton in New York, specializing in private equity and mergers-and-acquisitions transactions. He was also a visiting assistant professor at Fordham University School of Law in New York. A magna cum laude graduate of the University of Pennsylvania, Barnett received an MPhil from Cambridge University and a JD from Yale Law School.

You can find his scholarship at SSRN.

As the initial shock of the COVID quarantine wanes, the Techlash waxes again, bringing with it a raft of renewed legislative proposals to take on Big Tech. Prominent among these is the EARN IT Act (the Act), a bipartisan proposal to create a new national commission responsible for proposing best practices designed to mitigate the proliferation of child sexual abuse material (CSAM) online. The Act’s proposal is seemingly simple, but its fallout would be anything but.

Section 230 of the Communications Decency Act currently provides online services like Facebook and Google with a robust protection from liability that could arise as a result of the behavior of their users. Under the Act, this liability immunity would be conditioned on compliance with “best practices” that are produced by the new commission and adopted by Congress.  

Supporters of the Act believe that the best practices are necessary to ensure that platform companies effectively police CSAM. Critics of the Act, by contrast, assert that it is merely a backdoor for law enforcement to achieve its long-sought goal of defeating strong encryption.

The truth of EARN IT—and how best to police CSAM—is more complicated. Ultimately, Congress needs to be very careful not to exceed its institutional capabilities by allowing the new commission to venture into areas beyond its (and Congress’s) expertise.

More can be done about illegal conduct online

On its face, conditioning Section 230’s liability protections on certain platform conduct is not necessarily objectionable. There is undoubtedly some abuse of services online, and it is also entirely possible that the incentives for finding and policing CSAM are not perfectly aligned with other conflicting incentives private actors face. It is, of course, first the responsibility of the government to prevent crime, but it is also consistent with past practice to expect private actors to assist such policing when feasible. 

By the same token, an immunity shield is necessary in some form to facilitate user-generated communications and content at scale. Certainly in 1996 (when Section 230 was enacted), firms facing conflicting liability standards required some degree of immunity in order to launch their services. Today, the control of runaway liability remains important as billions of user interactions take place on platforms daily. Relatedly, the liability shield also operates as a way to promote Good Samaritan self-policing—a measure that surely helps avoid actual censorship by governments, as opposed to the spurious claims made by those like Senator Hawley.

In this context, the Act is ambiguous. It creates a commission composed of a fairly wide cross-section of interested parties—from law enforcement, to victims, to platforms, to legal and technical experts—to recommend best practices. That hardly seems a bad thing, as more minds considering how to design a uniform approach to controlling CSAM would be beneficial—at least theoretically.

In practice, however, there are real pitfalls to imbuing any group of such thinkers—especially ones selected by political actors—with an actual or de facto final say over such practices. Much of this domain will continue to be mercurial, the rules necessary for one type of platform may not translate well into general principles, and it is possible that a public board will make recommendations that quickly tax Congress’s institutional limits. To the extent possible, Congress should be looking at ways to encourage private firms to work together to develop best practices in light of their unique knowledge about their products and their businesses. 

In fact, Facebook has already begun experimenting with an analogous idea in its recently announced Oversight Board. There, Facebook is developing a governance structure by giving the Oversight Board the ability to review content moderation decisions on the Facebook platform. 

Insofar as the commission created by the Act works to create best practices that align the incentives of firms with the removal of CSAM, it has a lot to offer. Yet a better solution than the Act would be for Congress to establish policy that works with the private processes already in development.

Short of a more ideal solution, it is critical, however, that the Act establish the boundaries of the commission’s remit very clearly and keep it from venturing into technical areas outside of its expertise. 

The complicated problem of encryption (and technology)

The Act has a major problem insofar as the commission has a fairly open-ended remit to recommend best practices, and this breadth can ultimately result in dangerous unintended consequences.

The Act calls for only two of the 19 members to have some form of computer-science background. A panel of non-technical experts should not design any technology—encryption or otherwise.

To be sure, there are some interesting proposals to facilitate access to encrypted materials (notably, multi-key escrow systems and self-escrow). But such recommendations are beyond the scope of what the commission can responsibly proffer.
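
To make the “multi-key escrow” concept a bit more concrete, here is a minimal, purely illustrative sketch in Python of the general idea: a decryption key is split into shares held by separate custodians, and only the cooperation of every custodian can recover it. This is not any specific proposal before Congress or the commission; the function names and parameters are hypothetical, and real escrow designs involve far more than this.

```python
from __future__ import annotations

import secrets
from functools import reduce


def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))


def split_key(key: bytes, custodians: int) -> list[bytes]:
    """Split `key` into one share per custodian; XOR-ing all shares reproduces the key."""
    shares = [secrets.token_bytes(len(key)) for _ in range(custodians - 1)]
    final_share = reduce(xor_bytes, shares, key)
    return shares + [final_share]


def recover_key(shares: list[bytes]) -> bytes:
    """Recombine every share; missing even one leaves the key unrecoverable."""
    return reduce(xor_bytes, shares)


if __name__ == "__main__":
    key = secrets.token_bytes(32)        # e.g., a symmetric encryption key
    shares = split_key(key, 3)           # three independent custodians each hold one share
    assert recover_key(shares) == key    # only full cooperation recovers the key
```

The point of the sketch is simply that escrow schemes are cryptographic design decisions with real security consequences, which is exactly why a largely non-technical commission should not be the body specifying them.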

If Congress proceeds with the Act, it should put an explicit prohibition in the law preventing the new commission from recommending rules that would interfere with the design of complex technology, such as by recommending that encryption be weakened to provide access to law enforcement, mandating particular network architectures, or modifying the technical details of data storage.

Congress is right to consider whether there is better policy to be had for aligning the incentives of the platforms with the deterrence of CSAM—including possible conditional access to Section 230’s liability shield. But just because there is a policy balance to be struck between policing CSAM and platform liability protection doesn’t mean that the new commission is suited to vetting, adopting, and updating technical standards; it clearly isn’t. Conversely, to the extent that encryption and similarly complex technologies could be subject to broad policy change, it should be through an explicit and considered democratic process, and not as a by-product of the Act.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Kristian Stout (associate director, International Center for Law & Economics).]

The public policy community’s infatuation with digital privacy has grown by leaps and bounds since the enactment of GDPR and the CCPA, but COVID-19 may leave the most enduring mark on the actual direction that privacy policy takes. As the pandemic and associated lockdowns first began, there were interesting discussions cropping up about the inevitable conflict between strong privacy fundamentalism and the pragmatic steps necessary to adequately trace the spread of infection. 

Emblematic of this controversy is the Apple/Google contact-tracing system, software developed for smartphones to assist with the identification of individuals and populations that have likely been exposed to the virus. The debate sparked by the Apple/Google proposal highlights what we miss when we treat “privacy” (however defined) as an end in itself, an end that must necessarily trump other concerns.

The Apple/Google contact tracing efforts

Apple/Google are doing yeoman’s work attempting to produce a useful contact-tracing API given the headwinds of privacy advocacy they face. Apple’s webpage describing its new contact-tracing system is a testament to the extent to which strong privacy protections are central to its efforts. Indeed, those privacy protections are in the very name of the service: the “Privacy-Preserving Contact Tracing” program. But, vitally, the utility of the Apple/Google API is ultimately a function of its efficacy as a tracing tool, not of how well it protects privacy.

Apple/Google — despite the complaints of some states — are rolling out their COVID-19-tracking services with notable limitations. Most prominently, the APIs will not allow collection of location data and will only function when users explicitly opt in. This last point is important because there is evidence that opt-in requirements, by their nature, tend to reduce the flow of information in a system, and when we are considering tracing solutions to an ongoing pandemic, surely less information is not optimal. Further, all of the data collected through the API will be anonymized, preventing even healthcare authorities from identifying particular infected individuals.
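
To illustrate how tracing can work without any location data at all, here is a highly simplified, hypothetical sketch in Python of a rotating-identifier scheme of the general sort Apple/Google describe. It is emphatically not their actual API; every class, function, and parameter name here is invented for illustration, and the real system involves cryptographic key derivation, time windows, and distribution infrastructure omitted here.

```python
from __future__ import annotations

import secrets

# Hypothetical illustration only -- not the actual Apple/Google API.
# Opted-in devices broadcast short-lived random tokens over Bluetooth, remember
# the tokens they hear, and check exposure locally. No names, phone numbers, or
# location data ever leave the device.


class Device:
    def __init__(self, opted_in: bool = False):
        self.opted_in = opted_in                 # tracing runs only if the user opts in
        self.own_tokens: list[bytes] = []        # tokens this device has broadcast
        self.heard_tokens: set[bytes] = set()    # tokens overheard from nearby devices

    def broadcast_token(self) -> bytes | None:
        """Emit a fresh random token; it rotates often, so it cannot track a person."""
        if not self.opted_in:
            return None
        token = secrets.token_bytes(16)
        self.own_tokens.append(token)
        return token

    def hear(self, token: bytes | None) -> None:
        """Record a token received over Bluetooth from a device nearby."""
        if self.opted_in and token is not None:
            self.heard_tokens.add(token)

    def check_exposure(self, published_tokens: set[bytes]) -> bool:
        """Matching happens on-device; authorities never learn who was exposed."""
        return bool(self.heard_tokens & published_tokens)


# Usage: two opted-in devices meet; one user later reports a positive diagnosis.
a, b = Device(opted_in=True), Device(opted_in=True)
b.hear(a.broadcast_token())
published = set(a.own_tokens)          # shared anonymously after a diagnosis
print(b.check_exposure(published))     # True: b was near a diagnosed user
```

Even this toy version makes the tradeoff visible: because matching is done on the device against anonymous tokens, health authorities get exposure alerts but lose the richer data (identity, location, population-level maps) that a less privacy-protective design could provide.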

These restrictions prevent the tool from being as effective as it could be, but it’s not clear how Apple/Google could do any better given the political climate. For years, the Big Tech firms have been villainized by privacy advocates who accuse them of spying on kids and cavalierly disregarding consumer privacy as they treat individuals’ data as just another business input. The problem with this approach is that, in the midst of a generational crisis, our best tools are being excluded from the fight. Which raises the question: perhaps we have privacy all wrong?

Privacy is one value among many

The U.S. constitutional order explicitly protects our privacy as against state intrusion in order to guarantee, among other things, fair process and equal access to justice. But this strong presumption against state intrusion—far from establishing a fundamental or absolute right to privacy—only accounts for part of the privacy story. 

The Constitution’s limit is a recognition of the fact that we humans are highly social creatures and that privacy is one value among many. Properly conceived, privacy protections are themselves valuable only insofar as they protect other things we value. Jane Bambauer explored some of this in an earlier post, where she characterized privacy as, at best, an “instrumental right” — that is, a tool used to promote other desirable social goals such as “fairness, safety, and autonomy.”

Following from Jane’s insight, privacy — as an instrumental good — is something that can have both positive and negative externalities, and needs to be enlarged or attenuated as its ability to serve instrumental ends changes in different contexts. 

According to Jane:

There is a moral imperative to ignore even express lack of consent when withholding important information that puts others in danger. Just as many states affirmatively require doctors, therapists, teachers, and other fiduciaries to report certain risks even at the expense of their client’s and ward’s privacy …  this same logic applies at scale to the collection and analysis of data during a pandemic.

Indeed, dealing with externalities is one of the most common and powerful justifications for regulation, and an extreme form of “privacy libertarianism” —in the context of a pandemic — is likely to be, on net, harmful to society.

Which brings us back to the efforts of Apple/Google. Even if those firms wanted to risk the ire of privacy absolutists, it’s not clear that they could do so without incurring tremendous regulatory risk, uncertainty, and a popular backlash. As statutory matters, the CCPA and the GDPR chill experimentation in the face of potentially crippling fines, while the FTC Act’s Section 5 prohibition on “unfair or deceptive” practices is open to interpretations that could result in existentially damaging outcomes. Further, some polling suggests that the public appetite for contact tracing is not particularly high – though, as is often the case, such pro-privacy poll results rarely give appropriate weight to the tradeoffs required.

As a general matter, it’s important to think about the value of individual privacy, and how best to optimally protect it. But privacy does not stand above all other values in all contexts. It is entirely reasonable to conclude that, in a time of emergency, if private firms can devise more effective solutions for mitigating the crisis, they should have more latitude to experiment. Knee-jerk preferences for an amorphous “right of privacy” should not be used to block those experiments.

Much as with the Cosmic Turtle, it’s tradeoffs all the way down. Most of the U.S. is in lockdown, and while we vigorously protect our privacy, we risk frustrating the creation of tools that could put a light at the end of the tunnel. We are, in effect, trading liberty and economic self-determination for privacy.

Once the worst of the Covid-19 crisis has passed — hastened possibly by the use of contact tracing programs — we can debate the proper use of private data in exigent circumstances. For the immediate future, we should instead be encouraging firms like Apple/Google to experiment with better ways to control the pandemic. 

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Kristian Stout (associate director, International Center for Law & Economics).]


The ongoing pandemic has been an opportunity to explore different aspects of the human condition. For myself, I have learned that, despite a deep commitment to philosophical (neo- or classical) liberalism, at heart I am a pragmatist. I would prefer a society that optimizes for more individual liberty, but I am emphatically not someone who would even entertain the idea of using crises to advance my agenda when it is not clearly in service of ameliorating immediate problems.

Sadly, I have also learned that there are those who are not similarly pragmatic, and who are willing to advance their ideological agenda come hell or high water. In this regard, I was disappointed yesterday to see the Gurry IP/COVID Letter being passed around Twitter, calling for widespread, worldwide interference with the property rights of IPR holders.

The letter calls for a scattershot set of “remedies” to the crisis that would open access to copyright- and patent-protected inventions and content, including (among other things): 

  • voluntary licensing and non-enforcement of IP;
  • abrogation of IPR by WIPO members using the “flexibility” in the international IP regime;
  • the removal of geographical restrictions on IP licenses;
  • forcing patents into COVID-19 patent pools; and 
  • the implementation of compulsory licensing. 

And, unlike many prior efforts to push the envelope on weakening IP protections, the Gurry Letter also calls for measures that would weaken trade secrets and expose confidential business information in order to “achieve universal and equitable access to COVID-19 medicines and medical technologies as soon as reasonably possible.”

Notably, nothing in the letter suggests that any of these measures should be regarded as temporary.

We all want treatments for infection, vaccines for prevention, and ample supply of personal protective equipment as soon as possible, but if all the demands in this letter were met, it would do little to increase the supply of any of these things in the short term, while undermining incentives to develop new treatments, vaccines and better preventative tools in the long run. 

Fundamentally, the letter reflects a willingness to use the COVID-19 pandemic to pursue an agenda that lacks merit and would be dismissed in the normal course of affairs.

What is most certainly the case is that we need more innovation now, and we need it faster. There is no reason to believe that mandating open source status or forcing compulsory licensing on the firms doing that work will encourage that work to proceed with all due haste—and every indication that the opposite is the case. 

Where there are short term shortages of certain products that might be produced in much larger quantities by relaxing IP, companies are responding by doing just that—voluntarily. But this is fundamentally different from the imposition of unlimited compulsory licenses.

Further, private actors have displayed an impressive willingness to provide free or low cost access to technologies and content—without government coercion. The following is a short list of some of the content and inventions that have been opened up:

Culture, Fitness & Entertainment

  • “HBO Will Stream 500 Hours of Free Programming, Including Full Seasons of ‘Veep,’ ‘The Sopranos,’ ‘Silicon Valley’”
  • Dozens (or more) of artists, both famous and lesser known, are releasing free back-catalog performances or are taking part in free live-streaming sessions on social media platforms. Notably, viewers are often welcome to donate or “pay what they want” to help support these artists (more on this below).
  • The NBA, NFL, and NHL are offering free access to their back catalogue of games.
  • A large array of music-production software can now be used free on extended trials for three months (or completely free and unlimited in some cases).
  • CBS All Access expanded its free trial period.
  • Neil Gaiman and Harper Collins granted permission to Levar Burton to livestream readings from their catalogs.
  • Disney is releasing movies early onto its (paid) Disney+ services.
  • Gold’s Gym is providing free access to its app-based workouts.
  • The Met is streaming free recordings of its Live in HD series.
  • The Seattle Symphony is offering free access to some of its recorded performances.
  • The UK National Theater is streaming some of its most popular plays for free.
  • Andrew Lloyd Webber is streaming his shows online for free.

Science, News & Education

  • Scholastic released free content intended to help educate students stuck at home while sheltering in place.
  • Nearly 100 academic journals, societies, institutes, and companies signed a commitment to make research and data on COVID-19 freely available, at least for the duration of the outbreak.
  • The Atlantic lifted paywall restrictions on access to its COVID-19-related content.
  • The New England Journal of Medicine is allowing free access to COVID-19-related resources.
  • The Lancet allows free access to research it publishes on COVID-19.
  • All material published by the BMJ on the coronavirus outbreak is freely available.
  • The AAAS-published Science allows free access to its coronavirus research and commentary.
  • Elsevier gave full access to its content on its COVID-19 Information Center for PubMed Central and other public health databases.
  • The American Economic Association announced open access to all of its journals until the end of June.
  • JSTOR expanded free access to some of its scholarship.

Medicine & Technology

  • The Global Center for Medical Design is developing license-free PPE designs that can be quickly implemented by manufacturers.
  • Medtronic published “design specifications for the Puritan Bennett 560 (PB560) to allow innovators, inventors, start-ups, and academic institutions to leverage their own expertise and resources to evaluate options for rapid ventilator manufacturing.” It additionally provided software licenses for this technology.
  • AbbVie announced it won’t enforce its patent rights for Kaletra—a drug that may provide treatment for COVID-19 infections. Israel had earlier indicated it would impose compulsory licenses for the drug, but AbbVie is allowing use worldwide. The company, moreover, had donated supplies of the drug to China earlier in the year when the outbreak first became apparent.
  • Google is working with health researchers to provide anonymized and aggregated user location data. 
  • Cisco has extended free licenses and expanded usage counts at no extra charge for three of its security technologies to help strained IT teams and partners ready themselves and their clients for remote work.
  • Microsoft is offering free subscriptions to its Teams product for six months.
  • Zoom expanded its free access and other limitations for educational institutions around the world.

Incentivize innovation, now more than ever

In addition to undermining the short-term incentives to draw more research resources into the fight against COVID-19, using this crisis to weaken the IP regime will cause long-term damage to the economies of the world. We still will need creators making new cultural products and researchers developing new medicines and technologies; weakening the IP regime will undermine the delicate set of incentives that cultural and scientific production depends upon. 

Any clear-eyed assessment of the broader course of the pandemic and the response to it gives the lie to the notion that IP rights are oppressive or counterproductive. It is the pharmaceutical industry—hated as it may be in some quarters—that will be able to marshal the resources and expertise to develop treatments and vaccines. And it is artists and educators producing cultural content who (theoretically) depend on the licensing revenues of their creations for survival.

In fact, one of the things that the pandemic has exposed is the fragility of artists’ livelihoods and the callousness with which they are often treated. Shortly after the lockdowns began in the US, the well-established rock musician David Crosby said in an interview that, if he could not tour this year, he would face tremendous financial hardship. 

As unfortunate as that may be for Crosby, a world-famous musician, imagine how much harder it is for struggling musicians who can hardly hope to achieve a fraction of Crosby’s success for their own tours, let alone for licensing. If David Crosby cannot manage well for a few months on the revenue from his popular catalog, what hope do small artists have?

Indeed, the flood of unable-to-tour artists who are currently offering “donate what you can” streaming performances is a symptom of the destructive assault on IPR exemplified in the letter. For decades, these artists have been told that they can only legitimately make money through touring. Although the potential to actually make a living while touring is possibly out of reach for many or most artists, those who had been scraping by have now been brought to the brink of ruin as the ability to tour is taken away.

There are certainly ways the various IP regimes can be improved (like, for instance, figuring out how to help creators make a living from their creations), but now is not the time to implement wishlist changes to an otherwise broadly successful rights regime. 

And, critically, there is a massive difference between achieving wider distribution of intellectual property voluntarily and doing so through government fiat. When done voluntarily, the IP owner determines the contours and extent of “open sourcing,” so she can tailor increased access to her own needs (including the need to eat and pay rent). In some cases, this may mean providing unlimited, completely free access, but in other cases—where the particular inventor or creator has a different set of needs and priorities—it may be something less than completely open access. When a rightsholder opts to “open source” her property voluntarily, she still retains the right to govern future use (i.e., once the pandemic is over) and is able to plan for reductions in revenue and how to manage future return on investment.

Should the need arise, our lawmakers can consider whether a particular piece of property is required for the public good. Otherwise, as responsible individuals, we should restrain ourselves from trying to capitalize on the current crisis to ram through our policy preferences.