Systemic Risk and Copyright in the EU AI Act

The European Parliament’s approval last week of the AI Act marked a significant milestone in the regulation of artificial intelligence. While the law’s final text is less alarming than what was initially proposed, it nonetheless includes ambiguities that regulators could exploit in ways that would hinder innovation in the EU.

Among the key features emerging from the legislation are its introduction of “general purpose AI” (GPAI) as a regulatory category and the ways that GPAI models might interact with copyright rules. Moving forward in what is rapidly becoming a global market for generative-AI services, it also bears reflecting on how the AI Act’s copyright provisions contrast with current U.S. copyright law.

Currently, U.S. copyright law may appear to offer a more permissive environment for AI training, while posing challenges for rightsholders who want to restrict the use of their creative works as training inputs for AI systems. Nevertheless, the U.S. framework may also prove more flexible, in that Congress can (at least in theory) modify it to allow for incremental adjustments. Such tweaks could promote negotiations between rightsholders and AI developers by fostering markets for AI-generated outputs that could, in turn, offer compensation to rightsholders.

This approach contrasts with the EU’s AI Act, which risks cementing the current dynamic between rightsholders and AI producers. The act’s provisions may offer more immediate protection for rightsholders, but this rigidity could stifle the evolution of mutually beneficial markets. Therefore, while EU rightsholders might currently enjoy more favorable terms, the adaptable nature of the U.S. legal system could ultimately yield more innovative solutions that would better satisfy stakeholders in the long run.

Systemic Risk

The AI Act suggests that GPAI poses some degree of “systemic risk,” defined in Article 3(65) as: 

[A] risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain.

But rather than tracking the longstanding notion of “systemic risk” used in the financial sector to refer to the risk of cascading failures, the AI Act’s definition bears closer resemblance to the “Hand formula” of U.S. tort law. Derived from the case United States v. Carroll Towing Co., the Hand formula is a means to determine whether a party has acted negligently by failing to take appropriate precautions.

The formula weighs the burden of taking precautions (B) against the probability of harm (P), multiplied by the severity of the potential harm (L). If the burden of precautions is less than the expected harm (B < P × L), then a party that failed to take those precautions may be found negligent.
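
To make the comparison concrete, the following minimal sketch (in Python, with purely illustrative figures that are not drawn from the case or from the act) shows the calculation the Hand formula prescribes:

# Hand formula: a precaution is legally required when its burden B is less
# than the expected harm it would prevent, i.e., when B < P * L.
def breach_of_duty(burden: float, probability: float, loss: float) -> bool:
    return burden < probability * loss

# Illustrative example: a $1,000 precaution against a 1% chance of a $500,000
# harm. The expected harm is $5,000, so forgoing the precaution is negligent.
print(breach_of_duty(burden=1_000, probability=0.01, loss=500_000))  # True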

The designation of an AI system as posing a “systemic risk” is based on an assessment of the likelihood and severity of potential negative effects on public health, safety, security, fundamental rights, and society as a whole. This assessment—like the Hand formula—involves a balancing of factors to determine whether the risks posed by an AI system are acceptable or require additional regulatory intervention. Also like the Hand formula, the “systemic risk” designation appears to contemplate systems operating at scale that could pose very minor risks in any particular case, but that aggregate harms in a meaningful way.

There are, however, some key differences between the two concepts. The Hand formula is applied on a case-by-case basis in tort law to determine whether a specific party has acted negligently in a particular situation. By contrast, the AI Act’s “systemic risk” designation is a broader regulatory classification that applies to entire categories of AI systems based on their potential for widespread harm. It thus creates a presumption of risk wherever a large-scale GPAI model is operating.

Moreover, while the Hand formula focuses on the actions of individual parties, the “systemic risk” designation places the burden on AI providers to proactively address and mitigate potential risks associated with their systems. This would appear to invite myriad opportunities for unwarranted regulatory intervention, with potentially massive unintended consequences. As usual, whether this threat of harmful regulation will ultimately manifest comes down to how the law is implemented.

The AI Act’s Copyright Requirements

The AI Act imposes several copyright-related obligations on GPAI providers. As Andres Guadamuz of the University of Sussex notes in his analysis:

The main provision for GPAI models regarding copyright can be found in Art 53, under the obligations for providers of GPAI models. This imposes transparency obligations that include the following:

  • Draw up and keep up-to-date technical documentation about the model’s training. This should include, amongst others, its purpose, the computational power it consumes, and details about the data used in training.
  • Draw up and keep up-to-date technical documentation for providers adopting the model. This documentation should enable providers to comprehend the model’s limitations while respecting trade secrets and other intellectual property rights. It can encompass a range of technical data, including the model’s interaction with hardware and software not included in the model itself.
  • “put in place a policy to respect Union copyright law in particular to identify and respect, including through state of the art technologies, the reservations of rights expressed pursuant to Article 4(3) of Directive (EU) 2019/790”.
  • “draw up and make publicly available a sufficiently detailed summary about the content used for training of the general-purpose AI model, according to a template provided by the AI Office”.

Particularly relevant, according to Guadamuz, is the interaction between the exception for “Text and Data Mining” (TDM) in the Digital Single Market Directive and the potential for AI training:

Firstly, the requirement to establish policies that respect copyright essentially serves as a reminder to abide by existing laws. More crucially, however, providers are mandated to implement technologies enabling them to honour copyright holders’ opt-outs. This is due to Article 4 introducing a framework for utilising technological tools to manage opt-outs and rights reservations, good news for the providers of such technologies. Additionally, it now appears unequivocally clear that the exceptions for TDM in the DSM Directive include AI training, as it is specified in the AI Act. The clarification is needed because there were some doubts that TDM covered AI training, but its inclusion in a legal framework specifically addressing AI training suggests that the TDM exception indeed covers it.

This suggests complex interactions among GPAI producers (who need to train their models on large corpuses of text, images, audio, and/or video); rightsholders (who will enjoy an opt-out entitlement in the EU); and the producers of technical measures to facilitate opt-outs. 
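
Neither the AI Act nor the DSM Directive prescribes a particular opt-out protocol, so what follows is only a sketch of one commonly discussed machine-readable mechanism: checking a site’s robots.txt before adding a page to a training corpus. The crawler name here is a hypothetical placeholder, and real opt-out handling would likely need to consult additional reservation signals as well.

# One possible (non-prescribed) opt-out check: consult robots.txt before
# including a URL in a training corpus. Uses only the Python standard library.
from urllib.parse import urlsplit
from urllib.robotparser import RobotFileParser

CRAWLER_NAME = "ExampleTrainingBot"  # hypothetical user-agent for an AI crawler

def may_use_for_training(url: str) -> bool:
    # Returns False when the site's robots.txt disallows this crawler for the URL.
    parts = urlsplit(url)
    robots = RobotFileParser(f"{parts.scheme}://{parts.netloc}/robots.txt")
    robots.read()
    return robots.can_fetch(CRAWLER_NAME, url)

if __name__ == "__main__":
    print(may_use_for_training("https://example.com/some-article"))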

Moreover, the AI Act’s transparency requirements could, for better or worse, create future legal exposure for GPAI producers. As Guadamuz notes: “A recurring theme in ongoing copyright infringement cases has been the use of training content disclosure by plaintiffs; those who have disclosed training data have tended to be on the receiving end of suits.”

Another issue the act raises concerns the question of so-called “deepfakes,” which have proven particularly contentious in the United States. One concern expressed on this side of the Atlantic is that banning deepfakes could hamper creators’ ability to make legitimate replicas of individuals’ likenesses that—while meeting the technical definition of a deepfake—are used for new artistic purposes, such as a biopic. 

The AI Act addresses the regulation of deepfakes through its provisions on transparency obligations for certain AI systems. Article 50 requires providers and deployers of AI systems that generate or manipulate image, audio, or video content to disclose when the content has been artificially generated or manipulated. This obligation applies to deepfake content, defined as:

AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful.

The act provides an exception for content that “forms part of an evidently artistic, creative, satirical, fictional or analogous work or programme.” Such content would, however, still need to include disclosures about any deepfakes that are present “in an appropriate manner that does not hamper the display or enjoyment of the work.”

This provision is notable in that it would appear to create a means to identify AI-generated content, which could have implications for the copyrightability of such content in jurisdictions that impose restrictions on AI authorship based on human-author requirements. But it’s unclear how broadly the “artistic, creative” and “satirical” exceptions will be interpreted. There almost certainly are many other benign uses of deepfakes that could run afoul of the act but that should nonetheless be permitted.
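
The act does not fix a technical format for that machine-readable identification, so the sketch below is only one conceivable shape for a disclosure record, written as a JSON “sidecar” file next to the generated media. The field names are illustrative assumptions rather than terms taken from the act or from any existing standard.

# Illustrative only: a JSON "sidecar" disclosure record for AI-generated media.
# The AI Act requires disclosure, but it does not mandate this (or any) format.
import json
from datetime import datetime, timezone
from pathlib import Path

def write_disclosure(media_path: str, model_name: str, artistic_work: bool) -> Path:
    record = {
        "ai_generated": True,
        "generator": model_name,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        # Content under the "artistic, creative, satirical, fictional or
        # analogous" carve-out still needs an unobtrusive disclosure.
        "artistic_exception_claimed": artistic_work,
    }
    sidecar = Path(f"{media_path}.disclosure.json")
    sidecar.write_text(json.dumps(record, indent=2))
    return sidecar

# e.g., write_disclosure("biopic_scene.mp4", "hypothetical-video-model", True)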

Implications for GPAI Development

As noted earlier, the AI Act’s impact on GPAI will depend largely on how it is implemented. At this point, little can be said with certainty regarding what effects its GPAI provisions will have on producers of large models. It’s probably fair to assume, however, that there will be a major scramble among producers to stand up compliance mechanisms.

Drawing from the experience of the General Data Protection Regulation (GDPR), there is concern that the “systemic risk” category could become a significant lever for regulators to intervene in how firms like Mistral and Anthropic develop and release their products. Ideally, AI deployment should not be blocked in the EU absent evidence of tangible harms, as European citizens stand to gain from developing and accessing cutting-edge tools.

In the U.S. context, copyright law’s “fair use” exemption has become a bone of contention in a growing body of litigation. I remain skeptical that the fair-use defense is as clear cut as defendants currently appear to believe it is. As outlined in the International Center for Law & Economics’ (ICLE) submission to the U.S. Copyright Office, existing U.S. copyright law may not support the use of copyrighted material for training AI systems under fair use.

This does not, however, mean that copyright should stand in the way of AI development. There is a broad middle ground of legislative reforms that Congress could explore that would more appropriately balance protecting rightsholders’ interests and fostering the development of GPAI. Whether it will do so remains an open question.

As suggested in our Copyright Office submission, it appears the best path forward—on either side of the Atlantic—is to facilitate bargaining among rightsholders and AI producers to create a new kind of market. Indeed, excessive focus on the use of copyrighted work in AI training may ultimately just lead to unproductive negotiations.

While it is possible that U.S. copyright law will be amended or reinterpreted to provide greater flexibility for AI producers, the AI Act appears to create a stronger bulwark for rightsholders to protect their works against use by GPAI. One can hope that it will also provide sufficient flexibility to facilitate bargaining among the parties. If it fails in this respect, the EU risks hindering the development of AI within its borders.
