EU’s Compromise AI Legislation Remains Fundamentally Flawed

European Union (EU) legislators are now considering an Artificial Intelligence Act (AIA)—the original draft of which was published by the European Commission in April 2021—that aims to ensure AI systems are safe in a number of uses designated as “high risk.” One of the AIA’s big problems is that, as originally drafted, it would not be limited to AI at all; it would be sweeping legislation covering virtually all software. The governments of EU member states appear to have realized this and are trying to fix the proposal, but some pressure groups are pushing in the opposite direction.

While there can be reasonable debate about what constitutes AI, almost no one would intuitively consider most of the software covered by the original AIA draft to be artificial intelligence. Ben Mueller and I covered this in more detail in our report “More Than Meets The AI: The Hidden Costs of a European Software Law.” Among other issues, the proposal would seriously undermine the legitimacy of the legislative process: the public is told that a law is meant to cover one sphere of life, but it mostly covers something different. 

It also does not appear that the Commission drafters seriously considered the costs that would arise from imposing the AIA’s regulatory regime on virtually all software across a sphere of “high-risk” uses that include education, employment, and personal finance.

The following example illustrates how the AIA would work in practice: A school develops a simple logic-based expert system to assist in making admissions decisions. It could be as basic as a Microsoft Excel macro that checks whether a candidate lives in the school’s catchment area, based on the candidate’s postal code, by comparing the contents of one spreadsheet column with another.
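
For concreteness, here is a minimal sketch of such a check, written in Python rather than as an Excel macro (the postal codes, column layout, and applicant names are all hypothetical, invented for illustration). Under the AIA’s definitions, even this trivial lookup would arguably qualify as an “AI system”:

```python
# Hypothetical Python equivalent of the Excel macro described above.
# "Column B": postal codes the school treats as inside its catchment area.
CATCHMENT_CODES = {"GU2 7XH", "GU1 4LB", "GU2 9QQ"}  # invented codes

def in_catchment(postal_code: str) -> bool:
    """Return True if the applicant's postal code is in the catchment area."""
    return postal_code.strip().upper() in CATCHMENT_CODES

# "Column A": applicants and their postal codes.
applicants = [("Applicant 1", "GU2 7XH"), ("Applicant 2", "RH1 1AA")]

for name, code in applicants:
    print(f"{name}: {'in' if in_catchment(code) else 'out of'} catchment")
```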

Under the AIA’s current definitions, this would not only be an “AI system,” but also a “high-risk AI system” (because it is “intended to be used for the purpose of determining access or assigning natural persons to educational and vocational training institutions” – Annex III of the AIA). Hence, to use this simple Excel macro, the school would be legally required to, among other things:

  1. put in place a quality management system;
  2. prepare detailed “technical documentation”;
  3. create a system for logging and audit trails;
  4. conduct a conformity assessment (likely requiring costly legal advice);
  5. issue an “EU declaration of conformity”; and
  6. register the “AI system” in the EU database of high-risk AI systems.

This does not sound like proportionate regulation. 

Some governments of EU member states have been pushing for a narrower definition of an AI system, drawing a rebuke from the pressure groups Access Now and Algorithm Watch, which issued a statement effectively defending the “all-software” approach. For its part, the Council of the European Union, which represents member-state governments, unveiled a compromise text in November 2021 that changed the general provisions around the AIA’s scope (Article 3), but not the list of in-scope techniques (Annex I).

While the new definition appears slightly narrower, it remains overly broad and will create significant uncertainty. It is likely that many software developers and users will require costly legal advice to determine whether a particular piece of software in a particular use case is in scope or not. 

The “core” of the new definition is found in Article 3(1)(ii), according to which an AI system is one that “infers how to achieve a given set of human-defined objectives using learning, reasoning or modeling implemented with the techniques and approaches listed in Annex I.” This redefinition does precious little to solve the AIA’s original flaws. A legal inquiry focused on an AI system’s capacity for “reasoning” and “modeling” will tend either toward overinclusion or toward imagining a software reality that does not exist at all.

The revised text can still be interpreted so broadly as to cover virtually all software. Given that the list of in-scope techniques (Annex I) was not changed, any “reasoning” implemented with “Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems” (i.e., all software) remains in scope. In practice, the combined effect of those two provisions will be hard to distinguish from the original Commission draft. In other words, we still have an all-software law, not an AI-specific law. 
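
To illustrate how little it takes to fall within that Annex I language, consider the following hypothetical sketch (my own, not drawn from the Act): a few lines of ordinary Python that, described in Annex I’s terms, constitute a “knowledge base” processed by an “inference and deductive engine”:

```python
# Hypothetical sketch: ordinary conditional logic recast in Annex I's
# vocabulary. The rule below is a one-entry "knowledge base," and the
# loop that applies it is a "deductive engine" doing forward chaining.

facts = {"lives_in_catchment", "meets_age_requirement"}

# Each rule: if all premises hold, conclude the consequent.
rules = [
    ({"lives_in_catchment", "meets_age_requirement"}, "eligible_for_admission"),
]

# Forward chaining: keep applying rules until no new fact is derived.
derived_new_fact = True
while derived_new_fact:
    derived_new_fact = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            derived_new_fact = True

print("eligible_for_admission" in facts)  # True
```

Any developer would call this ordinary programming; under the compromise text, it is plausibly a regulated “reasoning” technique.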

The AIA deliberations highlight two basic difficulties in regulating AI. First, it is likely that many activists and legislators have in mind science-fiction scenarios of strong AI (or “artificial general intelligence”) when pushing for regulations that will apply in a world where only weak AI exists. Strong AI is AI that is at least equal to human intelligence and is therefore capable of some form of agency. Weak AI, by contrast, comprises software techniques that augment human processing of information. For as long as computer scientists have been thinking about AI, there have been serious doubts that software systems can ever achieve generalized intelligence or become strong AI.

Thus, what’s really at stake in regulating AI is regulating software-enabled extensions of human agency. But leaving aside the activists who explicitly do want controls on all software, lawmakers who promote the AIA have something in mind that is conceptually distinct from “software.” This raises the question of whether the “AI” lawmakers imagine they are regulating is actually a null set. In practice, these laws can only regulate the equivalent of Excel spreadsheets at scale, and lawmakers need to think seriously about how, and whether, they intervene. For such interventions to be deemed necessary, there should at least be quantifiable consumer harms that require redress. Focusing regulation on topics as broad as “AI” or “software” is almost certain to generate unacceptable unseen costs.

Second, even if we limit our concern to real, weak AI, settling on an accepted “scientific” definition will be a challenge. Lawmakers will inevitably include either too much or too little. Overly inclusive regulation may seem like a good way to “future proof” the rules, but such future-proofing comes at the cost of significant legal uncertainty. It will also come at the cost of making some uses of software too costly to be worthwhile.