[Image caption: Output of the LG Research AI to the prompt “artificial intelligence regulator”]
The emergence of ChatGPT and other artificial-intelligence systems has complicated the European Union’s efforts to implement its AI Act, mostly by challenging the act’s underlying assumptions. The proposed regulation seeks to govern a diverse and rapidly evolving AI landscape. In reality, however, there is no single thing that can be called “AI.” Rather, the category comprises various software tools that employ different methods to achieve different objectives. The EU’s attempt to bring such a disparate array of subjects under a common regulatory framework is therefore likely to be ill-suited to achieving its intended goals.
Overview of the AI Act
As proposed by the European Commission, the AI Act would regulate the use of AI systems that ostensibly pose risks to health, safety, and fundamental rights. The proposal defines AI systems broadly, reaching software developed not only with machine learning but also with logic- and knowledge-based and statistical approaches, and sorts them into risk tiers: unacceptable, high, and limited risk. Unacceptable-risk systems are prohibited outright, while high-risk systems are subject to strict requirements, including mandatory conformity assessments and documentation obligations. Limited-risk systems face transparency requirements, such as disclosing to users that they are interacting with an AI system.
As my colleague Mikolaj Barczentewicz has pointed out, however, the AI Act remains fundamentally flawed. Because it defines AI so broadly, the act would reach even ordinary general-purpose software, to say nothing of software that uses machine learning but poses no significant risk. On its plain terms, the AI Act could be read to encompass common office applications, spam filters, and recommendation engines, potentially imposing considerable compliance burdens on businesses for their use of objectively harmless software.
Understanding Regulatory Overaggregation
Regulatory overaggregation—that is, the grouping of a huge number of disparate and only nominally related subjects under a single regulatory regime, united by an abstract concept—is not a new issue. We can see evidence of it in the EU’s previous attempt to use the General Data Protection Regulation (GDPR) to oversee the vast domain of “privacy.”
“Privacy” is a capacious concept that covers, for instance, both the discomfort individual users may feel at being tracked by adtech software and potential violations of individuals’ expectations of privacy in location data, as when cell providers sold such data to bounty hunters. In truth, what we consider “privacy” comprises numerous distinct problem domains that are better defined and regulated according to the specific harms they pose, rather than under one all-encompassing regulatory umbrella.
Similarly, “AI” regulation faces the challenge of addressing various mostly unrelated concerns, from discriminatory bias in lending or hiring to intellectual-property usage to opaque algorithms employed for fraudulent or harmful purposes. Overaggregated regulation, like the AI Act, results in a framework that is both overinclusive (creating unnecessary burdens on individuals and businesses) and underinclusive (failing to address potential harms in its ostensible area of focus, due to its overly broad scope).
In other words, as Kai Zenner, an aide to Member of the European Parliament (MEP) Axel Voss, has noted, the AI Act is obsessed with risks to the detriment of innovation.
This overaggregation is likely to hinder the AI Act’s ability to address the distinct challenges and risks posed by the different technologies that constitute AI systems. As AI continues to evolve rapidly and to diversify in its applications, a one-size-fits-all approach may prove inadequate to the specific needs and concerns of different sectors and technologies. At the same time, the regulation’s overly broad scope threatens to chill innovation by causing firms to second-guess whether they should use algorithmic tools at all.
Disaggregating Regulation and Developing a Proper Focus
The AI landscape is complex and constantly changing, spanning applications across industries as varied as health care, finance, entertainment, and security. Any regulatory framework for AI must therefore be flexible and adaptive, capable of accommodating this wide array of technologies and use cases.
More importantly, regulatory frameworks in general should focus on addressing harms, rather than on the technology itself. If, for example, bias is suspected in hiring practices, whether facilitated by an AI algorithm or by a simple Excel spreadsheet, that issue should be dealt with as a matter of labor law. If labor law, or any other body of law, fails to account for the harmful use of algorithmic tools, it should be updated accordingly.
Similar nuance should be applied in areas like intellectual property, criminal law, housing, and fraud. We want the law to capture the illicit behavior of bad actors, not to adopt a universal position on a particular software tool that might be used differently in different contexts.
To reiterate: it is the harm that matters, and regulations should be designed to address known or highly likely harms in well-defined areas. An overarching AI regulation that attempts to address every conceivable harm at the level of the underlying technology is impractical and could prove a burdensome hindrance to innovation. Before it makes that mistake, the EU should reconsider the AI Act and adopt a more targeted approach to the specific harms that AI technologies may pose.