Truth on the Market

The AI Legislative Puzzle

With Donald Trump’s victory in this week’s presidential election, the federal government’s approach to the regulation of artificial intelligence (AI) stands at a crucial inflection point. While there may be pressure to rush through AI legislation during Congress’ upcoming lame-duck session, such haste could prove counterproductive for U.S. leadership in AI development. Instead, this transition period presents an opportunity to start with a clean slate and to engage in thoughtful consideration of how the government can establish appropriate guardrails, while promoting innovation and development.

Despite AI’s growing presence across industries—from health care and finance to manufacturing and transportation—federal regulatory action to date has been relatively limited. President Joe Biden did issue an executive order (EO) on AI, and various agencies have taken comments pursuant to that EO, but Congress has yet to pass any major AI-specific legislation. 

Then-candidate Trump told a crowd at one of his rallies last year: “When I’m reelected, I will cancel Biden’s artificial intelligence executive order and ban the use of AI to censor the speech of American citizens on day one.” His promise worked its way into the Republican National Committee’s platform this summer, which stated:

We will repeal Joe Biden’s dangerous Executive Order that hinders AI Innovation, and imposes Radical Leftwing ideas on the development of this technology. In its place, Republicans support AI Development rooted in Free Speech and Human Flourishing.

But it would be foolish to believe that Congress will remain hands-off indefinitely. Because it seems likely the federal government will do “something” about regulating AI, it’s critical that any new regulatory framework prioritize U.S. competitiveness in AI development, while ensuring appropriate safety measures. Analyzing a selection of current legislative proposals can help us identify key priorities and potential compromises, offering insights into where Congress may ultimately head on AI regulation.

The Status Quo

It should be noted that several of the recently proposed regulatory approaches to AI mark stark departures from how we’ve handled previous technological transformations. The rush to impose preemptive regulations and extensive oversight stands in sharp contrast, e.g., to how lawmakers and regulators treated the early days of e-commerce. Had we demanded pre-market safety testing, mandated transparency requirements, or required certification before allowing online transactions, we likely would have stifled one of the most transformative innovations in modern commerce.

Instead, e-commerce was allowed to develop organically, with regulations responding to actual harms, rather than speculative risks. This enabled rapid innovation, while still protecting consumers through existing fraud and consumer-protection frameworks. The historical success of this approach raises serious questions about whether applying the precautionary principle to AI development is wise or necessary.

While AI development has surged ahead in recent years, the regulatory and legislative response has been relatively limited. The most significant federal action to date has come from the executive branch, with President Biden’s EO on AI and various agency-level initiatives aimed at assessing the risks and opportunities AI presents.

The actual federal regulatory response to AI has, to date, consisted primarily of information gathering, developing voluntary guidelines, and soliciting feedback from stakeholders (although the Biden EO did also make dubious claims of vast executive powers under the Defense Production Act). 

Among its provisions, the EO directed the National Institute of Standards and Technology (NIST) to issue AI safety standards and guidelines. Agencies like the Federal Trade Commission (FTC) have also played a role, seeking public comment on how existing consumer-protection laws might apply to AI, and warning companies about the risks of biased or deceptive AI models. On top of these federal efforts, there has also been a growing patchwork of state laws (see here and here) that potentially create massive compliance burdens, with only speculative benefits in return.

This creates a Scylla and Charybdis problem. On the one hand, we don’t want a patchwork of conflicting federal and state regulations and laws. And on the other, we don’t want a unified but half-baked federal “solution” that does more harm than good. The question, then, is what is Congress considering, and what should it do?

The Ghost of AI Regulation Future

AI remains a nascent industry, with many of its most significant effects and potential risks still unknown. With that said, it may be instructive to draw from the current crop of federal proposals to piece together a roadmap for what future AI regulation might look like. 

Several bills have been introduced, each reflecting differing priorities and approaches to how AI should be governed. These bills sketch some key themes and potential compromises that lawmakers may settle on as AI regulatory proposals move forward.

Safety and Accountability

One of the most prominent themes across the proposed bills is their emphasis on AI safety and accountability. Bills like S. 4769, the Validation and Evaluation for Trustworthy AI (VET AI) Act, and S. 3312, the AI Research, Innovation, and Accountability Act, prioritize rigorous risk-management and auditing frameworks to ensure that AI systems do not harm the public or violate individual rights. Both measures propose that “high impact” AI systems—particularly those used in sensitive sectors like health care and law enforcement—undergo continuous testing and certification to detect biases, security flaws, and other potential dangers. The VET AI Act focuses on establishing so-called “red-teaming” procedures to identify potential vulnerabilities and flaws, along with other risk-mitigation measures, and would require that AI systems be evaluated by independent auditors before deployment.

While many of the bills propose mandatory risk-management and certification frameworks, S. 4178, the Future of Artificial Intelligence Innovation Act, would introduce a theoretically more flexible approach that would include codifying recommendations of NIST’s Artificial Intelligence Safety Institute. The institute would be directed to create best practices and technical standards for AI systems, offering an ostensibly voluntary path to compliance.

But the bill’s elevation of the AI Safety Institute, which was created by the Biden EO, highlights what might be a concerning case of institutional mission creep. Initially created as a standards-setting body within NIST—an agency traditionally focused on technical coordination and stakeholder engagement—the institute could be poised to become a de-facto regulatory authority. This could create an avenue for AI safety alarmists to shape restrictive policies through “voluntary” standards and guidelines that effectively become mandatory through market pressure, government contracting, and judicial deference. This backdoor regulation through technical standards threatens to bypass proper legislative oversight, while concentrating power in an unaccountable technical body.

Transparency and Data

Transparency is another significant priority common to AI legislation. For example, S. 2714, the CREATE AI Act, proposes creating the National Artificial Intelligence Research Resource (NAIRR), a centralized platform where researchers and developers could access datasets and AI systems for testing and development. By establishing common standards for data transparency and AI-system documentation, the measure seeks to make AI development more open and accountable.

S. 3312, the AI Research, Innovation, and Accountability Act, likewise emphasizes transparency by requiring AI systems to disclose key information about their datasets, model architectures, and performance metrics. This focus on transparency is intended to build public trust in AI by making it easier to understand how these systems make decisions, particularly in high-risk environments like finance or health care.

Defining ‘High Impact’

As defined by the VET AI Act, a “high-impact” AI system is any AI technology with the potential to significantly impact human rights, public safety, or critical infrastructure. Such systems are typically deployed in sensitive sectors—such as health care, finance, law enforcement, and national security—where the consequences of malfunction, bias, or misuse could cause substantial harm. Under the bill’s terms, high-impact systems would be subject to stringent regulatory oversight, including mandatory risk assessments, continuous auditing, independent evaluations, and certification requirements to ensure they operate safely and transparently.

In many ways, the bill’s definition aligns with the approach the EU took to “high risk” and “systemic risk” systems in its AI Act. Unsurprisingly, the VET AI Act suffers from similar defects. The definition is capacious, likely extending to any context in which a general-purpose AI tool—including large language models (LLMs) like ChatGPT or Claude—is used by individuals in positions of power or influence.

While these tools may not be inherently “high impact,” the potential for misuse in decision-making processes (especially in sensitive areas like health care or law enforcement) means they could easily be swept into the “high impact” category. The broad nature of this statutory language suggests that even general-purpose AI systems could face stringent oversight due to how they are used, not just their core functionality.

The Balancing Act: Innovation vs. Regulation

Looking at the web of obligations that the major pieces of proposed legislation would impose on AI developers, the potential for overregulation remains a significant concern. If lawmakers fail to properly assess the tradeoffs involved, they risk creating a regulatory landscape that stifles the very innovation they seek to support—particularly for smaller developers.

Open-source AI development, in particular, would face significant challenges under these frameworks. Many open-source developers operate with minimal funding, relying on collaboration and shared resources to innovate. Burdensome certification requirements or the need for costly independent audits could deter open-source contributions, ultimately limiting the diversity and dynamism of the AI ecosystem.

The introduction of voluntary standards, like those proposed in the Future of Artificial Intelligence Innovation Act, could also create a honeypot for special interests. Large, politically connected firms might influence the standards-setting process in ways that favor their own interests, thereby transforming voluntary guidelines into de-facto requirements that stifle competition. Such de-facto rules—particularly if they come from an ostensibly “expert” independent body like NIST—could also prove very influential before the courts during litigation. Open-source projects, often operating outside of these corporate interests, would likely be left out of the process or overwhelmed by compliance expectations.

While the CREATE AI Act has the commendable stated aim of helping smaller developers by providing access to centralized datasets through the newly established NAIRR, unintended consequences may similarly lurk in the details. On the surface, it may seem like a lifeline to smaller companies, especially those that might otherwise struggle to gather the necessary data to train competitive AI models. Centralizing such critical resources could, however, introduce several unforeseen challenges.

First, if access to NAIRR becomes restricted or shaped by special interests, smaller developers and open-source projects may be at a disadvantage, undermining the very goal of a more level playing field. There’s also the potential problem of bias and standardization. While centralized datasets aim to provide consistent and reliable training data, they may unintentionally reinforce existing biases in AI models if the data is not sufficiently diverse. If NAIRR prioritizes certain types of data over others, this could lead to models that fail to perform adequately across different demographics or use cases, perpetuating bias rather than mitigating it.

Moreover, dependency on centralized datasets could create a lock-in effect, where developers become overly reliant on the provided data, stifling experimentation and innovation. Instead of fostering a diverse ecosystem of AI research and development, it could result in homogenization, where every AI model is trained on the same narrow datasets, limiting breakthroughs that might arise from alternative approaches.

Finally, there’s a risk of regulatory capture, where larger firms could shape the curation and access rules of NAIRR to their advantage, making it harder for smaller developers to compete. These companies may have the resources to influence decision making, thus ensuring the platform operates in ways that serve their own interests, rather than fostering true open access for all.

Toward Scalable Regulation Based on Marginal Risk

While the proposals outlined in these bills show congressional desire to establish a regulatory framework for AI, it is critical that policymakers proceed cautiously.

One promising approach is the marginal risk framework that has been advanced by the National Telecommunications and Information Administration (NTIA). This framework focuses on evaluating AI systems based on the additional risks they introduce, rather than applying blanket regulations across the board. It offers a measured, scalable way to manage the complexities of AI technologies—particularly those like open-source foundation models—without stifling innovation. This approach could serve as a middle ground between overly broad risk-based frameworks and my preferred harm-based approach that focuses on mitigating real, demonstrated harms, rather than trying to preemptively manage every possible risk.

Rather than impose ex-ante regulations that assume certain technologies are inherently high risk, a marginal-risk framework would allow regulators to focus on the incremental risks and benefits of particular AI systems relative to similar technologies. It would encourage flexibility and ensure that regulation is driven by the specific context and application of AI, rather than speculative risks. This is crucial for allowing open-source development to continue thriving, as blanket restrictions or broad risk classifications could disproportionately harm smaller developers who rely on open-access resources to innovate.
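
To make the distinction concrete, the sketch below (in Python) is a purely illustrative way to think about a marginal-risk analysis: oversight scales with the additional risk an AI system introduces relative to the incumbent, non-AI process, rather than with the system’s absolute risk or its sector label. The risk scores, thresholds, and oversight tiers are hypothetical placeholders of my own, not anything drawn from the NTIA framework or the bills discussed above.

```python
# Illustrative sketch only: compare the estimated risk of a process that uses an
# AI system against the risk of the existing non-AI alternative, and scale
# oversight to the *difference*. All numbers, thresholds, and tier labels are
# hypothetical, not taken from NTIA or any pending legislation.

from dataclasses import dataclass


@dataclass
class Deployment:
    name: str
    risk_with_ai: float       # estimated likelihood/severity of harm with the AI system (0-1)
    risk_of_baseline: float   # same estimate for the incumbent, non-AI process (0-1)


def marginal_risk(d: Deployment) -> float:
    """Additional risk introduced by the AI system relative to the status quo."""
    return d.risk_with_ai - d.risk_of_baseline


def oversight_tier(d: Deployment, threshold: float = 0.2) -> str:
    """Scale regulatory attention to the incremental risk, not the sector label."""
    delta = marginal_risk(d)
    if delta <= 0:
        return "no new obligations (AI is no riskier than the baseline)"
    if delta < threshold:
        return "light-touch: documentation and post-deployment monitoring"
    return "heightened review proportional to the demonstrated incremental risk"


if __name__ == "__main__":
    examples = [
        Deployment("LLM drafting internal memos", risk_with_ai=0.10, risk_of_baseline=0.10),
        Deployment("AI triage tool replacing manual review", risk_with_ai=0.45, risk_of_baseline=0.30),
    ]
    for d in examples:
        print(f"{d.name}: marginal risk {marginal_risk(d):+.2f} -> {oversight_tier(d)}")
```

The point of the toy comparison is simply that a chatbot used to draft memos adds little risk over the status quo, while a tool that replaces human review in a sensitive workflow adds more, and the level of scrutiny follows that difference rather than a blanket sector classification.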

In contrast, many of the current legislative proposals risk entrenching a one-size-fits-all regulatory regime that would burden smaller developers and open-source projects with compliance requirements better suited to large, truly high-risk systems. By adopting a marginal-risk approach, Congress could help to ensure that regulations are proportional to the actual risks involved, fostering a dynamic and innovation-friendly environment for AI development. This method would also mitigate the risk of centralizing power among a few politically connected firms by creating more tailored, flexible standards that can evolve alongside the technology.

Finally, a glaring omission in the current legislative proposals is the absence of federal-preemption provisions. None of the bills noted above addresses the growing patchwork of state AI regulations; each would instead merely add new federal requirements atop existing state rules. This threatens to create an unmanageable compliance burden that would jeopardize both innovation and effective deployment.

Without preemption, we risk creating a regulatory maze that favors large, well-resourced companies, while making compliance nearly impossible for smaller innovators and open-source projects. This oversight could effectively balkanize AI development in the United States, undermining our competitive position in these critical technologies.

Conclusion

As Congress grapples with how to regulate AI, the stakes are high. Missteps in regulatory design could stifle innovation, burden smaller developers, and centralize power among a few well-connected firms. While the bills currently on the table reflect a desire to address safety, transparency, and accountability, their one-size-fits-all approach risks imposing significant compliance burdens on open-source projects and smaller companies. 

A more balanced framework, such as the marginal-risk approach proposed by NTIA, offers a sensible path forward. Such an approach would emphasize context-specific regulations that target real, demonstrated harms without prematurely stifling innovation. If Congress can adopt that sort of measured, flexible model, it may find a way to ensure AI safety and accountability while still fostering a vibrant, competitive ecosystem for AI development.