
The Biden Executive Order on AI: A Recipe for Anticompetitive Overregulation

The Biden administration’s Oct. 30 “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” proposes to “govern… the development and use of AI safely and responsibly” by “advancing a coordinated, Federal Government-wide approach to doing so.” (Emphasis added.)

This “all-of-government approach,” which echoes that of the 2021 “Executive Order on Competition” (see here and here), establishes a blueprint for heightened regulation to deal with theorized problems stemming from the growing use of AI by economic actors. As was the case with the competition order, the AI order threatens to impose excessive regulatory costs that would harm the American economy and undermine competitive forces. As such, the order’s implementation warrants close scrutiny.

Introduction: Is AI Regulation Justified?

Before turning to the order’s specifics, let’s review when it is appropriate to regulate. The mere existence of “market failure” (a term often misleadingly applied to vibrant markets that do not yield unachievable “socially optimal” results) does not, in and of itself, justify government regulation (see here, here, and here).

“Government failure” analysis (see the work of Nobel laureate James Buchanan, here) suggests that government interventions to “correct” markets may well yield outcomes inferior, in welfare terms, to those of nonintervention. As Art Carden and Steven Horwitz point out:

[N]on-economist critics of the market are frequently unaware of the comparative institutional analysis that public choice theory has made a necessary part of thinking about the role of government in the economy. Pointing out imperfections in the market does not ipso facto justify government intervention, and the only certain way that market “failures” are “failures” is by comparison to an unreachable theoretical ideal. Market imperfections are not magic wands that make market solutions and government imperfections disappear. Real understanding of comparative political economy begins rather than ends with the recognition that markets are not always perfect.

It follows that the executive order’s provisions calling for regulation on the basis of supposed market imperfections are justifiable only if it has been determined that the costs of such government intervention are less than the costs of the problems it targets. Plainly, no such determination has been made. Even worse, it is far from clear that the use of AI has undermined market forces at all; such harms remain purely hypothetical.

The AI order defines AI as:

[A] machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.  Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action.

Thus, when employed by firms, AI is a mere input to business decision-making. And as a tool, AI clearly may enhance economic welfare by creating value. As Goldman Sachs points out:

[AI] is poised to unlock new business models, transform industries, reshape people’s jobs, and boost economic productivity.
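To see concretely what it means for AI to serve as a “mere input” to business decision-making, consider a minimal sketch. It is purely illustrative (nothing like it appears in the order), and every name and number in it is hypothetical: a toy “machine-based system” in the order’s sense produces a model inference, which a human-defined business rule then weighs alongside ordinary commercial constraints.

```python
# Purely illustrative sketch; all names and numbers are hypothetical.
# A toy "machine-based system" whose model inference is one input -- among
# several -- to a business decision governed by human-defined objectives.

from dataclasses import dataclass


@dataclass
class DemandModel:
    """Stand-in for a trained model that has "abstracted perceptions"
    of a market into parameters."""
    base_demand: float
    price_sensitivity: float

    def predict_units_sold(self, price: float) -> float:
        # "Model inference" in the order's terms: a prediction about
        # the (virtual) market environment.
        return max(0.0, self.base_demand - self.price_sensitivity * price)


def choose_price(model: DemandModel, unit_cost: float,
                 candidates: list[float]) -> float:
    """The business decision itself: the AI output informs, humans decide.

    The objective (maximize margin) and the constraint (never price below
    cost) are human-defined; the model supplies only a demand estimate.
    """
    feasible = [p for p in candidates if p > unit_cost]
    return max(feasible,
               key=lambda p: (p - unit_cost) * model.predict_units_sold(p))


if __name__ == "__main__":
    model = DemandModel(base_demand=1000.0, price_sensitivity=40.0)
    price = choose_price(model, unit_cost=5.0,
                         candidates=[6.0, 8.0, 10.0, 12.0])
    print(f"Chosen price: ${price:.2f}")
```

The sketch’s point is simply that the model’s prediction is an input; the objectives and constraints that drive the decision remain human-defined, just as the order’s own definition of AI contemplates.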

So, is the Biden administration wise to charge ahead at this time with regulatory proposals to rein in AI, without careful consideration of regulatory costs and benefits? Competitive Enterprise Institute Senior Fellow James Broughel does not think so:

The Biden administration’s new artificial intelligence executive order is the kind of Ready! Fire! Aim! regulatory approach that has failed in the past, like when EPA and Congress rushed out ethanol fuel regulations before understanding the core issues. Those ethanol mandates unintentionally drove up food prices globally while failing to reduce greenhouse gases.

Now, rather than consider the novel problems AI presents—and whether the government or the private sector would be better at solving any particular problem that arises—the administration is pre-committing to feel-good regulatory ‘solutions’ that have no basis in science or evidence. This scattershot regulatory approach is unlikely to address the specific, legitimate concerns people have about the risks and benefits of artificial intelligence.

A preliminary examination of the AI order’s key regulatory provisions supports Broughel’s concerns.

The AI Order’s Regulatory Provisions

Introductory Overview (Sections 1-3)

Section 1 (“Purpose”) announces that the AI order is intended to advance a coordinated, federal-government-wide approach to “governing the development and use of AI safely and responsibly.”

Section 2 (“Policies and Principles”) presents general policies and principles that the order will promote. In particular, Section 2(e) reveals the administration’s sweeping regulatory plans. It states that the federal government will:

[E]nact appropriate safeguards against fraud, unintended bias, discrimination, infringements on privacy, and other harms from AI.  Such protections are especially important in critical fields like healthcare, financial services, education, housing, law, and transportation, where mistakes by or misuse of AI could harm patients, cost consumers or small businesses, or jeopardize safety or rights.  At the same time, my Administration will promote responsible uses of AI that protect consumers, raise the quality of goods and services, lower their prices, or expand selection and availability.

Section 3 (“Definitions”) sets forth the definitions that apply throughout the order.

Subsequent sections fill in the details.

Section 4 (‘Ensuring the Safety and Security of AI Technology’)

Section 4 deals with ensuring the safety and security of AI technology. The many provisions in this highly technical section provide guidance bearing on national defense and homeland security (including critical infrastructure and cybersecurity). They do not warrant further discussion here.

Section 5 (‘Promoting Innovation and Competition’)

Section 5 addresses promoting innovation and competition.

Section 5.1 concerns immigration policy to attract foreign AI experts to the United States. Section 5.2 focuses on the National Science Foundation’s (NSF) AI-related initiatives, the issuance of patent and copyright-law guidance, and various government grants. These subsections do not appear to raise particular concerns.

Section 5.3(a) (“promoting competition”), however, is another matter. All federal agencies developing AI policies and regulations are charged with “addressing risks from concentrated control of key inputs, taking steps to stop unlawful collusion and prevent dominant firms from disadvantaging competitors, and working to provide new opportunities for small businesses and entrepreneurs.”

The implicit suggestion is that large firms could be discouraged (if not barred) from obtaining “key AI inputs” that would enhance their efficiency, and from entering into efficient contracts that might somehow harm their competitors.

U.S. antitrust law is concerned with preventing harm to competition, not to competitors, and is not averse to efficiency-seeking acquisition of inputs by large firms. In other words, small businesses and entrepreneurs should not be favored artificially by disadvantaging large, efficient firms. Moreover, collusion is already policed by the antitrust agencies; encouraging other agencies to enact new anti-collusion programs could, if not handled carefully, have negative unintended consequences. In short, these Section 5.3(a) clauses may undermine innovation-seeking competition. Antitrust law should be left to apply as is.

Section 5.3(a) contains another ill-advised clause, which encourages the FTC to exercise its “existing authorities, including its rulemaking authority, . . . to ensure fair competition in the AI marketplace and to ensure that consumers and workers are protected from harms that may be enabled by the use of AI.” This provision is clearly harmful and should not have been included in the order.

As I have previously written, it is almost certain that the FTC does not have competition-rulemaking authority, and thus the expenditure of scarce FTC resources on an AI-related competition rulemaking would be pure waste.

Furthermore, the FTC has no specific statutory authority to “protect workers” (see here), and thus is not empowered to address “worker harm” in a rulemaking or enforcement action under Section 5 of the FTC Act. (I presume that the AI order is not addressing the extremely rare case of a monopsonistic merger, but rather the FTC’s “unfair methods of competition” authority under Section 5 of the FTC Act.)

Finally, the Section 5.3(a) reference to the FTC’s exercising its “existing authorities . . . to ensure fair competition in the AI marketplace” also necessarily includes “unfair methods of competition” (UMC) complaints under Section 5 of the FTC Act. Under the FTC’s November 2022 UMC policy statement, the FTC claims authorization to attack almost any business conduct it finds distasteful (see here). Such an unprincipled position would fail in court and would offend the rule of law, but there is a real risk that this Section 5.3(a) language might spur the FTC to challenge innovative and wealth-creating uses of AI by large firms, merely because those uses somehow seemed unfair to smaller businesses, consumers, or workers. This risk could discourage some firms from employing AI in a creative, economically beneficial, and wealth-enhancing fashion.

Section 5.3(b) deals with the U.S. Commerce Department’s (DOC) and Small Business Administration’s responsibilities in promoting the U.S. semiconductor industry to benefit U.S. competitors, consistent with the CHIPS Act of 2022. The subsection notes the importance of semiconductors in powering AI technologies and in facilitating AI competition. (For a discussion of the CHIPS Act’s industrial-policy implications from a free-market perspective, see here and here.)

Section 6 (‘Supporting Workers’)

Section 6 directs the U.S. Labor Department (DOL) to engage in several actions to assist workers who could theoretically be harmed by the introduction of AI. These include consultations with labor unions and workers in developing and publishing “best practices for employers” to mitigate AI-related harm. The best practices would address job displacement and career opportunities; labor standards and job quality (including health, safety, and compensation in the workplace); and implications for workers of employers’ AI-related data collection. The NSF would promote AI-related education and workforce development “to foster a diverse AI-ready workforce.”

The various Section 6 provisions, while cabined by existing law, could lead to new regulatory strictures that constrain the adoption and implementation of AI systems.

Section 7 (‘Advancing Equity and Civil Rights’)

Section 7 directs the assistant U.S. attorney general for civil rights—in conjunction with federal agency civil-rights offices—to address discrimination in the use of AI systems. It also requires a comprehensive analysis of the use of AI in the criminal-justice system to protect individuals’ privacy, civil rights, and civil liberties.

Section 7 also mandates the development of guidance regarding “the responsible application of AI” for state, local, tribal, and territorial law-enforcement authorities. Federal agencies are also ordered to use their civil-rights offices to prevent unlawful discrimination arising from use of AI in federal government programs and benefits administration. The U.S. Department of Health and Human Services (HHS) and the U.S. Agriculture Department (USDA) are to promote “equitable and just outcomes” (among other things) in states’ and localities’ administration of public benefits with the assistance of AI.

Section 7.3 (“Strengthening AI and Civil Rights in the Broader Economy”) requires the DOL to publish guidance for federal contractors on hiring that involves AI and other technology-based hiring systems, with an emphasis on preventing “bias or disparities affecting protected groups.” Section 7.3 also directs the U.S. Department of Housing and Urban Development (HUD) and the Consumer Financial Protection Bureau “to combat unlawful discrimination enabled by automated or algorithmic tools used to make decisions about access to housing, . . . credit, and other real estate-related transactions.”

Overall, the Section 7 references to “equity” and “discrimination” encourage the adoption of regulations that would penalize uses of AI deemed to yield “unfair” or “unjust” outcomes for groups favored by the administration. This could artificially skew government contracts, public-benefit disbursements, hiring, and criminal-enforcement decisions. Such skewing could interfere with efficient market outcomes and potentially deny true equality of opportunity and treatment in the affected areas.

Section 8 (‘Protecting Consumers, Patients, Passengers, and Students’)

Section 8 calls for independent regulatory agencies and major cabinet departments to develop plans and regulatory actions designed to protect American consumers from AI-related fraud, discrimination, threats to privacy, threats to financial stability, and other risks that may arise from the use of AI. The mandate is extremely broad and contemplates significant rulemaking and guidance activity throughout the federal government.

Section 9 (‘Protecting Privacy’)

Section 9 directs the U.S. Office of Management and Budget (OMB) to provide guidance for federal agencies on ways to mitigate privacy and confidentiality risks arising from the use of commercially available information. The DOC and the National Institute of Standards and Technology (NIST) are also ordered to create guidelines for agencies to evaluate the efficacy of differential-privacy-guarantee protections, including for AI.
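For readers unfamiliar with the term, a “differential-privacy guarantee” bounds how much any single individual’s record can change a published statistic. The sketch below is purely illustrative (it is not drawn from the order or from any NIST guidance, and its data are hypothetical); it shows the classic Laplace mechanism, which provides such a guarantee for a simple counting query.

```python
# Purely illustrative sketch (not from the order or NIST guidance): the
# classic Laplace mechanism, which yields an epsilon-differential-privacy
# guarantee for a counting query. All data are hypothetical.

import math
import random


def laplace_noise(scale: float) -> float:
    """Sample from a Laplace(0, scale) distribution via the inverse CDF."""
    u = random.random() - 0.5  # uniform on [-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))


def private_count(records: list[bool], epsilon: float) -> float:
    """Release a count satisfying epsilon-differential privacy.

    Adding or removing one person's record changes a count by at most 1
    (sensitivity = 1), so Laplace noise with scale sensitivity/epsilon
    bounds how much any single individual can shift the published figure.
    """
    sensitivity = 1.0
    return sum(records) + laplace_noise(sensitivity / epsilon)


if __name__ == "__main__":
    # Hypothetical benefits data: is each record flagged as eligible?
    data = [True] * 130 + [False] * 870
    print(f"True count: {sum(data)}")
    print(f"Private count (epsilon = 0.5): {private_count(data, 0.5):.1f}")
```

Smaller values of epsilon mean stronger privacy but noisier published figures; guidelines of the sort Section 9 contemplates would presumably address such accuracy-privacy trade-offs.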

Section 10 (‘Advancing Federal Government Use of AI’)

Section 10 directs the establishment of government-wide procedures and policies (under the aegis of certain components of the Executive Office of the President) to promote the use of AI throughout the federal government. Agencies are required to appoint “chief artificial intelligence officers” to oversee implementation of AI policies and procedures.

Section 11 (‘Strengthening American Leadership Abroad’)

Section 11 assigns tasks to specific federal government entities to promote the foreign acceptance and adoption of AI principles and standards developed under the order. It also directs the U.S. State Department and the Department of Homeland Security (DHS) to enhance international cooperation to protect critical infrastructure and prevent malicious uses of AI.

Section 12 (‘Implementation’)

Section 12 establishes the White House AI Council and designates the agencies charged with implementing the order.

Section 13 (‘General Provisions’)

Section 13 includes standard boilerplate language regarding the order’s consistency with applicable law.

Conclusion

The AI order is complex and undoubtedly will receive substantial public scrutiny.

Some portions—such as the provisions dealing with national defense and homeland security, and federal government use of AI—pertain to core roles of the federal government, and may be well-grounded. (In any event, they focus on specialized topics on which I am not qualified to comment.)

Other highly regulatory provisions of the order, however, are ill-conceived and problematic. One particularly troublesome section promotes the likely misuse of antitrust law, to the detriment of business innovation and efficiency.

Certain problematic sections call for application of AI in an “equitable” fashion in the disbursement of federal benefits and in the administration of federal labor, civil rights, and criminal laws. In addition, agencies are tasked with regulating firms’ uses of AI based on potential effects on consumers and workers, without a showing of likely harm. These various directives could impose enormous new costs on the private sector and stifle businesses’ expanded beneficial incorporation of AI.

Given AI’s potential, these uncalled-for regulations, if implemented, could seriously undermine the substantial, dynamic economic growth that AI promises, including the creation of innovative products and processes and the opening of new markets. American consumers, workers, producers, and the overall economy would suffer. In a world where economic growth is increasingly driven by new technologies, this would handicap the American economy on the international stage.

The implications are straightforward. The administration should withdraw the harmful and unnecessary regulatory portions of the order (including the Section 5.3 provisions dealing with antitrust) and retain only those provisions that address core federal-government concerns.
