Artificial Intelligence Meets Organic Folly

In a May 3 op-ed in The New York Times, Federal Trade Commission (FTC) Chair Lina Khan declares that “We Must Regulate A.I. Here’s How.” I’m concerned after reading it that I missed both the regulatory issue and the “here’s how” part, although she does tell us that “enforcers and regulators must be vigilant.”

Indeed, enforcers should be vigilant in exercising their established authority, notwithstanding no small controversy about the scope of the FTC’s authority.

Most of the chair’s column reads like a parade of horribles. And there’s nothing wrong with identifying risks, even if not every worry represents a serious risk. As Descartes said—or, at least, sort of implied—feelings are never wrong, qua feelings. If one has a thought, it’s hard to deny that one is having it.

To be clear, I can think of non-fanciful instantiations of the floats in Khan’s parade. Artificial intelligence (AI) could be used to commit fraud, which is and ought to be unlawful. Enforcers should be on the lookout for new forms of fraud, as well as new instances of it. Antitrust violations, likewise, may occur in the tech sector, just as they’ve been found in the hospital sector, electrical-equipment manufacturing, and air travel.

Tech innovations entail costs as well as benefits, and we ought to be alert to both. But there’s a real point to parsing those harms from benefits—and the actual from the likely from the possible—if one seeks to identify and balance the tradeoffs entailed by conduct that may or may not cause harm on net.

Doing so can be complicated. AI is not just ChatGPT; it’s not just systems that employ foundational large language models; and it’s not just systems that employ one or another form of machine learning. It’s not all (or chiefly) about fraud. The regulatory problem is not just what to do about AI but what to do about…what?

That is, what forms, applications, or consequences do we mean to address, and how and why? If some AI application costs me my job, is that a violation of the FTC Act? Some other law? Abstracting from my own preferences and inflated sense of self-importance, is it a federal issue?

If one is to enforce the law or engage in regulation, there’s a real need to be specific about one’s subject matter, as well as what one plans to do about it, lest one throw out babies with bathwater. Which reminds me of parts of a famous (for certain people of a certain age) essay in 1970s computer science: Drew McDermott’s “Artificial Intelligence Meets Natural Stupidity,” which is partly about oversimplification in characterizing AI.

The cynic in me has three basic worries about Khan’s FTC, if not about AI generally:

  1. Vigilance is not so much a method as a state of mind (or part of a slogan, or a motto, sometimes put in Latin). It’s about being watchful.
  2. The commission’s current instantiation won’t stop at vigilance, and it won’t stick to established principles of antitrust and consumer-protection law, or to its established jurisdiction.
  3. Going beyond those bounds without being clear on what counts as an actionable harm under Section 5 of the FTC Act risks considerable damage to innovation, and to the consumer benefits such innovation produces.

Perhaps I’m not being all that cynical, given the commission’s expansive new statement of enforcement principles regarding unfair methods of competition (UMC), not to mention the raft of new FTC regulatory proposals. For example, Khan’s op-ed includes a link to the FTC’s proposed commercial surveillance and data security rulemaking, as Khan notes (without specifics) that “innovative services … came at a steep cost. What were initially conceived of as free services were monetized through extensive surveillance of people and businesses that used them.”

That reads like targeted advertising (as opposed to blanket advertising) engaged in cosplay as the Stasi:

I’ll never talk.
Oh, yes, you’ll talk. You’ll talk, or else we’ll charge you for some of your favorite media.
Ok, so maybe I’ll talk a little.

Here again, it’s not that one couldn’t object to certain acquisitions or applications of consumer data (on some or another definition of “consumer data”). It’s that the concerns purported to motivate regulation read like a laundry list of myriad potential harms with barely a nod to the possibility—much less the fact—of benefits. Surveillance, we’re told in the FTC’s notice of proposed rulemaking, involves:

…the collection, aggregation, retention, analysis, transfer, or monetization of consumer data and the direct derivatives of that information. These data include both information that consumers actively provide—say, when they affirmatively register for a service or make a purchase—as well as personal identifiers and other information that companies collect, for example, when a consumer casually browses the web or opens an app.

That seems to encompass, roughly, anything one might do with data somehow connected to a consumer. For example, there’s the storage of information I voluntarily provide when registering for an airline’s rewards program, because I want the rewards miles. And there’s the information my physician collects, stores, and analyzes in treating me and maintaining medical records, including—but not limited to—things I tell the doctor because I want informed medical treatment.

Anyone might be concerned that personal medical information could be misused. It turns out that there are laws against various forms of misuse, although those laws are imperfect. But are all such practices really “surveillance”? Don’t many have some utility? Incidentally, don’t many consumers (as studies indicate) prefer arrangements whereby they can obtain “content” without a monetary payment? Should all such practices be regulated by the FTC without a new congressional charge, under a general prohibition of either UMC or “unfair or deceptive acts or practices” (UDAP)? The commission is considering either or both as grounds.

By statute, the FTC’s “unfairness” authority extends only to conduct that “causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves.” And it does not cover conduct where such injury is “outweighed by countervailing benefits to consumers or to competition.” So which practices are those?

Chair Khan tells us that we have “an online economy where access to increasingly essential services is conditioned on widespread hoarding and sale of our personal data.” “Essential” seems important, if unspecific. And “hoarding” seems bad, if left undistinguished from legitimate collection and storage. It sounds as if Google’s servers are like a giant ball of aluminum foil distributed across many cluttered, if virtual, apartments.

Khan breezily assures readers that the:

…FTC is well equipped with legal jurisdiction to handle the issues brought to the fore by the rapidly evolving A.I. sector, including collusion, monopolization, mergers, price discrimination and unfair methods of competition.

But I wonder whether concerns about AI—both those well-founded and those fanciful—all fit under these rubrics. And there’s really no explanation for how the agency means to parse, say, unlawful mergers (under the Sherman and/or Clayton acts) from lawful ones, whether they are to do with AI or not.

We’re told that a “handful of powerful businesses control the necessary raw materials that start-ups and other companies rely on to develop and deploy A.I. tools.” Perhaps, but why link to a newspaper article about Google and Microsoft for “powerful businesses” without establishing any relevant violations of the law? And why link to an article about Google and Nvidia AI systems—which are not raw materials—in suggesting that some firms control “essential” raw materials (as inputs) to innovation, without any further explanation? Was there an antitrust violation?

Maybe we already regulate AI in various ways. And maybe we should consider some new ones. But I’m stuck at the headline of Khan’s piece: Must we regulate further? If so, how? And not incidentally, why, and at what cost?