Archives For Constitutional Law

The Senate Judiciary Committee’s Subcommittee on Privacy, Technology, and the Law will host a hearing this afternoon on Gonzalez v. Google, one of two terrorism-related cases currently before the U.S. Supreme Court that implicate Section 230 of the Communications Decency Act of 1996.

We’ve written before about how the Court might and should rule in Gonzalez (see here and here), but less attention has been devoted to the other Section 230 case on the docket: Twitter v. Taamneh. That’s unfortunate, as a thoughtful reading of the dispute at issue in Taamneh could highlight some of the law’s underlying principles. At first blush, alas, it does not appear that the Court is primed to apply that reading.

During the recent oral arguments, the Court considered whether Twitter (and other social-media companies) can be held liable under the Antiterrorism Act for providing a general communications platform that may be used by terrorists. The question under review by the Court is whether Twitter “‘knowingly’ provided substantial assistance [to terrorist groups] under [the statute] merely because it allegedly could have taken more ‘meaningful’ or ‘aggressive’ action to prevent such use.” The theory of the plaintiffs (respondents before the Court) is, essentially, that Twitter aided and abetted terrorism through its inaction.

The oral argument found the justices grappling with where to draw the line between aiding and abetting, on the one hand, and otherwise legal activity that happens to make it somewhat easier for bad actors to engage in illegal conduct, on the other. The nearly three-hour discussion between the justices and the attorneys yielded little in the way of a viable test. But a more concrete focus on the law & economics of collateral liability (which we also describe as “intermediary liability”) would have significantly aided the conversation.

Taamneh presents a complex question of intermediary liability generally that goes beyond the bounds of a (relatively) simpler Section 230 analysis. As we discussed in our amicus brief in Fleites v. MindGeek (and as briefly described in this blog post), intermediary liability generally cannot be predicated on the mere existence of harmful or illegal content on an online platform that could, conceivably, have been prevented by some action by the platform or other intermediary.

The specific statute may impose other limits (like the “knowing” requirement in the Antiterrorism Act), but intermediary liability makes sense only when a particular intermediary defendant is positioned to control (and thus remedy) the bad conduct in question, and when imposing liability would cause the intermediary to act in such a way that the benefits of its conduct in deterring harm outweigh the costs of impeding its normal functioning as an intermediary.

Had the Court adopted such an approach in its questioning, it could have better homed in on the proper dividing line between parties whose normal conduct might reasonably give rise to liability (without some heightened effort to mitigate harm) and those whose conduct should not entail this sort of elevated responsibility.

Here, the plaintiffs have framed their case in a way that would essentially collapse this analysis into a strict liability standard by simply asking “did something bad happen on a platform that could have been prevented?” As we discuss below, the plaintiffs’ theory goes too far and would overextend intermediary liability to the point that the social costs would outweigh the benefits of deterrence.

The Law & Economics of Intermediary Liability: Who’s Best Positioned to Monitor and Control?

In our amicus brief in Fleites v. MindGeek (as well as our law review article on Section 230 and intermediary liability), we argued that, in limited circumstances, the law should (and does) place responsibility on intermediaries to monitor and control conduct. It is not always sufficient to aim legal sanctions solely at the parties who commit harms directly—e.g., where harms are committed by many pseudonymous individuals dispersed across large online services. In such cases, social costs may be minimized when legal responsibility is placed upon the least-cost avoider: the party in the best position to limit harm, even if it is not the party directly committing the harm.

Thus, in some circumstances, intermediaries (like Twitter) may be the least-cost avoider, such as when information costs are sufficiently low that effective monitoring and control of end users is possible, and when pseudonymity makes remedies against end users ineffective.

But there are costs to imposing such liability—including, importantly, “collateral censorship” of user-generated content by online social-media platforms. This manifests in platforms acting more defensively—taking down more speech, and generally moving in a direction that would make the Internet less amenable to open, public discussion—in an effort to avoid liability. Indeed, a core reason that Section 230 exists in the first place is to reduce these costs. (Whether Section 230 gets the balance correct is another matter, which we take up at length in our law review article linked above).

From an economic perspective, liability should be imposed on the party or parties best positioned to deter the harms in question, so long as the social costs incurred by, and as a result of, enforcement do not exceed the social gains realized. In other words, there is a delicate balance that must be struck to determine when intermediary liability makes sense in a given case. On the one hand, we want illicit content to be deterred, and on the other, we want to preserve the open nature of the Internet. The costs generated from the over-deterrence of legal, beneficial speech are why intermediary liability for user-generated content can’t be applied on a strict-liability basis, and why some bad content will always exist in the system.
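This tradeoff can be stated more compactly. The following is a stylized formalization of the least-cost-avoider logic sketched above (our own illustrative gloss, not a test drawn from any statute or case):

```latex
% Stylized condition for intermediary liability (illustrative gloss, not a legal test).
% I   : the set of parties able to prevent the harm (direct actors and intermediaries)
% B_i : harm deterred if party i is made responsible for preventing it
% C_i : social cost of placing that burden on i, including enforcement costs
%       and the collateral censorship of legal content
\[
  i^{*} \;=\; \arg\min_{i \in I} C_i \qquad \text{(the least-cost avoider)}
\]
\[
  \text{Impose liability on } i^{*} \text{ only if } \quad B_{i^{*}} \;>\; C_{i^{*}}.
\]
```

On this view, an effective strict-liability standard fails because it ignores the cost side of the ledger entirely: it places the burden wherever harm appears, whatever the collateral costs.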

The Spectrum of Properly Construed Intermediary Liability: Lessons from Fleites v. MindGeek

Fleites v. MindGeek illustrates well that the proper application of liability to intermediaries exists on a spectrum. MindGeek—the owner/operator of the website Pornhub—was sued under the Racketeer Influenced and Corrupt Organizations Act (RICO) and the Victims of Trafficking and Violence Protection Act (TVPA) theories for promoting and profiting from nonconsensual pornography and human trafficking. But the plaintiffs also joined Visa as a defendant, claiming that Visa knowingly provided payment processing for some of Pornhub’s services, making it an aider/abettor.

The “best” defendants, obviously, would be the individuals actually producing the illicit content, but limiting enforcement to direct actors may be insufficient. The statute therefore contemplates bringing enforcement actions against certain intermediaries for aiding and abetting. But there are a host of intermediaries that could theoretically be brought into a liability scheme. First, obviously, is MindGeek, as the platform operator. The plaintiffs felt that Visa was also sufficiently connected to the harm by processing payments for MindGeek users and content posters, and that it should therefore bear liability, as well.

The problem, however, is that there is no limiting principle in the plaintiffs’ theory of the case against Visa. Theoretically, the group of intermediaries “facilitating” the illicit conduct is practically limitless. As we pointed out in our Fleites amicus:

In theory, any sufficiently large firm with a role in the commerce at issue could be deemed liable if all that is required is that its services “allow[]” the alleged principal actors to continue to do business. FedEx, for example, would be liable for continuing to deliver packages to MindGeek’s address. The local waste management company would be liable for continuing to service the building in which MindGeek’s offices are located. And every online search provider and Internet service provider would be liable for continuing to provide service to anyone searching for or viewing legal content on MindGeek’s sites.

Twitter’s attorney in Taamneh, Seth Waxman, made much the same point in responding to Justice Sonia Sotomayor:

…the rule that the 9th Circuit has posited and that the plaintiffs embrace… means that as a matter of course, every time somebody is injured by an act of international terrorism committed, planned, or supported by a foreign terrorist organization, each one of these platforms will be liable in treble damages and so will the telephone companies that provided telephone service, the bus company or the taxi company that allowed the terrorists to move about freely. [emphasis added]

In our Fleites amicus, we argued that a more practical approach is needed: one that tries to draw a sensible line on this liability spectrum. Most importantly, we argued that Visa was not in a position to monitor and control what happened on MindGeek’s platform, and thus was a poor candidate for extending intermediary liability. In that case, because of the complexities of the payment-processing network, Visa had no visibility into what specific content was being purchased, what content was being uploaded to Pornhub, or which individuals may have been uploading illicit content. Worse, the only evidence—if it can be called that—that Visa was aware that anything illicit was happening consisted of news reports in the mainstream media, which may or may not have been accurate, and on which Visa was unable to do any meaningful follow-up investigation.

Our Fleites brief didn’t explicitly consider MindGeek’s potential liability. But MindGeek obviously is in a much better position to monitor and control illicit content. With that said, merely having the ability to monitor and control is not sufficient. Given that content moderation is necessarily an imperfect activity, there will always be some bad content that slips through. Thus, the relevant question is, under the circumstances, did the intermediary act reasonably—e.g., did it comply with best practices—in attempting to identify, remove, and deter illicit content?

In Visa’s case, the answer is not difficult. Given that it had no way to know about or single out transactions as likely to be illegal, its only recourse to reduce harm (and its liability risk) would be to cut off all payment services for MindGeek. The constraints on perfectly legal conduct that this would entail certainly far outweigh the benefits of reducing illegal activity.

Moreover, such a theory could require Visa to stop processing payments for an enormous swath of legal activity outside of Pornhub. For example, purveyors of illegal content on Pornhub use ISP services to post their content. A theory of liability that held Visa responsible simply because it plays some small part in facilitating the illegal activity’s existence would presumably also require Visa to shut off payments to ISPs—certainly, that would also curtail the amount of illegal content.

With MindGeek, the answer is a bit more difficult. The anonymous or pseudonymous posting of pornographic content makes it extremely difficult to go after end users. But knowing that human trafficking and nonconsensual pornography are endemic problems, and knowing (as it arguably did) that such content was regularly posted on Pornhub, MindGeek could be deemed to have acted unreasonably for not having exercised very strict control over its users (at minimum, say, by verifying users’ real identities to facilitate law enforcement against bad actors and deter the posting of illegal content). Indeed, it is worth noting that MindGeek/Pornhub did implement exactly this control, among others, following the public attention arising from news reports of nonconsensual and trafficking-related content on the site.

But liability for MindGeek is only even plausible given that it might be able to act in such a way that imposes greater burdens on illegal content providers without deterring excessive amounts of legal content. If its only reasonable means of acting would be, say, to shut down Pornhub entirely, then just as with Visa, the cost of imposing liability in terms of this “collateral censorship” would surely outweigh the benefits.

Applying the Law & Economics of Collateral Liability to Twitter in Taamneh

Contrast the situation of MindGeek in Fleites with Twitter in Taamneh. Twitter may seem to be a good candidate for intermediary liability. It also has the ability to monitor and control what is posted on its platform. And it faces a similar problem of pseudonymous posting that may make it difficult to go after end users for terrorist activity. But this is not the end of the analysis.

Given that Twitter operates a platform that hosts the general—and overwhelmingly legal—discussions of hundreds of millions of users, posting billions of pieces of content, it would be reasonable to impose a heightened responsibility on Twitter only if it could exercise it without excessively deterring the copious legal content on its platform.

At the same time, Twitter does have active policies to police and remove terrorist content. The relevant question, then, is not whether it should do anything to police such content, but whether a failure to do some unspecified amount more was unreasonable, such that its conduct should constitute aiding and abetting terrorism.

Under the Antiterrorism Act, because the basis of liability is “knowingly providing substantial assistance” to a person who committed an act of international terrorism, “unreasonableness” here would have to mean that the failure to do more transforms its conduct from insubstantial to substantial assistance and/or that the failure to do more constitutes a sort of willful blindness. 

The problem is that doing more—policing its site better and removing more illegal content—would do nothing to alter the extent of assistance it provides to the illegal content that remains. And by the same token, almost by definition, Twitter does not “know” about illegal content it fails to remove. In theory, there is always “more” it could do. But given the inherent imperfections of content moderation at scale, this will always be true, right up to the point that the platform is effectively forced to shut down its service entirely.  

This doesn’t mean that reasonable content moderation couldn’t entail something more than Twitter was doing. But it does mean that the mere existence of illegal content that, in theory, Twitter could have stopped can’t be the basis of liability. And yet the Taamneh plaintiffs make no allegation that acts of terrorism were actually planned on Twitter’s platform, and offer no reasonable basis on which Twitter could have practical knowledge of such activity or practical opportunity to control it.

Nor do the plaintiffs point to any examples in which Twitter had actual knowledge of such content or users and failed to remove them. Most importantly, the plaintiffs do not demonstrate that any particular content-moderation activities (short of shutting down Twitter entirely) would have resulted in Twitter’s knowledge of or ability to control terrorist activity. Had they done so, it could conceivably constitute a basis for liability. But if the only practical action Twitter can take to avoid liability and prevent harm entails shutting down massive amounts of legal speech, the failure to do so cannot be deemed unreasonable or provide the basis for liability.

And, again, such a theory of liability would contain no viable limiting principle if it does not consider the practical ability to control harmful conduct without imposing excessively costly collateral damage. Indeed, what in principle would separate a search engine from Twitter, if the search engine linked to an alleged terrorist’s account? Both entities would have access to news reports, and could thus be assumed to have a generalized knowledge that terrorist content might exist on Twitter. The implication of this case, if the plaintiffs’ theory is accepted, is that Google would be forced to delist Twitter whenever a news article appears alleging terrorist activity on the service. Obviously, that is untenable for the same reason it’s not tenable to impose an effective obligation on Twitter to remove all terrorist content: the costs of lost legal speech and activity.

Justice Ketanji Brown Jackson seemingly had the same thought when she pointedly asked whether the plaintiffs’ theory would mean that Linda Hamilton in the Halberstam v. Welch case could have been held liable for aiding and abetting, merely for taking care of Bernard Welch’s kids at home while Welch went out committing burglaries and the murder of Michael Halberstam (instead of the real reason she was held liable, which was for doing Welch’s bookkeeping and helping sell stolen items). As Jackson put it:

…[I]n the Welch case… her taking care of his children [was] assisting him so that he doesn’t have to be at home at night? He’s actually out committing robberies. She would be assisting his… illegal activities, but I understood that what made her liable in this situation is that the assistance that she was providing was… assistance that was directly aimed at the criminal activity. It was not sort of this indirect supporting him so that he can actually engage in the criminal activity.

In sum, the theory propounded by the plaintiffs (and accepted by the 9th U.S. Circuit Court of Appeals) is just too far afield for holding Twitter liable. As Twitter put it in its reply brief, the plaintiffs’ theory (and the 9th Circuit’s holding) is that:

…providers of generally available, generic services can be held responsible for terrorist attacks anywhere in the world that had no specific connection to their offerings, so long as a plaintiff alleges (a) general awareness that terrorist supporters were among the billions who used the services, (b) such use aided the organization’s broader enterprise, though not the specific attack that injured the plaintiffs, and (c) the defendant’s attempts to preclude that use could have been more effective.

Conclusion

If Section 230 immunity isn’t found to apply in Gonzalez v. Google, and the complaint in Taamneh is allowed to go forward, the most likely response of social-media companies will be to reduce the potential for liability by further restricting access to their platforms. This could mean review by some moderator or algorithm of messages or videos before they are posted to ensure that there is no terrorist content. Or it could mean review of users’ profiles before they are able to join the platform to try to ascertain their political leanings or associations with known terrorist groups. Such restrictions would entail copious false positives, along with considerable costs to users and to open Internet speech.

And in the end, some amount of terrorist content would still get through. If the plaintiffs’ theory leads to liability in Taamneh, it’s hard to see how the same principle wouldn’t entail liability even under these theoretical, heightened practices. Absent a focus on an intermediary defendant’s ability to control harmful content or conduct, without imposing excessive costs on legal content or conduct, the theory of liability has no viable limit.

In sum, to hold Twitter (or other social-media platforms) liable under the facts presented in Taamneh would stretch intermediary liability far beyond its sensible bounds. While Twitter may seem a good candidate for intermediary liability in theory, it should not be held liable for, in effect, simply providing its services.

Perhaps Section 230’s blanket immunity is excessive. Perhaps there is a proper standard that could impose liability on online intermediaries for user-generated content in limited circumstances properly tied to their ability to control harmful actors and the costs of doing so. But liability in the circumstances suggested by the Taamneh plaintiffs—effectively amounting to strict liability—would be an even bigger mistake in the opposite direction.

It seems that large language models (LLMs) are all the rage right now, from Bing’s announcement that it plans to integrate the ChatGPT technology into its search engine to Google’s announcement of its own LLM called “Bard” to Meta’s recent introduction of its Large Language Model Meta AI, or “LLaMA.” Each of these LLMs uses artificial intelligence (AI) to create text-based answers to questions.

But it certainly didn’t take long after these innovative new applications were introduced for reports to emerge of LLMs just plain getting facts wrong. Given this, it is worth asking: how will the law deal with AI-created misinformation?

Among the first questions courts will need to grapple with is whether Section 230 immunity applies to content produced by AI. Of course, the U.S. Supreme Court already has a major Section 230 case on its docket with Gonzalez v. Google. Indeed, during oral arguments for that case, Justice Neil Gorsuch appeared to suggest that AI-generated content would not receive Section 230 immunity. And existing case law would appear to support that conclusion, as LLM content is developed by the interactive computer service itself, not by its users.

Another question raised by the technology is what legal avenues would be available to those seeking to challenge the misinformation. Under the First Amendment, the government can only regulate false speech under very limited circumstances. One of those is defamation, which seems like the most logical cause of action to apply. But under defamation law, plaintiffs—especially public figures, who are the most likely litigants and who must prove “actual malice”—may have a difficult time proving the AI acted with the necessary state of mind to sustain a cause of action.

Section 230 Likely Does Not Apply to Information Developed by an LLM

Section 230(c)(1) states:

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

The law defines an interactive computer service as “any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server, including specifically a service or system that provides access to the Internet and such systems operated or services offered by libraries or educational institutions.”

The access software provider portion of that definition includes any tool that can “filter, screen, allow, or disallow content; pick, choose, analyze, or digest content; or transmit, receive, display, forward, cache, search, subset, organize, reorganize, or translate content.”

And finally, an information content provider is “any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service.”

Taken together, Section 230(c)(1) gives online platforms (“interactive computer services”) broad immunity for user-generated content (“information provided by another information content provider”). This even covers circumstances where the online platform (acting as an “access software provider”) engages in a great deal of curation of the user-generated content.

Section 230(c)(1) does not, however, protect information created by the interactive computer service itself.

There is case law to help determine whether content is created or developed by the interactive computer service. Online platforms applying “neutral tools” to help organize information have not lost immunity. As the 9th U.S. Circuit Court of Appeals put it in Fair Housing Council v. Roommates.com:

Providing neutral tools for navigating websites is fully protected by CDA immunity, absent substantial affirmative conduct on the part of the website creator promoting the use of such tools for unlawful purposes.

On the other hand, online platforms are liable for content they create or develop, which does not include “augmenting the content generally,” but does include “materially contributing to its alleged unlawfulness.” 

The question here is whether the text-based answers provided by LLM apps like Bing’s Sydney or Google’s Bard constitute content created or developed by those online platforms. One could argue that LLMs are neutral tools simply rearranging information from other sources for display. It seems clear, however, that the LLM is synthesizing information to create new content. The use of AI to answer a question, rather than a human agent of Google or Microsoft, doesn’t seem relevant to whether or not it was created or developed by those companies. (Though, as Matt Perault notes, how LLMs are integrated into a product matters. If an LLM just helps “determine which search results to prioritize or which text to highlight from underlying search results,” then it may receive Section 230 protection.)

The technology itself gives text-based answers based on inputs from the questioner. LLMs use AI-trained engines to guess the next word based on troves of data from the internet. While the information may come from third parties, the creation of the content itself is due to the LLM. ChatGPT itself said as much in response to my query.
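To illustrate the next-word mechanic just described, here is a toy sketch in Python. It is a drastic simplification (a bigram model over a fifteen-word corpus, used purely for illustration; real LLMs use neural networks trained on vast swaths of internet text), but the core loop of guessing the next word from patterns observed in training data is the same:

```python
import random
from collections import defaultdict

# Toy "training" corpus standing in for the troves of internet text a real LLM sees.
corpus = ("the court held that the statute applies "
          "and the court held that the claim fails").split()

# Count which words follow which word (a bigram model).
next_words = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    next_words[prev].append(nxt)

def generate(word, length=6):
    """Repeatedly guess the next word from frequencies observed in the data."""
    out = [word]
    for _ in range(length):
        options = next_words.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))  # sample in proportion to observed frequency
    return " ".join(out)

print(generate("the"))  # e.g., "the court held that the claim fails"
```

The output sentence is assembled by the model itself from statistical patterns; no third party supplied it. That is the sense in which an LLM “creates” the content it returns.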

Proving Defamation by AI

In the absence of Section 230 immunity, there is still the question of how one could hold Google’s Bard or Microsoft’s Sydney accountable for purveying misinformation. There are no laws against false speech in general, nor can there be, since the Supreme Court declared such speech was protected in United States v. Alvarez. There are, however, categories of false speech, like defamation and fraud, which have been found to lie outside of First Amendment protection.

Defamation is the most logical cause of action that could be brought over false information provided by an LLM app. But it is worth noting that these LLM apps are highly unlikely to know anything about people who have not received significant public recognition (believe me, I tried to get ChatGPT to tell me something about myself—alas, I’m not famous enough). On top of that, those most likely to suffer significant damages from having their reputations harmed by falsehoods spread online are those who are in the public eye. This means that, for the purposes of a defamation suit, it is public figures who are most likely to sue.

As an example, if ChatGPT answers the question of whether Johnny Depp is a wife-beater by saying that he is, contrary to one court’s finding (but consistent with another’s), Depp could sue the creators of the service for defamation. He would have to prove that a false statement was published to a third party and that it resulted in damages to him. For the sake of argument, let’s say he can do both. The case still isn’t proven because, as a public figure, he would also have to prove “actual malice.”

Under New York Times v. Sullivan and its progeny, a public figure must prove the defendant acted with “actual malice” when publicizing false information about the plaintiff. Actual malice is defined as “knowledge that [the statement] was false or with reckless disregard of whether it was false or not.”

The question arises whether actual malice can be attributed to an LLM. It seems unlikely that it could be said that the AI’s creators trained it in a way that they “knew” the answers provided would be false. But it may be a more interesting question whether the LLM is giving answers with “reckless disregard” of their truth or falsity. One could argue that these early versions of the technology are doing exactly that, but the underlying AI is likely to improve over time with feedback. The best time for a plaintiff to sue may be now, when the LLMs are still in their infancy and giving false answers more often.

It is possible that, given enough context in the query, LLM-empowered apps may be able to recognize private figures and get things wrong. For instance, when I asked ChatGPT to give a biography of myself, I got no results.

When I added my workplace, I did get a biography, but none of the relevant information was about me. It was instead about my boss, Geoffrey Manne, the president of the International Center for Law & Economics.

While none of this biography is true, it doesn’t harm my reputation, nor does it give rise to damages. But it is at least theoretically possible that an LLM could make a defamatory statement against a private person. In such a case, a lower burden of proof would apply to the plaintiff, that of negligence, i.e., that the defendant published a false statement of fact that a reasonable person would have known was false. This burden would be much easier to meet if the AI had not been sufficiently trained before being released upon the public.

Conclusion

While it is unlikely that a service like ChatGPT would receive Section 230 immunity, it also seems unlikely that a plaintiff would be able to sustain a defamation suit against it for false statements. The most likely type of plaintiff (public figures) would encounter difficulty proving the necessary element of “actual malice.” The best chance for a lawsuit to proceed may be against the early versions of this service—rolled out quickly and to much fanfare, while still in a beta stage in terms of accuracy—as a colorable argument can be made that they are giving false answers in “reckless disregard” of their truthfulness.

The Federal Trade Commission (FTC) announced in a notice of proposed rulemaking (NPRM) last month that it intends to ban most noncompete agreements. Is that a good idea? As a matter of policy, the question is debatable. So far as the NPRM is concerned, however, that debate is largely hypothetical. It is unlikely that any rule the FTC issues will ever take effect. 

Several formidable legal obstacles stand in the way. The FTC seeks to rest its rule on the authority of Section 5 of the FTC Act, which bars “unfair methods of competition” in commerce. But Section 5 says nothing about rulemaking, as opposed to case-by-case prosecution.

There is a rulemaking provision in Section 6, but for reasons explained elsewhere, it only empowers the FTC to set out its own internal procedures. And if the FTC could craft binding substantive rules—such as a ban on noncompete agreements—that would violate the U.S. Constitution. It would transfer lawmaking power from Congress to an administrative agency, in violation of Article I.

What’s more, the U.S. Supreme Court recently confirmed the existence of a “major questions doctrine,” under which an agency attempting to “make major policy decisions itself” must “point to clear congressional authorization for the power it claims.” The FTC’s proposed rule would sweep aside tens of millions of noncompete clauses; it would very likely alter salaries to the tune of hundreds of billions of dollars a year; and it would preempt dozens of state laws. That’s some “major” policymaking. Nothing in the FTC Act “clear[ly]” authorizes the FTC to undertake it.

But suppose that none of these hurdles existed. Surely, then, the FTC would get somewhere—right? In seeking to convince a court to read the statute its way, after all, it could make a bid for Chevron deference. Named for Chevron v. NRDC (1984), that rule (of course) requires a court to defer to an agency’s reasonable construction of a law the agency administers. With the benefit of such judicial obeisance, the FTC would not have to show that noncompete clauses are unlawful under the best reading of Section 5. It could get away with showing merely that they’re unlawful under a plausible reading of Section 5.

But Chevron won’t do the trick.

The Chevron test can be broken down into three phases. A court begins by determining whether the test even applies (often called Chevron “step zero”). If it does, the court next decides whether the statute in question has a clear meaning (Chevron step one). And if it turns out that the statute is unclear—is ambiguous—the court proceeds to ask whether the agency’s interpretation of the statute is reasonable, and if it is, to yield to it (Chevron step two).

Each of these stages poses a problem for the FTC. Not long ago, the Supreme Court showed why this is so. True, Kisor v. Wilkie (2019) is not about Chevron deference. Not directly. But the decision upholds a cognate doctrine, Auer deference (named for Auer v. Robbins (1997)), under which a court typically defers to an agency’s understanding of its own regulations. Kisor leans heavily, in its analysis, both on Chevron itself and on later opinions about the Chevron test, such as United States v. Mead Corp. (2001) and City of Arlington v. FCC (2013). So it is hardly surprising that Kisor makes several points that are salient here.

Start with what Kisor says about when Chevron comes into play at all. Chevron and Auer stand, Kisor reminds us, on a presumption that Congress generally wants expert agencies, not generalist courts, to make the policy judgments needed to fill in the details of a statutory scheme. It follows, Kisor remarks, that if an “agency’s interpretation” does not “in some way implicate its substantive expertise,” there’s no reason to defer to it.

When is an agency not wielding its “substantive expertise”? One example Kisor offers is when the disputed statutory language is derived from the common law. Parsing common-law terms, Kisor notes, “fall[s] more naturally into a judge’s bailiwick.”

This is bad news for the FTC. Think about it. When it put the words “unfair methods of competition” in Section 5, could Congress have meant “unfair” in the cosmic sense? Could it have intended to grant a bunch of unelected administrators a roving power to “do justice”? Of course not. No, the phrase “unfair methods of competition” descends from the narrow, technical, humdrum common-law concept of “unfair competition.”

The FTC has no special insight into what the term “unfair competition” meant at common law. Figuring that out is judges’ work. That Congress fiddled with things a little does not change this conclusion. Adding the words “methods of” does not rip the words “unfair competition” from their common-law roots and launch them into a semantic void.

It remains the case—as Justice Felix Frankfurter put it—that when “a word is obviously transplanted” from the common law, it “brings the old soil with it.” And an agency, Kisor confirms, “has no comparative expertise” at digging around in that particular dirt.

The FTC lacks expertise not only in understanding the common law, but even in understanding noncompete agreements. Dissenting from the issuance of the NPRM, (soon to be former) Commissioner Christine S. Wilson observed that the agency has no experience prosecuting employee noncompete clauses under Section 5. 

So the FTC cannot get past Chevron step zero. Nor, if it somehow crawled its way there, could the agency satisfy Chevron step one. Chevron directs a court examining a text for a clear meaning to employ the “traditional tools” of construction. Kisor stresses that a court must exhaust those tools. It must “carefully consider the text, structure, history, and purpose” of the regulation (under Auer) or statute (under Chevron). “Doing so,” Kisor assures us, “will resolve many seeming ambiguities.”

The text, structure, history, and purpose of Section 5 make clear that noncompete agreements are not an unfair method of competition. Certainly not as a species. “‘Unfair competition,’ as known to the common law,” the Supreme Court explained in Schechter Poultry v. United States (1935), was “a limited concept.” It was “predicated of acts which lie outside the ordinary course of business and are tainted by fraud, or coercion, or conduct otherwise prohibited by law.” Under the common law, noncompete agreements were generally legal—so we know that they did not constitute “unfair competition.”

And although Section 5 bars “unfair methods of competition,” the altered wording still doesn’t capture conduct that isn’t unfair. The Court has said that the meaning of the phrase is properly “left to judicial determination as controversies arise.” It is to be fleshed out “in particular instances, upon evidence, in the light of particular competitive conditions.” The clear import of these statements is that the FTC may not impose broad prohibitions that sweep in legitimate business conduct.

Yet a blanket ban on noncompete clauses would inevitably erase at least some agreements that are not only not wrongful, but beneficial. “There is evidence,” the FTC itself concedes, “that non-compete clauses increase employee training and other forms of investment.” Under the plain meaning of Section 5, the FTC can’t condemn a practice altogether just because it is sometimes, or even often, unfair. It must, at the very least, do the work of sorting out, “in particular instances,” when the costs outweigh the benefits.

By definition, failure at Chevron step one entails failure at Chevron step two. It is worth noting, though, that even if the FTC reached the final stage, and even if, once there, it convinced a court to disregard the common law and read the word “unfair” in a colloquial sense, it would still not be home free. “Under Chevron,” Kisor states, “the agency’s reading must fall within the bounds of reasonable interpretation.” This requirement is important in light of the “far-reaching influence of agencies and the opportunities such power carries for abuse.”

Even if one assumes (in the teeth of Article I) that Congress could hand an independent agency unfettered authority to stamp out “unfairness” in the economy, that does not mean that Congress, in fact, did so in Section 5. Why did Congress write Section 5 as it did? Largely because it wanted to give the FTC the flexibility to deal with new and unexpected forms of wrongdoing as they arise. As one congressional report concluded, “it is impossible to frame definitions which embrace all unfair practices” in advance. “The purpose of Congress,” wrote Justice Louis Brandeis (who had a hand in drafting the law), was to ensure that the FTC can “prevent” an emergent “unfair method” from taking hold as a “general practice.”

Noncompete agreements are not some startling innovation. They’ve been around—and allowed—for hundreds of years. If Congress simply wanted to ensure that the FTC can nip new threats to competition in the bud, the NPRM is not a proper use of the FTC’s power under Section 5.

In any event, what Congress almost certainly did not intend was to hand the FTC the capacity (as Chair Lina Khan would have it) to “shape[] the distribution of power and opportunity across our economy.” The FTC’s commissioners are not elected, and they cannot be removed (absent misconduct) by the president. They lack the democratic legitimacy or political accountability to restructure the economy.

All the same, nothing about Section 5 suggests that Congress gave the agency such awesome power. What leeway Chevron might give here, common sense takes away. The more the FTC “seeks to break new ground by enjoining otherwise legitimate practices,” a federal court of appeals once declared, “the closer must be our scrutiny upon judicial review.” It falls to the judiciary to ensure that the agency does not “undu[ly] … interfere[]” with “our country’s competitive system.”

We have come full circle. Article I and the “major questions” principle tell us that the FTC cannot use four words in Section 5 of the FTC Act to issue a rule that disrupts contractual relations, tramples federalism, and shifts around many billions of dollars in wealth. And if we march through the Chevron analysis anyway, we find that, even at Chevron step two, the statute still can’t bear the weight. Chevron deference is not a license for the FTC to ignore the separation of powers and micromanage the economy.

In our previous post on Gonzalez v. Google LLC, which will come before the U.S. Supreme Court for oral arguments Feb. 21, Kristian Stout and I argued that, while the U.S. Justice Department (DOJ) got the general analysis right (looking to Roommates.com as the framework for exceptions to the general protections of Section 230), it got the application wrong (saying that algorithmic recommendations should be excepted from immunity).

Now, after reading Google’s brief, as well as the briefs of amici on their side, it is even more clear to me that:

  1. algorithmic recommendations are protected by Section 230 immunity; and
  2. creating an exception for such algorithms would severely damage the internet as we know it.

I address these points in reverse order below.

Google on the Death of the Internet Without Algorithms

The central point that Google makes throughout its brief is that a finding that Section 230’s immunity does not extend to the use of algorithmic recommendations would have potentially catastrophic implications for the internet economy. Google and amici for respondents emphasize the ubiquity of recommendation algorithms:

Recommendation algorithms are what make it possible to find the needles in humanity’s largest haystack. The result of these algorithms is unprecedented access to knowledge, from the lifesaving (“how to perform CPR”) to the mundane (“best pizza near me”). Google Search uses algorithms to recommend top search results. YouTube uses algorithms to share everything from cat videos to Heimlich-maneuver tutorials, algebra problem-solving guides, and opera performances. Services from Yelp to Etsy use algorithms to organize millions of user reviews and ratings, fueling global commerce. And individual users “like” and “share” content millions of times every day. – Brief for Respondent Google, LLC at 2.

The “recommendations” they challenge are implicit, based simply on the manner in which YouTube organizes and displays the multitude of third-party content on its site to help users identify content that is of likely interest to them. But it is impossible to operate an online service without “recommending” content in that sense, just as it is impossible to edit an anthology without “recommending” the story that comes first in the volume. Indeed, since the dawn of the internet, virtually every online service—from news, e-commerce, travel, weather, finance, politics, entertainment, cooking, and sports sites, to government, reference, and educational sites, along with search engines—has had to highlight certain content among the thousands or millions of articles, photographs, videos, reviews, or comments it hosts to help users identify what may be most relevant. Given the sheer volume of content on the internet, efforts to organize, rank, and display content in ways that are useful and attractive to users are indispensable. As a result, exposing online services to liability for the “recommendations” inherent in those organizational choices would expose them to liability for third-party content virtually all the time. – Amicus Brief for Meta Platforms at 3-4.

In other words, if Section 230 were limited in the way that the plaintiffs (and the DOJ) seek, internet platforms’ ability to offer users useful information would be strongly attenuated, if not completely impaired. The resulting legal exposure would lead inexorably to far less of the kinds of algorithmic recommendations upon which the modern internet is built.

This is, in part, why we weren’t able to fully endorse the DOJ’s brief in our previous post. The DOJ’s brief simply goes too far. It would be unreasonable to establish as a categorical rule that use of the ubiquitous auto-discovery algorithms that power so much of the internet would strip a platform of Section 230 protection. The general rule advanced by the DOJ’s brief would have detrimental and far-ranging implications.

Amici on Publishing and Section 230(f)(4)

Google and the amici also make a strong case that algorithmic recommendations are inseparable from publishing. They have a strong textual hook in Section 230(f)(4), which explicitly protects “enabling tools that… filter, screen, allow, or disallow content; pick, choose, analyze, or digest content; or transmit, receive, display, forward, cache, search, subset, organize, reorganize, or translate content.”

As the amicus brief from a group of internet-law scholars—including my International Center for Law & Economics colleagues Geoffrey Manne and Gus Hurwitz—put it:

Section 230’s text should decide this case. Section 230(c)(1) immunizes the user or provider of an “interactive computer service” from being “treated as the publisher or speaker” of information “provided by another information content provider.” And, as Section 230(f)’s definitions make clear, Congress understood the term “interactive computer service” to include services that “filter,” “screen,” “pick, choose, analyze,” “display, search, subset, organize,” or “reorganize” third-party content. Automated recommendations perform exactly those functions, and are therefore within the express scope of Section 230’s text. – Amicus Brief of Internet Law Scholars at 3-4.

In other words, Section 230 protects not just the conveyance of information, but how that information is displayed. Algorithmic recommendations are a subset of those display tools that allow users to find what they are looking for with ease. Section 230 can’t be reasonably read to exclude them.
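To see what those display tools do in practice, consider a deliberately simplified sketch (hypothetical code, not any actual platform’s system). The function below filters, picks, and organizes, but every word it returns was authored by a third party:

```python
from dataclasses import dataclass, field

@dataclass
class Item:
    author: str                          # the third-party "information content provider"
    text: str                            # content created entirely by that third party
    tags: set = field(default_factory=set)

def recommend(items, user_interests, top_n=3):
    """Pick, choose, and organize third-party items by overlap with a user's
    interests; the service curates the display but creates none of the text."""
    ranked = sorted(items, key=lambda it: len(it.tags & user_interests), reverse=True)
    return ranked[:top_n]

feed = recommend(
    [Item("user_a", "How to perform CPR", {"health", "howto"}),
     Item("user_b", "Best pizza near me", {"food", "local"}),
     Item("user_c", "Algebra problem-solving guide", {"math", "howto"})],
    user_interests={"howto"},
)
print([item.text for item in feed])  # third-party content, reordered but unaltered
```

On this reading, the “Up Next”-style recommendations at issue in Gonzalez are just an automated instance of these organizational choices.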

Why This Isn’t Really (Just) a Roommates.com Case

This is where the DOJ’s amicus brief (and our previous analysis) misses the point. This is not strictly a Roommates.com case. The case actually turns on whether algorithmic recommendations are separable from the publication of third-party content, rather than on whether they are design choices akin to what was occurring in that case.

For instance, in our previous post, we argued that:

[T]he DOJ argument then moves onto thinner ice. The DOJ believes that the 230 liability shield in Gonzalez depends on whether an automated “recommendation” rises to the level of development or creation, as the design of filtering criteria in Roommates.com did.

While we thought the DOJ went too far in differentiating algorithmic recommendations from other uses of algorithms, we gave it too much credit in applying the Roommates.com analysis. Section 230 was meant to immunize filtering tools, so long as the information provided is from third parties. Algorithmic recommendations—like the type at issue with YouTube’s “Up Next” feature—are less like the conduct in Roommates.com and much more like a search engine.

The DOJ did, however, have a point regarding algorithmic tools in that they may—like any other tool a platform might use—be employed in a way that transforms the automated promotion into a direct endorsement or original publication. For instance, it’s possible to use algorithms to intentionally amplify certain kinds of content in such a way as to cultivate more of that content.

That’s, after all, what was at the heart of Roommates.com. The site was designed to elicit responses from users that violated the law. Algorithms can do that, but as we observed previously, and as the many amici in Gonzalez observe, there is nothing inherent to the operation of algorithms that match users with content that makes their use categorically incompatible with Section 230’s protections.

Conclusion

After looking at the textual and policy arguments forwarded by both sides in Gonzalez, it appears that Google and amici for respondents have the better of it. As several amici argued, to the extent there are good reasons to reform Section 230, Congress should take the lead. The Supreme Court shouldn’t take this case as an opportunity to significantly change the consensus of the appellate courts on the broad protections of Section 230 immunity.

Late next month, the U.S. Supreme Court will hear oral arguments in Gonzalez v. Google LLC, a case that has drawn significant attention and many bad takes regarding how Section 230 of the Communications Decency Act should be interpreted. Enacted in the mid-1990s, when the Internet as we know it was still in its infancy, Section 230 has grown into a law that offers online platforms a fairly comprehensive shield against liability for the content that third parties post to their services. But the law has also come increasingly under fire, from both the political left and the right.

At issue in Gonzalez is whether Section 230(c)(1) immunizes Google from a set of claims brought under the Antiterrorism Act of 1990 (ATA). The petitioners are relatives of Nohemi Gonzalez, an American citizen murdered in a 2015 terrorist attack in Paris. They allege that Google, through YouTube, is liable under the ATA for providing assistance to ISIS in four main ways:

  1. Google allowed ISIS to use YouTube to disseminate videos and messages, thereby recruiting and radicalizing terrorists responsible for the murder.
  2. Google failed to take adequate steps to take down videos and accounts and keep them down.
  3. Google recommends others’ videos, through both subscriptions and algorithms.
  4. Google monetizes this content through its AdSense service, with ISIS-affiliated users receiving revenue. 

The 9th U.S. Circuit Court of Appeals dismissed all of the non-revenue-sharing claims as barred by Section 230(c)(1), but allowed the revenue-sharing claim to go forward. 

Highlights of DOJ’s Brief

In an amicus brief, the U.S. Justice Department (DOJ) ultimately asks the Court to vacate the 9th Circuit’s judgment regarding those claims that are based on YouTube’s alleged targeted recommendations of ISIS content. But the DOJ also rejects much of the petitioners’ brief, arguing that Section 230 does rightfully apply to the rest of the claims.

The crux of the DOJ’s brief concerns when and how design choices can be outside of Section 230 immunity. The lodestar 9th Circuit case that the DOJ brief applies is 2008’s Fair Housing Council of San Fernando Valley v. Roommates.com.

As the DOJ notes, radical theories advanced by the plaintiffs and other amici would go too far in restricting Section 230 immunity based on a platform’s decisions on whether or not to block or remove user content (see, e.g., its discussion on pp. 17-21 of the merits and demerits of Justice Clarence Thomas’s Malwarebytes concurrence).  

At the same time, the DOJ’s brief notes that there is room for a reasonable interpretation of Section 230 that allows for liability to attach when online platforms behave unreasonably in their promotion of users’ content. Applying essentially the 9th Circuit’s Roommates.com standard, the DOJ argues that YouTube’s choice to amplify certain terrorist content through its recommendations algorithm is a design choice, rather than simply the hosting of third-party content, thereby removing it from the scope of Section 230 immunity.

While there is much to be said in favor of this approach, it’s important to point out that, although directionally correct, it’s not at all clear that a Roommates.com analysis should ultimately come down as the DOJ recommends in Gonzalez. More broadly, the way the DOJ structures its analysis has important implications for how we should think about the scope of Section 230 reform that attempts to balance accountability for intermediaries with avoiding undue collateral censorship.

Charting a Middle Course on Immunity

The important point on which the DOJ relies from Roommates.com is that intermediaries can be held accountable when their own conduct creates violations of the law, even if it involves third-party content. As the DOJ brief puts it:

Section 230(c)(1) protects an online platform from claims premised on its dissemination of third-party speech, but the statute does not immunize a platform’s other conduct, even if that conduct involves the solicitation or presentation of third-party content. The Ninth Circuit’s Roommates.com decision illustrates the point in the context of a website offering a roommate-matching service… As a condition of using the service, Roommates.com “require[d] each subscriber to disclose his sex, sexual orientation and whether he would bring children to a household,” and to “describe his preferences in roommates with respect to the same three criteria.” Ibid. The plaintiffs alleged that asking those questions violated housing-discrimination laws, and the court of appeals agreed that Section 230(c)(1) did not shield Roommates.com from liability for its “own acts” of “posting the questionnaire and requiring answers to it.” Id. at 1165.

Imposing liability in such circumstances does not treat online platforms as the publishers or speakers of content provided by others. Nor does it obligate them to monitor their platforms to detect objectionable postings, or compel them to choose between “suppressing controversial speech or sustaining prohibitive liability.”… Illustrating that distinction, the Roommates.com court held that although Section 230(c)(1) did not apply to the website’s discriminatory questions, it did shield the website from liability for any discriminatory third-party content that users unilaterally chose to post on the site’s “generic” “Additional Comments” section…

The DOJ proceeds from this basis to analyze what it would take for Google (via YouTube) to lose the benefit of Section 230 immunity by virtue of its own editorial actions, as opposed to its actions as a publisher (which Section 230 would still protect). For instance, are the algorithmic suggestions of videos simply neutral tools that allow users to get more of the content they desire, akin to search results? Or are the algorithmic suggestions of new videos a design choice that makes them akin to the conduct in Roommates.com?

The DOJ argues that taking steps to better display pre-existing content is not content development or creation, in and of itself. Similarly, it would be a mistake to make intermediaries liable for creating tools that can then be deployed by users:

Interactive websites invariably provide tools that enable users to create, and other users to find and engage with, information. A chatroom might supply topic headings to organize posts; a photo-sharing site might offer a feature for users to signal that they like or dislike a post; a classifieds website might enable users to add photos or maps to their listings. If such features rendered the website a co-developer of all users’ content, Section 230(c)(1) would be a dead letter.

At a high level, this is correct. Unfortunately, the DOJ argument then moves onto thinner ice. The DOJ believes that the 230 liability shield in Gonzalez depends on whether an automated “recommendation” rises to the level of development or creation, as the design of filtering criteria in Roommates.com did. Toward this end, the brief notes that:

The distinction between a recommendation and the recommended content is particularly clear when the recommendation is explicit. If YouTube had placed a selected ISIS video on a user’s homepage alongside a message stating, “You should watch this,” that message would fall outside Section 230(c)(1). Encouraging a user to watch a selected video is conduct distinct from the video’s publication (i.e., hosting). And while YouTube would be the “publisher” of the recommendation message itself, that message would not be “information provided by another information content provider.” 47 U.S.C. 230(c)(1).

An Absence of Immunity Does Not Mean a Presence of Liability

Importantly, the DOJ brief emphasizes throughout that remanding the ATA claims is not the end of the analysis—i.e., it does not mean that the plaintiffs can prove the elements. Moreover, other background law—notably, the First Amendment—can limit the application of liability to intermediaries, as well. As we put it in our paper on Section 230 reform:

It is important to again note that our reasonableness proposal doesn’t change the fact that the underlying elements in any cause of action still need to be proven. It is those underlying laws, whether civil or criminal, that would possibly hold intermediaries liable without Section 230 immunity. Thus, for example, those who complain that FOSTA/SESTA harmed sex workers by foreclosing a safe way for them to transact (illegal) business should really be focused on the underlying laws that make sex work illegal, not the exception to Section 230 immunity that FOSTA/SESTA represents. By the same token, those who assert that Section 230 improperly immunizes “conservative bias” or “misinformation” fail to recognize that, because neither of those is actually illegal (nor could they be under First Amendment law), Section 230 offers no additional immunity from liability for such conduct: There is no underlying liability from which to provide immunity in the first place.

There’s a strong likelihood that, on remand, the court will find there is no violation of the ATA at all. Section 230 immunity need not be stretched beyond all reasonable limits to protect intermediaries from hypothetical harms when underlying laws often don’t apply. 

Conclusion

To date, the contours of Section 230 largely have been determined by how courts interpret the statute. There is an emerging consensus that some courts have gone too far in extending Section 230 immunity to intermediaries. The DOJ’s brief is directionally correct, but the Court should not adopt it wholesale. More needs to be done to ensure that the particular facts of Gonzalez are not used to gut Section 230 more generally.

The Federal Trade Commission’s (FTC) Jan. 5 “Notice of Proposed Rulemaking on Non-Compete Clauses” (NPRMNCC) is the first substantive FTC Act Section 6(g) “unfair methods of competition” rulemaking initiative following the release of the FTC’s November 2022 Section 5 Unfair Methods of Competition Policy Statement. Any final rule based on the NPRMNCC stands virtually no chance of survival before the courts. What’s more, this FTC initiative threatens to have a major negative economic-policy impact and poses an institutional threat to the Commission itself. Accordingly, the NPRMNCC should be withdrawn or, as a “second worst” option, substantially pared back and recast.

The NPRMNCC is succinctly described, and its legal risks ably summarized, in a recent commentary by Gibson Dunn attorneys. The proposal is sweeping in its scope. The NPRMNCC states that it “would, among other things, provide that it is an unfair method of competition for an employer to enter into or attempt to enter into a non-compete clause with a worker; to maintain with a worker a non-compete clause; or, under certain circumstances, to represent to a worker that the worker is subject to a non-compete clause.”

The Gibson Dunn commentary adds that it “would require employers to rescind all existing non-compete provisions within 180 days of publication of the final rule, and to provide current and former employees notice of the rescission.‎ If employers comply with these two requirements, the rule would provide a safe harbor from enforcement.”‎

As I have explained previously, any FTC Section 6(g) rulemaking is likely to fail as a matter of law. Specifically, the structure of the FTC Act indicates that Section 6(g) is best understood as authorizing procedural regulations, not substantive rules. What’s more, Section 6(g) rules raise serious questions under the U.S. Supreme Court’s nondelegation and major questions doctrines (given the breadth and ill-defined nature of “unfair methods of competition”) and under administrative law (very broad unfair methods of competition rules may be deemed “arbitrary and capricious” and raise due process concerns). The cumulative weight of these legal concerns “makes it highly improbable that substantive UMC rules will ultimately be upheld.”

The legal concerns raised by Section 6(g) rulemaking are particularly acute in the case of the NPRMNCC, which is exceedingly broad and deals with a topic—employment-related noncompete clauses—with which the FTC has almost no experience. FTC Commissioner Christine Wilson highlights this legal vulnerability in her dissenting statement opposing issuance of the NPRMNCC.

As Andrew Mercado and I explained in our commentary on potential FTC noncompete rulemaking: “[a] review of studies conducted in the past two decades yields no uniform, replicable results as to whether such agreements benefit or harm workers.” In a comprehensive literature review made available online at the end of 2019, FTC economist John McAdams concluded that “[t]here is little evidence on the likely effects of broad prohibitions of non-compete agreements.” McAdams also commented on the lack of knowledge regarding the effects that noncompetes may have on ultimate consumers. Given these realities, the FTC would be particularly vulnerable to having a court hold that a final noncompete rule (even assuming that it somehow surmounted other legal obstacles) lacked an adequate factual basis, and thus was arbitrary and capricious.

The poor legal case for proceeding with the NPRMNCC is rendered even weaker by the existence of robust state-law provisions concerning noncompetes in almost every state (see here for a chart comparing state laws). Differences in state jurisprudence may enable “natural experimentation,” whereby changes made to state law that differ across jurisdictions facilitate comparisons of the effects of different approaches to noncompetes. Furthermore, changes to noncompete laws in particular states that are seen to cause harm, or generate benefits, may allow “best practices” to emerge and thereby drive welfare-enhancing reforms in multiple jurisdictions.

The Gibson Dunn commentary points out that, “[a]s a practical matter, the proposed [FTC noncompete] rule would override existing non-compete requirements and practices in the vast majority of states.” Unfortunately, then, the NPRMNCC would largely do away with the potential benefits of competitive federalism in the area of noncompetes. In light of that, federal courts might well ask whether Congress meant to give the FTC preemptive authority over a legal field traditionally left to the states, merely by making a passing reference to “mak[ing] rules and regulations” in Section 6(g) of the FTC Act. Federal judges would likely conclude that the answer to this question is “no.”

Economic Policy Harms

How much economic harm could an FTC rule on noncompetes cause, if the courts almost certainly would strike it down? Plenty.

The affront to competitive federalism, which would prevent optimal noncompete legal regimes from developing (see above), could reduce the efficiency of employment contracts and harm consumer welfare. It would be exceedingly difficult (if not impossible) to measure such harms, however, because there would be no alternative “but-for” worlds with differing rules that could be studied.

The broad ban on noncompetes predictably will prevent—or at least chill—the use of noncompete clauses to protect business-property interests (including trade secrets and other intellectual-property rights) and to protect value-enhancing investments in worker training. (See here for a 2016 U.S. Treasury Department Office of Economic Policy Report that lists some of the potential benefits of noncompetes.) The NPRMNCC fails to account for those and other efficiencies, which may be key to value-generating business-process improvements that help drive dynamic economic growth. Once again, however, it would be difficult to demonstrate the nature or extent of such foregone benefits, in the absence of “but-for” world comparisons.

Business-litigation costs would also inevitably arise, as uncertainties in the language of a final noncompete rule were worked out in court (prior to the rule’s legal demise). The opportunity cost of firm resources directed toward rule-related issues, rather than to business-improvement activities, could be substantial. The opportunity cost of directing FTC resources to wasteful noncompete-related rulemaking work, rather than potential welfare-enhancing endeavors (such as anti-fraud enforcement activity), also should not be neglected.

Finally, the substantial error costs that would attend designing and seeking to enforce a final FTC noncompete rule, and the affront to the rule of law that would result from creating a substantial new gap between FTC and U.S. Justice Department competition-enforcement regimes, merit note (see here for my discussion of these costs in the general context of UMC rulemaking).

Conclusion

What, then, should the FTC do? It should withdraw the NPRMNCC.

If the FTC is concerned about the effects of noncompete clauses, it should commission appropriate economic research, and perhaps conduct targeted FTC Act Section 6(b) studies directed at noncompetes (focused on industries where noncompetes are common or ubiquitous). In light of that research, it might be in a position to address legal policy toward noncompetes in competition advocacy before the states, or in testimony before Congress.

If the FTC still wishes to engage in some rulemaking directed at noncompete clauses, it should consider a targeted FTC Act Section 18 consumer-protection rulemaking (see my discussion of this possibility, here). Unlike Section 6(g), the legality of Section 18 substantive rulemaking (which is directed at “unfair or deceptive acts or practices”) is well-established. Categorizing noncompete-clause-related practices as “deceptive” is plainly a nonstarter, so the Commission would have to base its rulemaking on defining and condemning specified “unfair acts or practices.”

Section 5(n) of the FTC Act specifies that the Commission may not declare an act or practice to be unfair unless it “causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition.” This is a cost-benefit test that plainly does not justify a general ban on noncompetes, based on the previous discussion. It probably could, however, justify a properly crafted narrower rule, such as a requirement that an employer notify its employees of a noncompete agreement before they accept a job offer (see my analysis here).  

Should the FTC nonetheless charge forward and release a final competition rule based on the NPRMNCC, it will face serious negative institutional consequences. In the previous Congress, Sens. Mike Lee (R-Utah) and Chuck Grassley (R-Iowa) introduced legislation that would strip the FTC of its antitrust authority (leaving all federal antitrust enforcement in DOJ hands). Such legislation could gain traction if the FTC were perceived as engaging in massive institutional overreach. An unprecedented Commission effort to regulate one aspect of labor contracts (noncompete clauses) nationwide surely could be viewed by Congress as a prime example of such overreach. The FTC should keep that in mind if it values maintaining its longstanding role in American antitrust-policy development and enforcement.

[The final post in Truth on the Market’s digital symposium “FTC Rulemaking on Unfair Methods of Competition” comes from Joshua Wright, the executive director of the Global Antitrust Institute at George Mason University and the architect, in his time as a member of the Federal Trade Commission, of the FTC’s prior 2015 UMC statement. You can find all of the posts in this series at the symposium page here.]

The Federal Trade Commission’s (FTC) recently released Policy Statement on unfair methods of competition (UMC) has a number of profound problems, which I will detail below. But first, some praise: if the FTC does indeed plan to bring many lawsuits challenging conduct as a standalone UMC (I am dubious it will), then the public ought to have notice about the change. Providing such notice is good government, and the new Statement surely provides that notice. And providing notice in this way was costly to the FTC: the contents of the statement make surviving judicial review harder, not easier (I will explain my reasons for this view below). Incurring that cost to provide notice deserves some praise.

Now onto the problems. I see four major ones.

First, the Statement seems to exist in a fantasy world; the FTC majority appears to wish away the past problems associated with UMC enforcement. Those problems have not, in fact, gone away and pretending they don’t exist—as this Statement does—is unlikely to help the Commission’s prospects for success in court.

Second, the Statement provides no guidance whatsoever about how a potential respondent might avoid UMC liability, which stands in sharp contrast to other statements and guidance documents issued by the Commission.

Third, the entire foundation of the statement is that, in 1914, Congress intended the FTC Act to have broader coverage than the Sherman Act. Fair enough. But the coverage of the Sherman Act isn’t fixed to what the Supreme Court thought it was in 1914: It’s a moving target that, in fact, has moved dramatically over the last 108 years. Congress in 1914 could not have intended UMC to be broader than how the courts would interpret the Sherman Act in the future (whether that future is 1918, much less 1970 or 2023).

And fourth, Congress has passed other statutes since it passed the FTC Act in 1914, one of which is the Administrative Procedure Act. The APA unambiguously and explicitly directs administrative agencies to engage in reasoned decision making. In a nutshell, this means that the actions of such agencies must be supported by substantial record evidence and can be set aside by a court on judicial review if they are arbitrary and capricious. “Congress intended to give the FTC broad authority in 1914” is not an argument to address the fact that, 32 years later, Congress also intended to limit the FTC’s authority (as well as other agencies’) by requiring reasoned decision making.

Each of these problems on its own would be enough to doom almost any case the Commission might bring to apply the statement. Together, they are a death knell.

A Record of Failure

As I have explained elsewhere, there are a number of reasons the FTC has pursued few standalone UMC cases in recent decades. The late-1970s effort to reinvigorate UMC enforcement via bringing cases was a total failure: the Commission did not lose the game on a last-second buzzer beater; it got blown out by 40 points. According to William Kovacic and Mark Winerman, in each of those UMC cases, “the tribunal recognized that Section 5 allows the FTC to challenge behavior beyond the reach of the other antitrust laws. In each instance, the court found that the Commission had failed to make a compelling case for condemning the conduct in question.”

Since these losses, the Commission hasn’t successfully litigated a UMC case in federal court. This, in my view, is because of a (very plausible) concern that, when presented with such a case, Article III courts would either define the Commission’s UMC authority on their own terms—i.e., restricting the Commission’s authority—or ultimately decide that the space beyond the Sherman Act that Congress in 1914 intended Section 5 to occupy exists only in theory and not in the real world, and declare the two statutes functionally equivalent. Those reasons—and not Chair Lina Khan’s preferred view that the Commission has been feckless, weak, or captured by special interests since 1981—explain why Section 5 has been used so sparingly over the last 40 years (and mostly just to extract settlements from parties under investigation). The majority’s effort to put all its eggs in the “1914 legislative history” basket simply ignores this reality.

Undefined Harms

The second problem is evident when one compares this statement with other policy statements or guidance documents issued by the Commission over the years. On the antitrust side of the house, these include the Horizontal Merger Guidelines, the (now-withdrawn by the FTC) Vertical Merger Guidelines, the Guidelines for Collaboration Among Competitors, the IP Licensing Guidelines, the Health Care Policy Statement, and the Antitrust Guidance for Human Resources Professionals.

Each of these documents is designed (at least in part) to help market participants understand what conduct might or might not violate one or more laws enforced by the FTC, and for that reason, each document provides specific examples of conduct that would violate the law, and conduct that would not.

The new UMC Policy Statement provides no such examples. Instead, we are left with the conclusory statement that, if the Commission can characterize the conduct as “coercive, exploitative, collusive, abusive, deceptive, predatory, or involve[s] the use of economic power” or “otherwise restrictive or exclusionary,” then the conduct can be a UMC.

What does this salad of words mean? I have no idea, and the Commission doesn’t even bother to try to define them. If a lawyer is asked, “based upon the Commission’s new UMC Statement, what conduct might be a violation?” the only defensible advice to give is “anything three Commissioners think.”

Ahistorical Jurisprudence

The third problem is the majority’s fictitious belief that Sherman Act jurisprudence is frozen in 1914—the year Congress passed the FTC Act. The Statement asserts that “Congress passed the FTC Act to push back against the judiciary’s open-ended rule of reason for analyzing Sherman Act claims” and cites the Supreme Court’s 1911 opinion in Standard Oil Co. of New Jersey v. United States.

It’s easy to understand why Congress in 1914 was dissatisfied with the opinion in Standard Oil; reading Standard Oil in 2022 is also a dissatisfying experience. The opinion takes up 106 pages in the U.S. Reports; individual paragraphs routinely run three pages; it meanders between analyzing Section 1 and Section 2 of the Sherman Act without telling the reader; and it is generally inscrutable. I have taught antitrust for almost 20 years and, though we cover Standard Oil because of its historical importance, I don’t teach the opinion, because it does not help modern students understand how to practice antitrust law.

This stands in sharp contrast to Justice Louis Brandeis’s opinion in Chicago Board of Trade (issued four years after Congress passed the FTC Act), which I do teach consistently, because it articulates the beginning of the modern rule of reason. Although the majority of the FTC is on solid ground when it points out that Congress in 1914 intended the FTC’s UMC authority to have broader coverage than the Sherman Act, the coverage of the Sherman Act has changed since 1914.

This point is well-known, of course: Kovacic and Winerman explain that “[p]robably the most important” reason “Section 5 has played so small a role in the development of U.S. competition policy principles” “is that the Sherman Act proved to be a far more flexible tool for setting antitrust rules than Congress expected in the early 20th century.” The 10 pages in the Statement devoted to century-old legislative history just pretend like Sherman Act jurisprudence hasn’t changed in that same amount of time. The federal courts are going to see right through that.

What About the APA?

The fourth problem with the majority’s trip back to 1914 is that, since then, Congress has passed other statutes limiting the Commission’s authority. The most prominent of these is the Administrative Procedure Act, which was passed in 1946 (for those counting, 1946 is more than 30 years after 1914).

There are hundreds of opinions interpreting the APA, and indeed, an entire body of law has developed pursuant to those cases. These cases produce many lessons, but one of them is that it is not enough for an agency to have the legal authority to act: “Congress gave me this power. I am exercising this power. Therefore, my exercise of this power is lawful,” is, by definition, insufficient justification under the APA. An agency has the obligation to engage in reasoned decision making and must base its actions on substantial evidence. Its enforcement efforts will be set aside on judicial review if they are arbitrary and capricious.

By failing to explain how a company can avoid UMC liability—other than by avoiding conduct that is “coercive, exploitative, collusive, abusive, deceptive, predatory, or involve[s] the use of economic power” or “otherwise restrictive or exclusionary,” without defining those terms—the majority is basically shouting to the federal courts that its UMC enforcement program is going to be arbitrary and capricious. That’s going to fail for many reasons. A simple one is that 1946 is later in time than 1914, which is why the Commission putting all its eggs in the 1914 legislative history basket is not going to work once its actions are challenged in federal court.

Conclusion

These problems with the majority’s statement are so significant, so obvious, and so unlikely to be overcome that I don’t anticipate the Commission will pursue many UMC enforcement actions. Instead, I suspect UMC rulemaking is on the agenda, which has its own set of problems (not to mention that the 1914 legislative history points away from Congress intending the Commission to have legislative rulemaking authority). Ultimately, I think the value of this statement is symbolic for Chair Khan and her supporters.

When one considers the record of the Khan Commission—many policy statements, few enforcement actions, and even fewer successful enforcement actions—it all makes more sense. The audience for this Statement is Chair Khan’s friends working on Capitol Hill and at think tanks, as well as her followers on Twitter. They might be impressed by it. The audience she should be concerned about is Article III judges, who surely won’t be. 

[This post is a contribution to Truth on the Market’s continuing digital symposium “FTC Rulemaking on Unfair Methods of Competition.” You can find other posts at the symposium page here. Truth on the Market also invites academics, practitioners, and other antitrust/regulation commentators to send us 1,500-4,000 word responses for potential inclusion in the symposium.]

The Federal Trade Commission’s (FTC) Nov. 10 Policy Statement Regarding the Scope of Unfair Methods of Competition Under Section 5 of the Federal Trade Commission Act—adopted by a 3-1 vote, with Commissioner Christine Wilson issuing a dissenting statement—holds out the prospect of dramatic new enforcement initiatives going far beyond anything the FTC has done in the past. Of particular note, the statement abandons the antitrust “rule of reason,” rejects the “consumer welfare standard” that has long guided FTC competition cases, rejects economic analysis, rejects relevant precedent, misleadingly discusses legislative history, and cites inapposite and dated case law.

And what is the statement’s aim?  As Commissioner Wilson aptly puts it, the statement “announces that the Commission has the authority summarily to condemn essentially any business conduct it finds distasteful.” This sweeping claim, which extends far beyond the scope of prior Commission pronouncements, might be viewed as mere puffery with no real substantive effect: “a tale told by an idiot, full of sound and fury, signifying nothing.”

Various scholarly commentators have already explored the legal and policy shortcomings of this misbegotten statement (see, for example, here, here, here, here, here, and here). Suffice it to say there is general agreement that, as Gus Hurwitz explains, the statement “is non-precedential and lacks the force of law.”

The statement’s almost certain lack of legal effect, however, does not mean it is of no consequence. Businesses are harmed by legal risk, even if they are eventually likely to prevail in court. Markets react negatively to antitrust lawsuits, and thus firms may be expected to shy away from efficient profitable behavior that may draw the FTC’s ire. The resources firms redirect to less-efficient conduct impose costs on businesses and ultimately consumers. (And when meritless FTC lawsuits still come, wasteful litigation-related costs will be coupled with unwarranted reputational harm to businesses.)

Moreover, as Wilson points out, uncertainty about what the Commission may characterize as unfair “does not allow businesses to structure their conduct to avoid possible liability. . . . [T]he Policy Statement . . . significantly increases uncertainty for businesses[,] which . . . are left with no navigational tools to map the boundaries of lawful and unlawful conduct.” This will further disincentivize new and innovative (and easily misunderstood) business initiatives. In the perhaps-vain hope that a Commission majority will take note of these harms and have second thoughts about retention of the statement, I will briefly summarize the legal case against the statement’s effectiveness. The FTC actually would be better able to “push the Section 5 envelope” a bit through some carefully tailored, innovative enforcement actions if it could jettison the legal baggage that the statement represents. To understand why, a brief review of FTC competition-rulemaking and competition-enforcement authority is warranted.

FTC Competition Rulemaking

As I and others have written at great length (see, for examples, this compilation of essays on FTC rulemaking published by Concurrences), the case for substantive FTC competition rulemaking under Section 6(g) of the FTC Act is exceedingly weak. In particular (see my July 2022 Truth on the Market commentary):

First, the “nondelegation doctrine” suggests that, under section 6(g), Congress did not confer on the FTC the specific statutory authority required to issue rules that address particular competitive practices.

Second, principles of statutory construction strongly indicate that the FTC’s general statutory provision dealing with rulemaking refers to procedural rules of organization, not substantive rules bearing on competition.

Third, even assuming that proposed competition rules survived these initial hurdles, principles of administrative law would raise the risk that competition rules would be struck down as “arbitrary and capricious.”

Fourth, there is a substantial possibility that courts would not defer to the FTC’s construction through rulemaking of its “unfair methods of competition” as authorizing the condemnation of specific competitive practices.

The 2022 statement raises these four problems in spades.

First, the Supreme Court has stated that the non-delegation doctrine requires that a statutory delegation must be supported by an “intelligible principle” guiding its application. There is no such principle that may be drawn from the statement, which emphasizes that unfair business conduct “may be coercive, exploitative, collusive, abusive, deceptive, predatory, or involve the use of economic power of a similar nature.” The conduct also must tend “to negatively affect competitive conditions – whether by affecting consumers, workers, or other market participants.” Those descriptions are so broad and all-encompassing that they are the antithesis of an “intelligible principle.”

Second, the passing nod to rulemaking referenced in Section 6(g) is best understood as an aid to FTC processes and investigations, not a source of substantive policymaking. The Supreme Court’s unanimous April 2021 decision in AMG Capital Management v. FTC (holding that the FTC could not obtain equitable monetary relief under its authority to seek injunctions) embodies a reluctance to read general, non-specific language as conferring broad substantive powers on the FTC. This interpretive approach is in line with other Supreme Court case law that rejects finding “elephants in mouseholes.” While multiple federal courts had upheld the FTC’s authority to obtain monetary relief under its injunctive authority prior to its loss in the AMG case, only one nearly 50-year-old decision, National Petroleum Refiners, supports substantive competition-rulemaking authority, and its reasoning is badly dated. Nothing in the 2022 statement makes a convincing case for giving substantive import to Section 6(g).

Third, given the extremely vague terms used to describe unfair methods of competition in the 2022 statement (see the first point, above), any effort to invoke them as a source of authority to define new categories of competition-related violations would be sure to raise claims of agency arbitrariness and capriciousness under the Administrative Procedure Act (APA). Admittedly, the “arbitrary and capricious” review standard “has gone through numerous cycles since the enactment of the APA” and currently is subject to some uncertainty. Nevertheless, the statement’s untrammeled breadth and lack of clear definitions for unfair competitive conduct suggest that courts would likely employ “hard look” review, which would make it relatively easy for novel Section 6(g) rules to be deemed arbitrary (especially in light of the skepticism of broad FTC claims of authority that is implicit in the Supreme Court’s unanimous AMG holding).

Fourth, given the economywide breadth of the phrase “unfair methods of competition,” it is quite possible (in fact, probably quite likely) that the Supreme Court would invoke the “major questions doctrine” and hold that unfair methods of competition rulemaking is “too important” to be left to the FTC. Under this increasingly invoked doctrine, “the Supreme Court has rejected agency claims of regulatory authority when (1) the underlying claim of authority concerns an issue of vast ‘economic and political significance,’ and (2) Congress has not clearly empowered the agency with authority over the issue.”

The fact that the 2022 statement plainly asserts vast authority to condemn a wide range of economically significant practices strengthens the already-strong case for condemning Section 5 competition rulemaking under this doctrine. Application of the doctrine would render moot the question of whether Section 6(g) rules would receive any Chevron deference. In any event, based on the 2022 statement’s flouting of modern antitrust principles, including such core principles as consumer harm, efficiencies, and economic analysis, it appears unlikely that courts would accord such deference to subsequent Section 6(g) rules. As Gus Hurwitz recently explained:

Administrative antitrust is a preferred vehicle for administering antitrust law, not for changing it. Should the FTC use its power aggressively, in ways that disrupt longstanding antitrust principles or seem more grounded in policy better created by Congress, it is likely to find itself on the losing side of the judicial opinion.

FTC Competition-Enforcement Authority

In addition to Section 6(g) competition-rulemaking initiatives, the 2022 statement, of course, aims to inform FTC Act Section 5(a) “unfair methods of competition” (UMC) enforcement actions. The FTC could bring a UMC suit before its own administrative tribunal or, in the alternative, seek to enjoin an alleged unfair method of competition in federal district court, pursuant to its authority under Section 13(b) of the FTC Act. The tenor of the 2022 statement undermines, rather than enhances, the likelihood that the FTC will succeed in “standalone Section 5(a)” lawsuits that challenge conduct falling beyond the boundaries of the Sherman and Clayton Antitrust Acts.

In a June 2019 FTC report to Congress on using standalone Section 5 cases to combat high pharma prices, the FTC explained:

[C]ourts have confirmed that the unilateral exercise of lawfully acquired market power does not violate the antitrust laws. Therefore, the attempted use of standalone Section 5 to address high prices, untethered from accepted theories of antitrust liability under the Sherman Act, is unlikely to find success in the courts.

There have been no jurisprudential changes since 2019 to suggest that a UMC suit challenging the exploitation of lawfully obtained market power by raising prices is likely to find judicial favor. It follows, a fortiori (legalese that I seldom have the opportunity to trot out), that the more “far out” standalone suits implied by the statement’s analysis would likely generate embarrassing FTC judicial losses.

Applying three of the four principles assessed in the analysis of FTC competition rulemaking (the second principle, referring to statutory authority for rulemaking, is inapplicable), the negative influence of the statement on FTC litigation outcomes is laid bare.

First, as is the case with rules, the unconstrained laundry list of “unfair” business practices fails to produce an “intelligible principle” guiding the FTC’s exercise of enforcement discretion. As such, courts could well conclude that, if the statement is to be taken seriously, the non-delegation doctrine applies, and the FTC does not possess delegated UMC authority. Even if such authority were found to have been properly delegated, some courts might separately conclude, on due process grounds, that the UMC prohibition is “void for vagueness” and therefore cannot support an enforcement action. (While the “void for vagueness” doctrine is controversial, related attacks on statutes based on “impossibility of compliance” may have a more solid jurisprudential footing, particularly in the case of civil statutes (see here). The breadth and uncertainty of the statement’s references to disfavored conduct suggests “impossibility of compliance” as a possible alternative critique of novel Section 5 competition cases.) These concerns also apply equally to possible FTC Section 13(b) injunctive actions filed in federal district court.

Second, there is a not insubstantial risk that an appeals court would hold that a final Section 5 competition-enforcement decision by the Commission would be “arbitrary and capricious” if it dealt with behavior far outside the scope of the Sherman or Clayton Acts, based on vague policy pronouncements found in the 2022 statement.

Third, and of greatest risk to FTC litigation prospects, it is likely that appeals courts (and federal district courts in Section 13(b) injunction cases) would give no deference to new far-reaching non-antitrust-based theories alluded to in the statement. As discussed above, this could be based on invocation of the major questions doctrine or, separately, on the (likely) failure to accord Chevron deference to theories that are far removed from recognized antitrust causes of action under modern jurisprudence.

What Should the FTC Do About the Statement?

In sum, the startling breadth and absence of well-defined boundaries that plague the statement’s discussion of potential Section 5 UMC violations mean that the statement’s issuance materially worsens the FTC’s future litigation prospects—both in defending UMC rulemakings and in seeking to affirm case-specific Commission findings of UMC violations.

What, then, should the FTC do?

It should, put simply, withdraw the 2022 statement and craft a new UMC policy statement (NPS) that avoids the major pitfalls inherent in the statement. The NPS should carefully delineate the boundaries of standalone UMC rulemakings and cases, so as (1) to minimize uncertainty in application; and (2) to harmonize UMC actions with the pro-consumer welfare goal (as enunciated by the Supreme Court) of the antitrust laws. In drafting the NPS, the FTC would do well to be mindful of the part of Commissioner Wilson’s dissenting statement that highlights the deficiencies in the 2022 statement that detract from its persuasiveness to courts:

First, . . . the Policy Statement does not provide clear guidance to businesses seeking to comply with the law.

Second, the Policy Statement does not establish an approach for the term “unfair” in the competition context that matches the economic and analytical rigor that Commission policy offers for the same term, “unfair,” in the consumer protection context.

Third, the Policy Statement does not provide a framework that will result in credible enforcement. Instead, Commission actions will be subject to the vicissitudes of prevailing political winds.

Fourth, the Policy Statement does not address the legislative history that both demands economic content for the term “unfair” and cautions against an expansive approach to enforcing Section 5.

Consistent with avoiding these deficiencies, the NPS could carefully identify activities that are beyond the reach of the antitrust laws yet advance the procompetitive, consumer-welfare-oriented goal that is the lodestar of antitrust policy. The NPS should also be issued for public comment (as recommended by Commissioner Wilson), an action that could give it additional “due process luster” in the eyes of federal judges.

More specifically, the NPS could state that standalone UMC actions should be directed at private conduct that undermines the competitive process, but is not subject to the reach of the antitrust laws (say, because of the absence of contracts). Such actions might include, for example: (1) invitations to collude; (2) facilitating practices (“activities that tend to promote interdependence by reducing rivals’ uncertainty or diminishing incentives to deviate from a coordinated strategy”—see here); (3) exchanges of competitively sensitive information among competitors that do not qualify as Sherman Act “agreements” (see here); (4) materially deceptive conduct (lacking efficiency justifications) that likely contributes to obtaining or increasing market power, as in the standard-setting context (see here); and (5) non-compete clauses in employment agreements that lack plausible efficiency justifications (say, clauses in contracts made with low-skill, low-salary workers) or otherwise plainly undermine labor-market competition (say, clauses presented to workers only after they have signed an initial contract, creating a “take-it-or-leave-it” scenario based on asymmetric information).

After promulgating a list of examples, the NPS could explain that additional possible standalone UMC actions would be subject to the same philosophical guardrails: They would involve conduct inconsistent with competition on the merits that is likely to harm consumers and that lacks strong efficiency justifications. 

A revised NPS along the lines suggested would raise the probability of successful UMC judicial outcomes for the Commission. It would do this by strengthening the FTC’s arguments that there is an intelligible principle underlying congressional delegation; that specificity of notice is sufficient to satisfy due process (arbitrariness and capriciousness) concerns; that the Section 5 delegation is insufficiently broad to trigger the major questions doctrine; and that Chevron deference may be accorded determinations stemming from precise NPS guidance.     

In the case of rules, of course, the FTC would still face the substantial risk that a court would deem that Section 6(g) does not apply to substantive rulemakings. And it is far from clear to what extent an NPS along the lines suggested would lead courts to render more FTC-favorable rulings on non-delegation, due process, the major questions doctrine, and Chevron deference. Moreover, even if they entertained UMC suits, the courts could, of course, determine in individual cases that, on the facts, the Commission had failed to show a legal violation. (The FTC has never litigated invitation-to-collude cases, and it lost a variety of facilitating practices cases during the 1980s and 1990s; see here).

Nonetheless, if I were advising the FTC as general counsel, I would tell the commissioners that the choice is between having close to a zero chance of litigation or rulemaking success under the 2022 statement, and some chance of success (greater in the case of litigation than in rulemaking) under the NPS.

Conclusion

The FTC faces a future of total UMC litigation futility if it plows ahead under the 2022 statement. Promulgating an NPS as described would give the FTC at least some chance of success in litigating cases beyond the legal limits of the antitrust laws, assuming suggested principles and guardrails were honored. The outlook for UMC rulemaking (which turns primarily on how the courts view the structure of the FTC Act) remains rather dim, even under a carefully crafted NPS.

If the FTC decides against withdrawing the 2022 statement, it could still show some wisdom by directing more resources to competition advocacy and challenging clearly anticompetitive conduct that falls within the accepted boundaries of the antitrust laws. (Indeed, to my mind, error-cost considerations suggest that the Commission should eschew UMC causes of action that do not also constitute clear antitrust offenses.) It need not undertake almost sure-to-fail UMC initiatives just because it has published the 2022 statement.

In short, treating the 2022 statement as a purely symbolic vehicle to showcase the FTC’s fondest desires—like a new, never-to-be-driven Lamborghini that merely sits in the driveway to win the admiring glances of neighbors—could well be the optimal Commission strategy, given the zeitgeist. That assumes, of course, that the FTC cares about protecting its institutional future and (we also hope) promoting economic well-being.

[This post is a contribution to Truth on the Market’s continuing digital symposium “FTC Rulemaking on Unfair Methods of Competition.” You can find other posts at the symposium page here. Truth on the Market also invites academics, practitioners, and other antitrust/regulation commentators to send us 1,500-4,000 word responses for potential inclusion in the symposium.]

When Congress created the Federal Trade Commission (FTC) in 1914, it charged the agency with condemning “unfair methods of competition.” That’s not the language Congress used in writing America’s primary antitrust statute, the Sherman Act, which prohibits “monopoliz[ation]” and “restraint[s] of trade.”

Ever since, the question has lingered whether the FTC has the authority to go beyond the Sherman Act to condemn conduct that is unfair, but not necessarily monopolizing or trade-restraining.

According to a new policy statement, the FTC’s current leadership seems to think that the answer is “yes.” But the peculiar strand of progressivism that is currently running the agency lacks the intellectual foundation needed to tell us what conduct that is unfair but not monopolizing might actually be—and misses an opportunity to bring about an expansion of its powers that courts might actually accept.

Better to Keep the Rule of Reason but Eliminate the Monopoly-Power Requirement

The FTC’s policy statement reads like a thesaurus. What is unfair competition? Answer: conduct that is “coercive, exploitative, collusive, abusive, deceptive, predatory, or involve[s] the use of economic power of a similar nature.”

In other words: the FTC has no idea. Presumably, the agency thinks, like Justice Potter Stewart did of obscenity, it will know it when it sees it. Given the courts’ long history of humiliating the FTC by rejecting its cases, even when the agency is able to provide a highly developed account of why challenged conduct is bad for America, one shudders to think of the reception such an approach to fairness will receive.

The one really determinate proposal in the policy statement is to attack bad conduct regardless of whether the defendant has monopoly power. “Section 5 does not require a separate showing of market power or market definition when the evidence indicates that such conduct tends to negatively affect competitive conditions,” writes the FTC.

If only the agency had proposed this change alone, instead of cracking open the thesaurus to try to redefine bad conduct as well. Dropping the monopoly-power requirement would, by itself, greatly increase the amount of conduct subject to the FTC’s writ without forcing the agency to answer the metaphysical question: what is fair?

Under the present rule-of-reason approach, the courts let consumers answer the question of what constitutes bad conduct. Or, to be precise, the courts assume that the only thing consumers care about is the product—its quality and price—and they try to guess whether consumers prefer the changes that the defendant’s conduct made to products in the market. If a court thinks consumers don’t prefer the changes, then the court condemns the conduct. But only if the defendant happens to have monopoly power in the market for those products.

Preserving this approach to identifying bad conduct would let the courts continue to maintain the pretense that they are doing the bidding of consumers—a role they will no doubt prefer to deciding what is fair as an absolute matter.

The FTC can safely discard the monopoly-power requirement without disturbing the current test for bad conduct because—as I argue in a working paper and as Timothy J. Brennan has long insisted—the monopoly-power requirement is directed at the wrong level of the supply chain: the market in which the defendant has harmed competition rather than the input market through which the defendant causes harm.

Power, not just in markets but in all social life, is rooted in one thing only: control over what others need. Harm to competition depends not on how much a defendant can produce relative to competitors but on whether a defendant controls an input that competitors need, but which the defendant can deny to them.

What others need, they do not buy from the market for which they produce. They buy what they need from other markets: input markets. It follows that the only power that should matter for antitrust—the only power that determines whether a firm can harm competition—is power over input markets, not power in the market in which competition is harmed.

And yet, apart from vertical-merger and contracting cases, where an inquiry into foreclosure of inputs still occasionally makes an appearance, antitrust today never requires systematic proof of power in input markets. The efforts of economists are wasted on the proof of power at the wrong level of the supply chain.

That represents an opportunity for the FTC, which can at one stroke greatly expand its authority to encompass conduct by firms having little power in the markets in which they harm competition.

To be sure, really getting the rule of reason right would require that proof of monopoly power continue to be required, only now at the input level instead of in the downstream market in which competition is harmed. But the courts have traditionally required only informal proof of power over inputs. The FTC could probably eliminate the economics-intensive process of formal proof of monopoly power entirely, instead of merely kicking it up one level in the supply chain.

That is surely an added plus for a current leadership so fearful of computation that it was at pains in the policy statement specifically to forswear “numerical” cost-benefit analysis.

Whatever Happened to No Fault?  

The FTC’s interest in expanding enforcement by throwing off the monopoly-power requirement is a marked departure from progressive antimonopolisms of the past. Mid-20th century radicals did not attack the monopoly-power side of antitrust’s two-part test, but rather the anticompetitive-conduct side.

For more than two decades, progressives mooted establishing a “no-fault” monopolization regime in which the only requirement for liability was size. By contrast, the present movement has sought to focus on conduct, rather than size, its own anti-concentration rhetoric notwithstanding.

Anti-Economism

That might, in part, be a result of the movement’s hostility toward economics. Proof of monopoly power is a famously economics-heavy undertaking.

The origin of contemporary antimonopolism is in activism by journalists against the social-media companies that are outcompeting newspapers for ad revenue, not in academia. As a result, the best traditions of the left, which involve intellectually outflanking opponents by showing how economic theory supports progressive positions, are missing here.

Contemporary antimonopolism has no “Capital” (Karl Marx), no “Progress and Poverty” (Henry George), and no “Freedom through Law” (Robert Hale). The most recent installment in this tradition of left-wing intellectual accomplishment is “Capital in the 21st Century” (Thomas Piketty). Unfortunately for progressive antimonopolists, it states: “pure and perfect competition cannot alter . . . inequality[.]”

The contrast with the last revolution to sweep antitrust—that of the Chicago School—could not be starker. That movement was born in academia and its triumph was a triumph of ideas, however flawed they may in fact have been.

If one wishes to understand how Chicago School thinking put an end to the push for “no-fault” monopolization, one reads the Airlie House conference volume. In the conversations reproduced therein, one finds the no-faulters slowly being won over by the weight of data and theory deployed against them in support of size.

No equivalent watershed moment exists for contemporary antimonopolism, which bypassed academia (including the many progressive scholars doing excellent work therein) and went straight to the press and the agencies.

There is an ongoing debate about whether recent increases in markups result from monopolization or scarcity. It has not been resolved.

Rather than occupy economics, contemporary antimonopolists—and, perhaps, current FTC leadership—recoil from it. As one prominent antimonopolist lamented to a New York Times reporter, merger cases should be a matter of counting to four, and “[w]e don’t need economists to help us count to four.”

As the policy statement puts it: “The unfair methods of competition framework explicitly contemplates a variety of non-quantifiable harms, and justifications and purported benefits may be unquantifiable as well.”

Moralism

Contemporary antimonopolism’s focus on conduct might also be due to moralism—as reflected in the litany of synonyms for “bad” in the FTC’s policy statement.

For earlier progressives, antitrust was largely a means to an end—a way of ensuring that wages were high, consumer prices were low, and products were safe and of good quality. The fate of individual business entities within markets was of little concern, so long as these outcomes could be achieved.

What mattered were people. While contemporary antimonopolism cares about people, too, it differs from earlier antimonopolisms in that it personifies the firm.

If the firm dies, we are to be sad. If the firm is treated roughly by others, starved of resources or denied room to grow and reach its full potential, we are to be outraged, just as we would be if a child were starved. And, just as in the case of a child, we are to be outraged even if the firm would not have grown up to contribute anything of worth to society.

The irony, apparently lost on antimonopolists, is that the same personification of the firm as a rights-bearing agent, operating in other areas of law, undermines progressive policies.

The firm personified not only has a right to be treated gently by competing firms but also to be treated well by other people. But that means that people no longer come first relative to firms. When the Supreme Court holds that a business firm has a First Amendment right to influence politics, the Court takes personification of the firm to its logical extreme.

The alternative is not to make the market a morality play among firms, but to focus instead on market outcomes that matter to people—wages, prices, and product quality. We should not care whether a firm is “coerc[ed], exploit[ed], collu[ded against], abus[ed], dece[ived], predate[d], or [subjected to] economic power of a similar nature” except insofar as such treatment fails to serve people.

If one firm wishes to hire away the talent of another, for example, depriving the target of its lifeblood and killing it, so much the better if the result is better products, lower prices, or higher wages.

Antitrust can help maintain this focus on people only in part—by stopping unfair conduct that degrades products. I have argued elsewhere that the rest is for price regulation, taxation, and direct regulation to undertake.  

Can We Be Fairer and Still Give Product-Improving Conduct a Pass?

The intellectual deficit in contemporary antimonopolism is also evident in the care that the FTC’s policy statement puts into exempting behavior that creates superior products.

For one cannot expand the FTC’s powers to reach more bad conduct without also condemning product-improving conduct, because the major check on enforcement today under the rule of reason (apart from the monopoly-power requirement) is precisely that conduct that improves products is exempt.

Under the rule of reason, bad conduct is a denial of inputs to a competitor that does not help consumers, meaning that the denial degrades the competitor’s products without improving the defendant’s products. Bad conduct is, in other words, unfairness that does not improve products.

If the FTC’s goal is to increase fairness relative to a regime that already pursues it, except when unfairness improves products, the additional fairness must come at the cost of product improvement.

The reference to superior products in the policy statement may be an attempt to compromise with the rule of reason. Unlike the elimination of the monopoly-power requirement, it is not a coherent compromise.

The FTC doesn’t need an economist to grasp this either.  

[This post is a contribution to Truth on the Market’s continuing digital symposium “FTC Rulemaking on Unfair Methods of Competition.” You can find other posts at the symposium page here. Truth on the Market also invites academics, practitioners, and other antitrust/regulation commentators to send us 1,500-4,000 word responses for potential inclusion in the symposium.]

Federal Trade Commission (FTC) Chair Lina Khan has just sent her holiday wishlist to Santa Claus. It comes in the form of a policy statement on unfair methods of competition (UMC) that the FTC approved last week by a 3-1 vote. If there’s anything to be gleaned from the document, it’s that Khan and the agency’s majority bloc wish they could wield the same powers as Margrethe Vestager does in the European Union. Luckily for consumers, U.S. courts are unlikely to oblige.

Signed by the commission’s three Democratic commissioners, the UMC policy statement contains language that would be completely at home in a decision of the European Commission. It purports to reorient UMC enforcement (under Section 5 of the FTC Act) around typically European concepts, such as “competition on the merits.” This is an unambiguous repudiation of the rule of reason and, with it, the consumer welfare standard.

Unfortunately for its authors, these European-inspired aspirations are likely to fall flat. For a start, the FTC almost certainly does not have the power to enact such sweeping changes. More fundamentally, these concepts have been tried in the EU, where they have proven to be largely unworkable. On the one hand, critics (including the European judiciary) have excoriated the European Commission for its often economically unsound policymaking—enabled by the use of vague standards like “competition on the merits.” On the other hand, the Commission paradoxically believes that its competition powers are insufficient, creating the need for even stronger powers. The recently passed Digital Markets Act (DMA) is designed to fill this need.

As explained below, there is thus every reason to believe the FTC’s UMC statement will ultimately go down as a mistake, brought about by the current leadership’s hubris.

A Statement Is Just That

The first big obstacle to the FTC’s lofty ambitions is that its leadership does not have the power to rewrite either the FTC Act or courts’ interpretation of it. The agency’s leadership understands this much. And with that in mind, they ostensibly couch their statement in the case law of the U.S. Supreme Court:

Consistent with the Supreme Court’s interpretation of the FTC Act in at least twelve decisions, this statement makes clear that Section 5 reaches beyond the Sherman and Clayton Acts to encompass various types of unfair conduct that tend to negatively affect competitive conditions.

It is telling, however, that the cases cited by the agency—in a naked attempt to do away with economic analysis and the consumer welfare standard—are all at least 40 years old. Antitrust and consumer-protection laws have obviously come a long way since then, but none of that is mentioned in the statement. Inconvenient case law is simply shrugged off. To make matters worse, even the cases the FTC cites provide, at best, exceedingly weak support for its proposed policy.

For instance, as Commissioner Christine Wilson aptly notes in her dissenting statement, “the policy statement ignores precedent regarding the need to demonstrate anticompetitive effects.” Chief among these is the Boise Cascade Corp. v. FTC case, where the 9th U.S. Circuit Court of Appeals rebuked the FTC for failing to show actual anticompetitive effects:

In truth, the Commission has provided us with little more than a theory of the likely effect of the challenged pricing practices. While this general observation perhaps summarizes all that follows, we offer the following specific points in support of our conclusion.

There is a complete absence of meaningful evidence in the record that price levels in the southern plywood industry reflect an anticompetitive effect.

In short, the FTC’s statement is just that—a statement. Gus Hurwitz summarized this best in his post:

Today’s news that the FTC has adopted a new UMC Policy Statement is just that: mere news. It doesn’t change the law. It is non-precedential and lacks the force of law. It receives the benefit of no deference. It is, to use a term from the consumer-protection lexicon, mere puffery.

Lina’s European Dream

But let us imagine, for a moment, that the FTC has its way and courts go along with its policy statement. Would this be good for the American consumer? In order to answer this question, it is worth looking at competition enforcement in the European Union.

There are, indeed, striking similarities between the FTC’s policy statement and European competition law. Consider the resemblance between the following quotes, drawn from the FTC’s policy statement (“A” in each example) and from the European competition sphere (“B” in each example).

Example 1 – Competition on the merits and the protection of competitors:

A. The method of competition must be unfair, meaning that the conduct goes beyond competition on the merits.… This may include, for example, conduct that tends to foreclose or impair the opportunities of market participants, reduce competition between rivals, limit choice, or otherwise harm consumers. (here)

B. The emphasis of the Commission’s enforcement activity… is on safeguarding the competitive process… and ensuring that undertakings which hold a dominant position do not exclude their competitors by other means than competing on the merits… (here)

Example 2 – Proof of anticompetitive harm:

A. “Unfair methods of competition” need not require a showing of current anticompetitive harm or anticompetitive intent in every case. … [T]his inquiry does not turn on whether the conduct directly caused actual harm in the specific instance at issue. (here)

B. The Commission cannot be required… systematically to establish a counterfactual scenario…. That would, moreover, oblige it to demonstrate that the conduct at issue had actual effects, which… is not required in the case of an abuse of a dominant position, where it is sufficient to establish that there are potential effects. (here)

Example 3 – Multiple goals:

A. Given the distinctive goals of Section 5, the inquiry will not focus on the “rule of reason” inquiries more common in cases under the Sherman Act, but will instead focus on stopping unfair methods of competition in their incipiency based on their tendency to harm competitive conditions. (here)

B. In its assessment the Commission should pursue the objectives of preserving and fostering innovation and the quality of digital products and services, the degree to which prices are fair and competitive, and the degree to which quality or choice for business users and for end users is or remains high. (here)

Beyond their cosmetic resemblance, these examples reflect a deeper similarity. The FTC is attempting to introduce three core principles that also undergird European competition enforcement. The first is that enforcers should protect “the competitive process” by ensuring firms compete “on the merits,” rather than pursue a more consequentialist goal like the consumer welfare standard (which essentially asks how a given practice affects economic output). The second is that enforcers should not be required to establish that conduct actually harms consumers; they need only show that such an outcome is (or will be) possible. The third is that competition policy pursues multiple, sometimes conflicting, goals.

In short, the FTC is trying to roll back U.S. enforcement to a bygone era predating the emergence of the consumer welfare standard (which is somewhat ironic for the agency’s progressive leaders). And this vision of enforcement is infused with elements that appear to be drawn directly from European competition law.

Europe Is Not the Land of Milk and Honey

All of this might not be so problematic if the European model of competition enforcement that the FTC now seeks to emulate were an unmitigated success, but that could not be further from the truth. As Geoffrey Manne, Sam Bowman, and I argued in a recently published paper, the European model has several shortcomings that militate against emulating it (the following quotes are drawn from that paper). These problems would almost certainly arise if the FTC’s statement were blessed by courts in the United States.

For a start, the more open-ended nature of European competition law makes it highly vulnerable to political interference. This is notably due to its multiple, vague, and often conflicting goals, such as the protection of the “competitive process”:

Because EU regulators can call upon a large list of justifications for their enforcement decisions, they are free to pursue cases that best fit within a political agenda, rather than focusing on the limited practices that are most injurious to consumers. In other words, there is largely no definable set of metrics to distinguish strong cases from weak ones under the EU model; what stands in its place is political discretion.

Politicized antitrust enforcement might seem like a great idea when your party is in power but, as Milton Friedman wisely observed, the mark of a strong system of government is that it operates well with the wrong person in charge. With this in mind, the FTC’s current leadership would do well to consider what their political opponents might do with these broad powers—such as using Section 5 to prevent online platforms from moderating speech.

A second important problem with the European model is that, because of its competitive-process goal, it does not adequately distinguish between exclusion resulting from superior efficiency and anticompetitive foreclosure:

By pursuing a competitive process goal, European competition authorities regularly conflate desirable and undesirable forms of exclusion precisely on the basis of their effect on competitors. As a result, the Commission routinely sanctions exclusion that stems from an incumbent’s superior efficiency rather than welfare-reducing strategic behavior, and routinely protects inefficient competitors that would otherwise rightly be excluded from a market.

This vastly enlarges the scope of potential antitrust liability, leading to risks of false positives that chill innovative behavior and create nearly unwinnable battles for targeted firms, while increasing compliance costs because of reduced legal certainty. Ultimately, this may hamper technological evolution and protect inefficient firms whose eviction from the market is merely a reflection of consumer preferences.

Finally, the European model results in enforcers having more discretion and enjoying greater deference from the courts:

[T]he EU process is driven by a number of laterally equivalent, and sometimes mutually exclusive, goals.… [A] large problem exists in the discretion that this fluid arrangement of goals yields.

The Microsoft case illustrates this problem well. In Microsoft, the Commission could have chosen to base its decision on a number of potential objectives. It notably chose to base its findings on the fact that Microsoft’s behavior reduced “consumer choice.” The Commission, in fact, discounted arguments that economic efficiency may lead to consumer welfare gains because “consumer choice” among a variety of media players was more important.

In short, the European model sorely lacks limiting principles. This likely explains why the European Court of Justice has started to pare back the Commission’s powers in a series of recent cases, including Intel, Post Danmark, Cartes Bancaires, and Servizio Elettrico Nazionale. These rulings appear to be an explicit recognition that overly broad competition enforcement not only fails to benefit consumers but, more fundamentally, is incompatible with the rule of law.

    [This post is a contribution to Truth on the Market‘s continuing digital symposium “FTC Rulemaking on Unfair Methods of Competition.” You can find other posts at the symposium page here. Truth on the Market also invites academics, practitioners, and other antitrust/regulation commentators to send us 1,500-4,000 word responses for potential inclusion in the symposium.]

    In a 3-2 July 2021 vote, the Federal Trade Commission (FTC) rescinded the nuanced statement it had issued in 2015 concerning the scope of unfair methods of competition under Section 5 of the FTC Act. At the same time, the FTC rejected the applicability of the balancing test set forth in the rule of reason (and with it, several decades of case law, agency guidance, and legal and economic scholarship).

    The July 2021 statement not only rejected these long-established guiding principles for Section 5 enforcement but left in its place nothing but regulatory fiat. In the statement the FTC issued Nov. 10, 2022 (again, by a divided 3-1 vote), the agency has now adopted this “just trust us” approach as a permanent operating principle.

    The November 2022 statement purports to provide a standard under which the agency will identify unfair methods of competition under Section 5. As Commissioner Christine Wilson explains in her dissent, however, it clearly fails to do so. Rather, it delivers a collection of vaguely described principles and pejorative rhetoric that encompass loosely defined harms to competition, competitors, workers and a catch-all group of “other market participants.”  

    The methodology for identifying these harms is comparably vague. The agency not only again rejects the rule of reason but asserts the authority to take action against a variety of “non-quantifiable harms,” all of which can be addressed at the most “incipient” stages. Moreover, and perhaps most remarkably, the statement specifically rejects any form of “net efficiencies” or “numerical cost-benefit analysis” to guide its enforcement decisions or provide even a modicum of predictability to the business community.  

    The November 2022 statement amounts to regulatory fiat on overdrive, presented with a thin veneer of legality derived from a medley of dormant judicial decisions, incomplete characterizations of precedent, and truncated descriptions of legislative history. Under the agency’s dubious understanding of Section 5, Congress in 1914 elected to provide the FTC with the authority to declare any business practice “unfair” subject to no principle other than the agency’s subjective understanding of that term (and, apparently, never to be informed by “numerical cost-benefit analysis”).

    Moreover, any enforcement action that targeted a purportedly “unfair” practice would then be adjudicated within the agency and appealable in the first instance to the very same commissioners who authorized the action. This institutional hall of mirrors would establish the FTC as the national “fairness” arbiter subject to virtually no constraining principles under which the exercise of such powers could ever be deemed to have exceeded its scope. The license for abuse is obvious and the departure from due process inherent.

    The views reflected in the November 2022 statement would almost certainly lead to a legal dead-end.  If the agency takes action under its idiosyncratic understanding of the scope of unfair methods of competition under Section 5, it would elicit a legal challenge that would likely lead to two possible outcomes, both being adverse to the agency. 

    First, it is likely that a judge would reject the agency’s understanding of Section 5, since it is irreconcilable with a well-developed body of case law requiring that the FTC (just like any other administrative agency) act under principles that provide businesses with, as described by the 2nd U.S. Circuit Court of Appeals, at least “an inkling as to what they can lawfully do rather than be left in a state of complete unpredictability.”

    Any legally defensible interpretation of the scope of unfair methods of competition under Section 5 must take into account not only legislative intent at the time the FTC Act was enacted but more than a century’s worth of case law that courts have developed to govern the actions of administrative powers. Contrary to suggestions made in the November 2022 statement, neither the statute nor the relevant body of case law mandates unqualified deference by courts to the presumed wisdom of expert regulators.

    Second, even if a court accepted the agency’s interpretation of the statute (or did so provisionally), there is a strong likelihood that it would then be compelled to strike down Section 5 as an unconstitutional delegation of lawmaking powers from the legislative to the executive branch. Given the concern that a majority of the Supreme Court has increasingly expressed over actions by regulatory agencies—including the FTC, specifically, in AMG Capital Management LLC v. FTC (2021)and now again in the pending case, Axon Enterprise Inc. v. FTCthat do not clearly fall within the legislatively specified scope of an agency’s authority (as in the AMG decision and other recent Court decisions concerning the U.S. Securities and Exchange Commission, the Occupational Safety and Health Administration, the U.S. Environmental Protection Agency, and the United States Patent and Trademark Office), this would seem to be a high-probability outcome.

    In short: any enforcement action taken under the agency’s newly expanded understanding of Section 5 is unlikely to withstand judicial scrutiny, either as a matter of statutory construction or as a matter of constitutional principle. Given this legal forecast, the November 2022 statement could be viewed as mere theatrics that is unlikely to have a long legal life or much practical impact (although, until judicial intervention, it could impose significant costs on firms that must defend against agency-enforcement actions brought under the unilaterally expanded scope of Section 5). 

    Even if that were the case, however, the November 2022 statement and, in particular, its expanded understanding of the harms that the agency is purportedly empowered to target, is nonetheless significant because it should leave little doubt concerning the lack of any meaningful commitment by agency leadership to the FTC’s historical mission to preserve market competition. Rather, it has become increasingly clear that agency leadership seeks to deploy the powerful remedies of the FTC Act (and the rest of the antitrust-enforcement apparatus) to displace a market-driven economy governed by the free play of competitive forces with an administered economy in which regulators continuously intervene to reengineer economic outcomes on grounds of fairness to favored constituencies, rather than to preserve the competitive process.

    Reengineering Section 5 of the FTC Act as a “shadow” antitrust statute that operates outside the rule of reason (or any other constraining objective principle) provides a strategic detour around the inconvenient evidentiary and other legal obstacles that the agency would struggle to overcome when seeking to achieve these policy objectives under the Sherman and Clayton Acts. This intentionally unstructured and inherently politicized approach to antitrust enforcement threatens not only the institutional preconditions for a market economy but ultimately the rule of law itself.

    [TOTM: The following is part of a digital symposium by TOTM guests and authors on Antitrust’s Uncertain Future: Visions of Competition in the New Regulatory Landscape. Information on the authors and the entire series of posts is available here.]

    Much ink has been spilled regarding the potential harm to the economy and to the rule of law that could stem from enactment of the primary federal antitrust legislative proposal, the American Innovation and Choice Online Act (AICOA) (see here). AICOA proponents, of course, would beg to differ, emphasizing the purported procompetitive benefits of limiting the business freedom of “Big Tech monopolists.”

    There is, however, one inescapable reality—as night follows day, passage of AICOA would usher in an extended period of costly litigation over the meaning of a host of AICOA terms. As we will see, this would generate business uncertainty and dampen innovative conduct that might be covered by new AICOA statutory terms. 

    The history of antitrust illustrates the difficulties inherent in clarifying the meaning of novel federal statutory language. It was not until 21 years after passage of the Sherman Antitrust Act that the Supreme Court held that Section 1 of the act’s prohibition on contracts, combinations, and conspiracies “in restraint of trade” only covered unreasonable restraints of trade (see Standard Oil Co. of New Jersey v. United States, 221 U.S. 1 (1911)). Furthermore, courts took decades to clarify that certain types of restraints (for example, hardcore price fixing and horizontal market division) were inherently unreasonable and thus per se illegal, while others would be evaluated on a case-by-case basis under a “rule of reason.”

    In addition, even far more specific terms related to exclusive dealing, tying, and price discrimination found within the Clayton Antitrust Act gave rise to uncertainty over the scope of their application. This uncertainty had to be sorted out through judicial case-law tests developed over many decades.

    Even today, there is no simple, easily applicable test to determine whether conduct in the abstract constitutes illegal monopolization under Section 2 of the Sherman Act. Rather, whether Section 2 has been violated in any particular instance depends upon the application of economic analysis and certain case-law principles to matter-specific facts.

    As is the case with current antitrust law, the precise meaning and scope of AICOA’s terms will have to be fleshed out over many years. Scholarly critiques of AICOA’s language underscore the seriousness of this problem.

    In its April 2022 public comment on AICOA, the American Bar Association (ABA)  Antitrust Law Section explains in some detail the significant ambiguities inherent in specific AICOA language that the courts will have to address. These include “ambiguous terminology … regarding fairness, preferencing, materiality, and harm to competition on covered platforms”; and “specific language establishing affirmative defenses [that] creates significant uncertainty”. The ABA comment further stresses that AICOA’s failure to include harm to the competitive process as a prerequisite for a statutory violation departs from a broad-based consensus understanding within the antitrust community and could have the unintended consequence of disincentivizing efficient conduct. This departure would, of course, create additional interpretive difficulties for federal judges, further complicating the task of developing coherent case-law principles for the new statute.

    Lending support to the ABA’s concerns, Northwestern University professor of economics Dan Spulber notes that AICOA “may have adverse effects on innovation and competition because of imprecise concepts and terminology.”

    In a somewhat similar vein, Stanford Law School Professor (and former acting assistant attorney general for antitrust during the Clinton administration) Douglas Melamed complains that:

    [AICOA] does not include the normal antitrust language (e.g., “competition in the market as a whole,” “market power”) that gives meaning to the idea of harm to competition, nor does it say that the imprecise language it does use is to be construed as that language is construed by the antitrust laws. … The bill could be very harmful if it is construed to require, not increased market power, but simply harm to rivals.

    In sum, ambiguities inherent in AICOA’s new terminology will generate substantial uncertainty among affected businesses. This uncertainty will play out in the courts over a period of years. Moreover, the likelihood that judicial statutory constructions of AICOA language will support “efficiency-promoting” interpretations of behavior is diminished by the fact that AICOA’s structural scheme (which focuses on harm to rivals) does not harmonize with traditional antitrust concerns about promoting a vibrant competitive process.

    Knowing this, the large high-tech firms covered by AICOA will become risk averse and less likely to innovate. (For example, they will be reluctant to improve algorithms in a manner that would increase efficiency and benefit consumers, but that might be seen as disadvantaging rivals.) As such, American innovation will slow, and consumers will suffer. (See here for an estimate of the enormous consumer-welfare gains generated by high tech platforms—gains of a type that AICOA’s enactment may be expected to jeopardize.) It is to be hoped that Congress will take note and consign AICOA to the rubbish heap of disastrous legislative policy proposals.