Archives for Section 230

After the oral arguments in Twitter v. Taamneh, Geoffrey Manne, Kristian Stout, and I spilled a lot of ink thinking through the law & economics of intermediary liability and how to draw lines when it comes to social-media companies’ responsibility to prevent online harms stemming from illegal conduct on their platforms. With the Supreme Court’s recent decision in Twitter v. Taamneh, it is worth revisiting that post to see what we got right, as well as what the opinion could mean for future First Amendment cases—particularly those concerning Texas and Florida’s common-carriage laws and other challenges to the bounds of Section 230 more generally.

What We Got Right: Necessary Limitations on Secondary Liability Mean the Case Against Twitter Must Be Dismissed

In our earlier post, which built on our previous work on the law & economics of intermediary liability, we argued that the law sometimes does and should allow enforcement against intermediaries when they are the least-cost avoider. This is especially true on social-media sites like Twitter, where information costs may be low enough to make effective monitoring and control of end users possible, and where pseudonymity makes remedies against end users ineffective. We noted, however, that there are also costs to intermediary liability. These manifest particularly in “collateral censorship,” which occurs when social-media companies remove user-generated content in order to avoid liability. Thus, a balance must be struck:

From an economic perspective, liability should be imposed on the party or parties best positioned to deter the harms in question, so long as the social costs incurred by, and as a result of, enforcement do not exceed the social gains realized. In other words, there is a delicate balance that must be struck to determine when intermediary liability makes sense in a given case. On the one hand, we want illicit content to be deterred, and on the other, we want to preserve the open nature of the Internet. The costs generated from the over-deterrence of legal, beneficial speech is why intermediary liability for user-generated content can’t be applied on a strict-liability basis, and why some bad content will always exist in the system.

In particular, we noted the need for limiting principles to intermediary liability. As we put it in our Fleites amicus:

In theory, any sufficiently large firm with a role in the commerce at issue could be deemed liable if all that is required is that its services “allow[]” the alleged principal actors to continue to do business. FedEx, for example, would be liable for continuing to deliver packages to MindGeek’s address. The local waste management company would be liable for continuing to service the building in which MindGeek’s offices are located. And every online search provider and Internet service provider would be liable for continuing to provide service to anyone searching for or viewing legal content on MindGeek’s sites.

The Court struck very similar notes in its Taamneh opinion regarding the need to limit what it calls “secondary liability” under the aiding-and-abetting statute. It noted that a person may be responsible at common law for a crime or tort if he helps another complete its commission, but that such liability has never been “boundless.” If it were otherwise, Justice Clarence Thomas wrote for a unanimous Court, “aiding-and-abetting liability could sweep in innocent bystanders as well as those who gave only tangential assistance.” Offering the example of a robbery, Thomas argued that if “any assistance of any kind were sufficient to create liability… then anyone who passively watched a robbery could be said to commit aiding and abetting by failing to call the police.”

Here, the Court found important the common law’s distinction between acts of commission and omission:

[O]ur legal system generally does not impose liability for mere omissions, inactions, or nonfeasance; although inaction can be culpable in the face of some independent duty to act, the law does not impose a generalized duty to rescue… both criminal and tort law typically sanction only “wrongful conduct,” bad acts, and misfeasance… Some level of blameworthiness is therefore ordinarily required. 

If liability could attach to omissions in the absence of an independent duty to act, there would be no limiting principle to prevent liability from extending far beyond what anyone (except for the cop in the final episode of Seinfeld) would believe reasonable:

[I]f aiding-and-abetting liability were taken too far, then ordinary merchants could become liable for any misuse of their goods and services, no matter how attenuated their relationship with the wrongdoer. And those who merely deliver mail or transmit emails could be liable for the tortious messages contained therein. For these reasons, courts have long recognized the need to cabin aiding-and-abetting liability to cases of truly culpable conduct.

Applying this to Twitter, the Court first outlined the theories of how Twitter “helped” ISIS:

First, ISIS was active on defendants’ social-media platforms, which are generally available to the internet-using public with little to no front-end screening by defendants. In other words, ISIS was able to upload content to the platforms and connect with third parties, just like everyone else. Second, defendants’ recommendation algorithms matched ISIS-related content to users most likely to be interested in that content—again, just like any other content. And, third, defendants allegedly knew that ISIS was uploading this content to such effect, but took insufficient steps to ensure that ISIS supporters and ISIS-related content were removed from their platforms. Notably, plaintiffs never allege that ISIS used defendants’ platforms to plan or coordinate the Reina attack; in fact, they do not allege that Masharipov himself ever used Facebook, YouTube, or Twitter.

The Court rejected each of these allegations as insufficient to establish Twitter’s liability in the absence of an independent duty to act, pointing back to the distinction between an act that affirmatively helped to cause harm and an omission:

[T]he only affirmative “conduct” defendants allegedly undertook was creating their platforms and setting up their algorithms to display content relevant to user inputs and user history. Plaintiffs never allege that, after defendants established their platforms, they gave ISIS any special treatment or words of encouragement. Nor is there reason to think that defendants selected or took any action at all with respect to ISIS’ content (except, perhaps, blocking some of it).

In our earlier post on Taamneh, we argued that the plaintiffs’ “theory of liability would contain no viable limiting principle” and asked “what in principle would separate a search engine from Twitter, if the search engine linked to an alleged terrorist’s account?” The Court made a similar argument, positing that, while “bad actors like ISIS are able to use platforms like defendants’ for illegal—and sometimes terrible—ends,” the same “could be said of cell phones, email, or the internet generally.” Despite this, “internet or cell service providers [can’t] incur culpability merely for providing their services to the public writ large. Nor do we think that such providers would normally be described as aiding and abetting, for example, illegal drug deals brokered over cell phones—even if the provider’s conference-call or video-call features made the sale easier.”

The Court concluded:

At bottom, then, the claim here rests less on affirmative misconduct and more on an alleged failure to stop ISIS from using these platforms. But, as noted above, both tort and criminal law have long been leery of imposing aiding-and-abetting liability for mere passive nonfeasance.

In sum, because no independent duty to act could be found in statute, Twitter could not be held liable on these allegations.

The First Amendment and Common Carriage

It’s notable that the opinion was written by Justice Thomas, who previously invited states to create common-carriage laws that he believed would be consistent with the First Amendment. In his concurrence in Biden v. Knight First Amendment Institute, in which the Court vacated the judgment below and remanded with instructions to dismiss the case as moot, Thomas wrote of the market power allegedly held by social-media companies like Twitter, Facebook, and YouTube that:

If part of the problem is private, concentrated control over online content and platforms available to the public, then part of the solution may be found in doctrines that limit the right of a private company to exclude. Historically, at least two legal doctrines limited a company’s right to exclude.

He proceeded to outline how common-carriage and public-accommodation laws can be used to limit companies from excluding users, suggesting that they would be subject to a lower standard of First Amendment scrutiny under Turner and its progeny.

Among the reasons for imposing common-carriage requirements on social-media companies, Justice Thomas found it important that they function like conduits that carry the speech of others:

Though digital instead of physical, they are at bottom communications networks, and they “carry” information from one user to another. A traditional telephone company laid physical wires to create a network connecting people. Digital platforms lay information infrastructure that can be controlled in much the same way. And unlike newspapers, digital platforms hold themselves out as organizations that focus on distributing the speech of the broader public. Federal law dictates that companies cannot “be treated as the publisher or speaker” of information that they merely distribute. 110 Stat. 137, 47 U. S. C. §230(c). 

Thomas also noted that governments have sometimes bestowed special benefits on common carriers in exchange for their universal-service obligations:

In exchange for regulating transportation and communication industries, governments—both State and Federal— have sometimes given common carriers special government favors. For example, governments have tied restrictions on a carrier’s ability to reject clients to “immunity from certain types of suits” or to regulations that make it more difficult for other companies to compete with the carrier (such as franchise licenses). (internal citations omitted)

While Taamneh is not about the First Amendment, some of the language in Thomas’ opinion suggests that social-media companies are the types of businesses that may be treated as conduits (and thus largely shielded from liability for third-party conduct) in exchange for common-carriage requirements.

As noted above, the Court found it important for its holding that there was no aiding-and-abetting by Twitter that “there is not even reason to think that defendants carefully screened any content before allowing users to upload it onto their platforms. If anything, the opposite is true: By plaintiffs’ own allegations, these platforms appear to transmit most content without inspecting it.” The Court then compared social-media platforms to “cell phones, email, or the internet generally,” which are classic examples of conduits. In particular, phone service was a common carrier that largely received immunity from liability for its users’ conduct.

Thus, while Taamneh wouldn’t be directly binding in the First Amendment context, this language will likely be cited in the briefs by those supporting the Texas and Florida common-carriage laws when the Supreme Court reviews them.

Section 230 and Neutral Tools

On the other hand—and despite the views Thomas expressed about Section 230 immunity in his Malwarebytes statement—there is much in the Court’s reasoning in Taamneh to suggest that the justices see algorithmic recommendations as neutral tools that would not, in and of themselves, preclude a finding of immunity for online platforms.

While the Court’s decision in Gonzalez v. Google basically said it didn’t need to reach the Section 230 question because the allegations failed to state a claim under Taamneh’s reasoning, it appears highly likely that a majority would have found the platforms immune under Section 230 despite their use of algorithmic recommendations. For instance, in Taamneh, the Court disagreed with the assertion that recommendation algorithms amounted to substantial assistance, reasoning that:

By plaintiffs’ own telling, their claim is based on defendants’ “provision of the infrastructure which provides material support to ISIS.” Viewed properly, defendants’ “recommendation” algorithms are merely part of that infrastructure. All the content on their platforms is filtered through these algorithms, which allegedly sort the content by information and inputs provided by users and found in the content itself. As presented here, the algorithms appear agnostic as to the nature of the content, matching any content (including ISIS’ content) with any user who is more likely to view that content. The fact that these algorithms matched some ISIS content with some users thus does not convert defendants’ passive assistance into active abetting. Once the platform and sorting-tool algorithms were up and running, defendants at most allegedly stood back and watched; they are not alleged to have taken any further action with respect to ISIS. 

On the other hand, the Court thought it important to its holding that there were no allegations establishing a nexus (such as unusual provision of services, or conscious and selective promotion of content) between Twitter’s provision of a communications platform and the terrorist activity:

To be sure, we cannot rule out the possibility that some set of allegations involving aid to a known terrorist group would justify holding a secondary defendant liable for all of the group’s actions or perhaps some definable subset of terrorist acts. There may be, for example, situations where the provider of routine services does so in an unusual way or provides such dangerous wares that selling those goods to a terrorist group could constitute aiding and abetting a foreseeable terror attack. Cf. Direct Sales Co. v. United States, 319 U. S. 703, 707, 711–712, 714–715 (1943) (registered morphine distributor could be liable as a coconspirator of an illicit operation to which it mailed morphine far in excess of normal amounts). Or, if a platform consciously and selectively chose to promote content provided by a particular terrorist group, perhaps it could be said to have culpably assisted the terrorist group. Cf. Passaic Daily News v. Blair, 63 N. J. 474, 487–488, 308 A. 2d 649, 656 (1973) (publishing employment advertisements that discriminate on the basis of sex could aid and abet the discrimination).

In other words, this language could suggest that, as long as the algorithms are essentially “neutral tools” (to use the language of Roommates.com and its progeny), social-media platforms are immune for third-party speech that they incidentally promote. But if they design their algorithmic recommendations in such a way that suggests the platforms “consciously and selectively” promote illegal content, then they could lose immunity.

Unless other justices share Thomas’ appetite to limit Section 230 immunity substantially in a future case, this language from Taamneh would likely be used to expand the law’s protections to algorithmic recommendations under a Roommates.com/”neutral tools” analysis.

Conclusion

While the Court did not end up issuing the huge Section 230 decision that some expected, the Taamneh decision will be a big deal going forward for the interconnected issues of online intermediary liability, the First Amendment, and Section 230. Language from Justice Thomas’ opinion will likely be cited in the litigation over the Texas and Florida common-carrier laws, as well as future Section 230 cases.

Legislation to secure children’s safety online is all the rage right now, not only on Capitol Hill, but in state legislatures across the country. One of the favored approaches is to impose on platforms a duty of care to protect teen users.

For example, Sens. Richard Blumenthal (D-Conn.) and Marsha Blackburn (R-Tenn.) have reintroduced the Kids Online Safety Act (KOSA), which would require that social-media platforms “prevent or mitigate” a variety of potential harms, including mental-health harms; addiction; online bullying and harassment; sexual exploitation and abuse; promotion of narcotics, tobacco, gambling, or alcohol; and predatory, unfair, or deceptive business practices.

But while bills of this sort would define legal responsibilities that online platforms have to their minor users, this statutory duty of care is more likely to result in the exclusion of teens from online spaces than to promote better care of teens who use them.

Drawing on the previous research that I and my International Center for Law & Economics (ICLE) colleagues have done on the economics of intermediary liability and First Amendment jurisprudence, I will in this post consider the potential costs and benefits of imposing a statutory duty of care similar to that proposed by KOSA.

The Law & Economics of Online Intermediary Liability and the First Amendment (Kids Edition)

Previously (in a law review article, an amicus brief, and a blog post), we at ICLE have argued that there are times when the law rightfully places responsibility on intermediaries to monitor and control what happens on their platforms. From an economic point of view, it makes sense to impose liability on intermediaries when they are the least-cost avoider: i.e., the party that is best positioned to limit harm, even if they aren’t the party committing the harm.

On the other hand, as we have also noted, there are costs to imposing intermediary liability. This is especially true for online platforms with user-generated content. Specifically, there is a risk of “collateral censorship,” wherein online platforms remove more speech than is necessary in order to avoid potential liability. Imposing a duty of care to “protect” minors, in particular, could result in online platforms limiting teens’ access to their services.

If the social costs that arise from the imposition of intermediary liability are greater than the benefits accrued, then such an arrangement would be welfare-destroying, on net. While we want to deter harmful (illegal) content, we don’t want to do so if we end up deterring access to too much beneficial (legal) content as a result.

The First Amendment often limits otherwise generally applicable laws, on grounds that they impose burdens on speech. From an economic point of view, this could be seen as an implicit subsidy. That subsidy may be justifiable, because information is a public good that would otherwise be underproduced. As Daniel A. Farber put it in 1991:

[B]ecause information is a public good, it is likely to be undervalued by both the market and the political system. Individuals have an incentive to ‘free ride’ because they can enjoy the benefits of public goods without helping to produce those goods. Consequently, neither market demand nor political incentives fully capture the social value of public goods such as information. Our polity responds to this undervaluation of information by providing special constitutional protection for information-related activities. This simple insight explains a surprising amount of First Amendment doctrine.

In particular, the First Amendment provides important limits on how far the law can go in imposing intermediary liability that would chill speech, including when dealing with potential harms to teenage users. These limitations seek the same balance that the economics of intermediary liability would suggest: how to hold online platforms liable for legally cognizable harms without restricting access to too much beneficial content. Below is a summary of some of those relevant limitations.

Speech vs. Conduct

The First Amendment differentiates between speech and conduct. While the line between the two can be messy (and “expressive conduct” has its own standard under the O’Brien test), governmental regulation of some speech acts is permissible. Thus, harassment, terroristic threats, fighting words, and even incitement to violence can be punished by law. On the other hand, the First Amendment does not generally allow the government to regulate “hate speech” or “bullying.” As the 3rd U.S. Circuit Court of Appeals explained it in the context of a school’s anti-harassment policy:

There is of course no question that non-expressive, physically harassing conduct is entirely outside the ambit of the free speech clause. But there is also no question that the free speech clause protects a wide variety of speech that listeners may consider deeply offensive, including statements that impugn another’s race or national origin or that denigrate religious beliefs… When laws against harassment attempt to regulate oral or written expression on such topics, however detestable the views expressed may be, we cannot turn a blind eye to the First Amendment implications.

In other words, while a duty of care could reach harassing conduct, it is unclear how it could reach pure expression on online platforms without implicating the First Amendment.

Impermissibly Vague

The First Amendment also disallows rules so vague that a person of ordinary intelligence would not have fair notice of what is prohibited. For instance, in an order handed down earlier this year in Høeg v. Newsom, a federal district court granted the plaintiffs’ motion to enjoin a California law that would charge medical doctors with sanctionable “unprofessional conduct” if, as part of treatment or advice, they shared with patients “false information that is contradicted by contemporary scientific consensus contrary to the standard of care.”

The court found that “contemporary scientific consensus” was so “ill-defined [that] physician plaintiffs are unable to determine if their intended conduct contradicts [it].” The court asked a series of questions relevant to trying to define the phrase:

[W]ho determines whether a consensus exists to begin with? If a consensus does exist, among whom must the consensus exist (for example practicing physicians, or professional organizations, or medical researchers, or public health officials, or perhaps a combination)? In which geographic area must the consensus exist (California, or the United States, or the world)? What level of agreement constitutes a consensus (perhaps a plurality, or a majority, or a supermajority)? How recently in time must the consensus have been established to be considered “contemporary”? And what source or sources should physicians consult to determine what the consensus is at any given time (perhaps peer-reviewed scientific articles, or clinical guidelines from professional organizations, or public health recommendations)?

Thus, any duty of care to limit access to potentially harmful online content must not be defined in a way that is too vague for a person of ordinary intelligence to know what is prohibited.

Liability for Third-Party Speech

The First Amendment limits intermediary liability when dealing with third-party speech. For the purposes of defamation law, the traditional continuum of liability was from publishers to distributors (or secondary publishers) to conduits. Publishers—such as newspapers, book publishers, and television producers—exercised significant editorial control over content. As a result, they could be held liable for defamatory material, because it was seen as their own speech. Conduits—like the telephone company—were on the other end of the spectrum, and could not be held liable for the speech of those who used their services.

As the Court of Appeals of the State of New York put it in a 1974 opinion:

In order to be deemed to have published a libel a defendant must have had a direct hand in disseminating the material whether authored by another, or not. We would limit [liability] to media of communications involving the editorial or at least participatory function (newspapers, magazines, radio, television and telegraph)… The telephone company is not part of the “media” which puts forth information after processing it in one way or another. The telephone company is a public utility which is bound to make its equipment available to the public for any legal use to which it can be put…

Distributors—which included booksellers and libraries—were in the middle of this continuum. They had to have some notice that content they distributed was defamatory before they could be held liable.

Courts have long explored the tradeoffs between liability and carriage of third-party speech in this context. For instance, in Smith v. California, the U.S. Supreme Court found that a statute establishing strict liability for selling obscene materials violated the First Amendment because:

By dispensing with any requirement of knowledge of the contents of the book on the part of the seller, the ordinance tends to impose a severe limitation on the public’s access to constitutionally protected matter. For if the bookseller is criminally liable without knowledge of the contents, and the ordinance fulfills its purpose, he will tend to restrict the books he sells to those he has inspected; and thus the State will have imposed a restriction upon the distribution of constitutionally protected as well as obscene literature. It has been well observed of a statute construed as dispensing with any requirement of scienter that: “Every bookseller would be placed under an obligation to make himself aware of the contents of every book in his shop. It would be altogether unreasonable to demand so near an approach to omniscience.” (internal citations omitted)

It’s also worth noting that traditional publisher liability was limited in the case of republication, such as when newspapers republished stories from wire services like the Associated Press. Courts observed the economic costs that would attend imposing a strict-liability standard in such cases:

No newspaper could afford to warrant the absolute authenticity of every item of its news, nor assume in advance the burden of specially verifying every item of news reported to it by established news gathering agencies, and continue to discharge with efficiency and promptness the demands of modern necessity for prompt publication, if publication is to be had at all.

Over time, the rule was extended, either by common law or statute, from newspapers to radio and television broadcasts, with the treatment of republication of third-party speech eventually resembling conduit liability even more than distributor liability. See Brent Skorup and Jennifer Huddleston’s “The Erosion of Publisher Liability in American Law, Section 230, and the Future of Online Curation” for a more thoroughgoing treatment of the topic.

Implicit economic reasoning is what pushed the law toward conduit liability for entities that carried third-party speech. For example, in 1959’s Farmers Educational & Cooperative Union v. WDAY, Inc., the Supreme Court held that a broadcaster could not be found liable for defamatory statements made by a political candidate on the air, reasoning that:

The decision a broadcasting station would have to make in censoring libelous discussion by a candidate is far from easy. Whether a statement is defamatory is rarely clear. Whether such a statement is actionably libelous is an even more complex question, involving as it does, consideration of various legal defenses such as “truth” and the privilege of fair comment. Such issues have always troubled courts… if a station were held responsible for the broadcast of libelous material, all remarks even faintly objectionable would be excluded out of an excess of caution. Moreover, if any censorship were permissible, a station so inclined could intentionally inhibit a candidate’s legitimate presentation under the guise of lawful censorship of libelous matter. Because of the time limitation inherent in a political campaign, erroneous decisions by a station could not be corrected by the courts promptly enough to permit the candidate to bring improperly excluded matter before the public. It follows from all this that allowing censorship, even of the attenuated type advocated here, would almost inevitably force a candidate to avoid controversial issues during political debates over radio and television, and hence restrict the coverage of consideration relevant to intelligent political decision.

It is clear from the foregoing that imposing a duty of care on online platforms to limit speech in ways that would make them strictly liable would be inconsistent with distributor liability. But even a duty of care that more closely resembled a negligence-based standard could implicate speech interests if online platforms are treated like newspapers, or like radio and television broadcasters, when they act as republishers of third-party speech. Such cases would appear to require conduit liability.

The First Amendment Applies to Children

The First Amendment has been found to limit what governments can do in the name of protecting children from encountering potentially harmful speech. For example, California in 2005 passed a law prohibiting the sale or rental of “violent video games” to minors. In Brown v. Entertainment Merchants Ass’n, the Supreme Court held the law unconstitutional, finding that:

No doubt [the government] possesses legitimate power to protect children from harm, but that does not include a free-floating power to restrict the ideas to which children may be exposed. “Speech that is neither obscene as to youths nor subject to some other legitimate proscription cannot be suppressed solely to protect the young from ideas or images that a legislative body thinks unsuitable for them.” (internal citations omitted)

The Court did not find it persuasive that the video games were violent (noting that children’s books often depict violence) or that they were interactive (as some children’s books offer choose-your-own-adventure options). In other words, there was nothing special about violent video games that would subject them to a lower level of constitutional protection, even for minors that wished to play them.

The Court also did not find persuasive California’s appeal that the law aided parents in making decisions about what their children could access, stating:

California claims that the Act is justified in aid of parental authority: By requiring that the purchase of violent video games can be made only by adults, the Act ensures that parents can decide what games are appropriate. At the outset, we note our doubts that punishing third parties for conveying protected speech to children just in case their parents disapprove of that speech is a proper governmental means of aiding parental authority.

Justice Samuel Alito’s concurrence in Brown would have found the California law unconstitutionally vague, arguing that constitutionally protected speech would be chilled as a result of the law’s enforcement. The fact that its intent was to protect minors did not change that analysis.

Limiting the availability of speech to minors in the online world is subject to the same analysis as in the offline world. In Reno v. ACLU, the Supreme Court made clear that the First Amendment applies with equal effect online, stating that “our cases provide no basis for qualifying the level of First Amendment scrutiny that should be applied to this medium.” In Packingham v. North Carolina, the Court went so far as to call social-media platforms “the modern public square.”

Restricting minors’ access to online platforms through age-verification requirements has already been found to violate the First Amendment. In Ashcroft v. ACLU (II), the Supreme Court reviewed provisions of the Child Online Protection Act (COPA) that would restrict posting content “harmful to minors” for “commercial purposes.” COPA allowed an affirmative defense if the online platform restricted access by minors through various age-verification devices. The Court found that “[b]locking and filtering software is an alternative that is less restrictive than COPA, and, in addition, likely more effective as a means of restricting children’s access to materials harmful to them” and upheld a preliminary injunction against the law, pending further review of its constitutionality.

On remand, the 3rd Circuit found that “[t]he Supreme Court has disapproved of content-based restrictions that require recipients to identify themselves affirmatively before being granted access to disfavored speech, because such restrictions can have an impermissible chilling effect on those would-be recipients.” The circuit court would eventually uphold the district court’s finding of unconstitutionality and permanently enjoin the statute’s provisions, noting that the age-verification requirements “would deter users from visiting implicated Web sites” and therefore “would chill protected speech.”

A duty of care to protect minors could be unconstitutional if it ends up limiting access to speech that is not illegal for them to access. Age-verification requirements that would likely accompany such a duty could also result in a statute being found unconstitutional.

In sum:

  • A duty of care to prevent or mitigate harassment and bullying has First Amendment implications if it regulates pure expression, such as speech on online platforms.
  • A duty of care to limit access to potentially harmful online speech can’t be defined so vaguely that a person of ordinary intelligence can’t know what is prohibited.
  • A duty of care that establishes a strict-liability standard on online speech platforms would likely be unconstitutional for its chilling effects on legal speech. A duty of care that establishes a negligence standard could similarly lead to “collateral censorship” of third-party speech.
  • A duty of care to protect minors could be unconstitutional if it limits access to legal speech. De facto age-verification requirements could also be found unconstitutional.

The Problems with KOSA: The First Amendment and Limiting Kids’ Access to Online Speech

KOSA would establish a duty of care for covered online platforms to “act in the best interests of a user that the platform knows or reasonably should know is a minor by taking reasonable measures in its design and operation of products and services to prevent and mitigate” a variety of potential harms, including:

  1. Consistent with evidence-informed medical information, the following mental health disorders: anxiety, depression, eating disorders, substance use disorders, and suicidal behaviors.
  2. Patterns of use that indicate or encourage addiction-like behaviors.
  3. Physical violence, online bullying, and harassment of the minor.
  4. Sexual exploitation and abuse.
  5. Promotion and marketing of narcotic drugs (as defined in section 102 of the Controlled Substances Act (21 U.S.C. 802)), tobacco products, gambling, or alcohol.
  6. Predatory, unfair, or deceptive marketing practices, or other financial harms.

There are also a variety of tools and notices that must be made available to users under age 17, as well as to their parents.

Reno and Age Verification

KOSA could be found unconstitutional under Reno and the COPA line of cases for creating a de facto age-verification requirement. The bill’s drafters appear to be aware of the legal problems that an age-verification requirement would entail. KOSA therefore states that:

Nothing in this Act shall be construed to require—(1) the affirmative collection of any personal data with respect to the age of users that a covered platform is not already collecting in the normal course of business; or (2) a covered platform to implement an age gating or age verification functionality.

But this doesn’t change the fact that, in order to effectuate KOSA’s requirements, online platforms would have to know their users’ ages. KOSA’s duty of care incorporates a constructive-knowledge requirement (i.e., “reasonably should know is a minor”). A duty of care combined with the mandated notices and tools that must be made available to minors makes it “reasonable” that platforms would have to verify the age of each user.

If a court were to agree that KOSA doesn’t require age gating or age verification, this would likely render the act ineffective. As it stands, most of the online platforms that would be covered by KOSA ask users their age (or birthdate) only upon creation of a profile, a check that is easily evaded by simply lying. Users who are under 17 (but at least 13) at the time of the act’s passage and who have already created profiles would be covered, but the act apparently would not require platforms to vet whether users who claim to be at least 17 when creating new profiles are telling the truth.

Vagueness and Protected Speech

Even if KOSA were not found unconstitutional for creating a de facto age-verification scheme, it still would likely lead to kids under 17 being restricted from accessing protected speech. Several of the categories of harm that the duty of care covers sweep in legal speech. Moreover, those categories are defined so vaguely that the duty would likely chill access to legal speech.

For example, pictures of photoshopped models are protected speech. If teenage girls want to see such content on their feeds, it isn’t clear that the law can constitutionally stop them, even if it’s done by creating a duty of care to prevent and mitigate harms associated with “anxiety, depression, or eating disorders.”

Moreover, access to content that kids really like to see or hear is still speech, even if they like it so much that an outside observer may think they are addicted to it. Much as the Court said in Brown, the government does not have “a free-floating power to restrict [speech] to which children may be exposed.”

KOSA’s Section 3(A)(1) and 3(A)(2) would also run into problems, as they are so vague that a person of ordinary intelligence would not know what they prohibit. As a result, there would likely be chilling effects on legal speech.

Much like in Høeg, the phrase “consistent with evidence-informed medical information” leads to various questions regarding how an online platform could comply with the law. For instance, it isn’t clear what content or design issue would be implicated by this subsection. Would a platform need to hire mental-health professionals to consult with them on every product-design and content-moderation decision?

Even worse is the requirement to prevent and mitigate “patterns of use that indicate or encourage addiction-like behaviors,” which isn’t defined by reference to “evidence-informed medical information” or to anything else.

Even Bullying May Be Protected Speech

Even KOSA’s duty to prevent and mitigate “physical violence, online bullying, and harassment of the minor” in Section 3(3) could implicate the First Amendment. While physical violence would clearly be outside of the First Amendment’s protections (although it’s unclear how an online platform could prevent or mitigate such violence), online bullying and harassing speech are, nonetheless, speech. As a result, this duty of care could receive constitutional scrutiny regarding whether it effectively limits lawful (though awful) speech directed at minors.

Locking Children Out of Online Spaces

KOSA’s duty of care appears to be based on negligence, in that it requires platforms to take “reasonable measures.” This probably makes it more likely to survive First Amendment scrutiny than a strict-liability regime would.

It could, however, still result in real (and costly) product-design and moderation challenges for online platforms. As a result, platforms would have significant incentives to exclude users they know or reasonably believe to be under age 17.

While this is not a First Amendment problem per se, it nonetheless illustrates how laws intended to “protect” children’s safety online can actually lead to their being excluded from online speech platforms altogether.

Conclusion

Despite being christened the “Kids Online Safety Act,” KOSA would result in real harm for kids if enacted into law. Its likely result would be considerable “collateral censorship,” as online platforms restrict teens’ access in order to avoid liability.

The bill’s duty of care would also either require likely unconstitutional age verification or be rendered ineffective, as teen users lie about their age in order to access desired content.

Congress shall make no law abridging the freedom of speech, even if it is done in the name of children.

The Senate Judiciary Committee’s Subcommittee on Privacy, Technology, and the Law will host a hearing this afternoon on Gonzalez v. Google, one of two terrorism-related cases currently before the U.S. Supreme Court that implicate Section 230 of the Communications Decency Act of 1996.

We’ve written before about how the Court might and should rule in Gonzalez (see here and here), but less attention has been devoted to the other Section 230 case on the docket: Twitter v. Taamneh. That’s unfortunate, as a thoughtful reading of the dispute at issue in Taamneh could highlight some of the law’s underlying principles. At first blush, alas, it does not appear that the Court is primed to apply that reading.

During the recent oral arguments, the Court considered whether Twitter (and other social-media companies) can be held liable under the Antiterrorism Act for providing a general communications platform that may be used by terrorists. The question under review by the Court is whether Twitter “‘knowingly’ provided substantial assistance [to terrorist groups] under [the statute] merely because it allegedly could have taken more ‘meaningful’ or ‘aggressive’ action to prevent such use.” Plaintiffs’ (respondents before the Court) theory is, essentially, that Twitter aided and abetted terrorism through its inaction.

The oral argument found the justices grappling with where to draw the line between aiding and abetting, and otherwise legal activity that happens to make it somewhat easier for bad actors to engage in illegal conduct. The nearly three-hour discussion between the justices and the attorneys yielded little in the way of a viable test. But a more concrete focus on the law & economics of collateral liability (which we also describe as “intermediary liability”) would have significantly aided the conversation.   

Taamneh presents a complex question of intermediary liability generally that goes beyond the bounds of a (relatively) simpler Section 230 analysis. As we discussed in our amicus brief in Fleites v. MindGeek (and as briefly described in this blog post), intermediary liability generally cannot be predicated on the mere existence of harmful or illegal content on an online platform that could, conceivably, have been prevented by some action by the platform or other intermediary.

The specific statute may impose other limits (like the “knowing” requirement in the Antiterrorism Act), but intermediary liability makes sense only when a particular intermediary defendant is positioned to control (and thus remedy) the bad conduct in question, and when imposing liability would cause the intermediary to act in such a way that the benefits of its conduct in deterring harm outweigh the costs of impeding its normal functioning as an intermediary.

Had the Court adopted such an approach in its questioning, it could have better homed in on the proper dividing line between parties whose normal conduct might reasonably give rise to liability (absent some heightened effort to mitigate harm) and those whose conduct should not entail this sort of elevated responsibility.

Here, the plaintiffs have framed their case in a way that would essentially collapse this analysis into a strict liability standard by simply asking “did something bad happen on a platform that could have been prevented?” As we discuss below, the plaintiffs’ theory goes too far and would overextend intermediary liability to the point that the social costs would outweigh the benefits of deterrence.

The Law & Economics of Intermediary Liability: Who’s Best Positioned to Monitor and Control?

In our amicus brief in Fleites v. MindGeek (as well as our law review article on Section 230 and intermediary liability), we argued that, in limited circumstances, the law should (and does) place responsibility on intermediaries to monitor and control conduct. It is not always sufficient to aim legal sanctions solely at the parties who commit harms directly—e.g., where harms are committed by many pseudonymous individuals dispersed across large online services. In such cases, social costs may be minimized when legal responsibility is placed upon the least-cost avoider: the party in the best position to limit harm, even if it is not the party directly committing the harm.

Thus, in some circumstances, intermediaries (like Twitter) may be the least-cost avoider, such as when information costs are sufficiently low that effective monitoring and control of end users is possible, and when pseudonymity makes remedies against end users ineffective.

But there are costs to imposing such liability—including, importantly, “collateral censorship” of user-generated content by online social-media platforms. This manifests in platforms acting more defensively—taking down more speech, and generally moving in a direction that would make the Internet less amenable to open, public discussion—in an effort to avoid liability. Indeed, a core reason that Section 230 exists in the first place is to reduce these costs. (Whether Section 230 gets the balance correct is another matter, which we take up at length in our law review article linked above).

From an economic perspective, liability should be imposed on the party or parties best positioned to deter the harms in question, so long as the social costs incurred by, and as a result of, enforcement do not exceed the social gains realized. In other words, there is a delicate balance that must be struck to determine when intermediary liability makes sense in a given case. On the one hand, we want illicit content to be deterred, and on the other, we want to preserve the open nature of the Internet. The costs generated from the over-deterrence of legal, beneficial speech is why intermediary liability for user-generated content can’t be applied on a strict-liability basis, and why some bad content will always exist in the system.
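
One rough way to make that balancing explicit (the notation below is our own illustration, not anything drawn from the case law or the parties’ briefs) is as a simple cost-benefit condition on any candidate intermediary:

\[
\text{impose liability on intermediary } i \quad\text{only if}\quad \Delta H_i \;>\; C_i^{\text{enforce}} + C_i^{\text{collateral}}
\]

where ΔH_i is the reduction in harm from illicit content that intermediary i can realistically achieve by monitoring and controlling its users, C_i^enforce is i’s cost of that monitoring and compliance, and C_i^collateral is the social cost of the legal, beneficial speech that i would over-remove in order to avoid liability. On this framing, the least-cost-avoider point is simply that, among the parties capable of deterring a given harm, liability (if imposed at all) should fall on the party for which the two cost terms are smallest relative to the achievable harm reduction.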

The Spectrum of Properly Construed Intermediary Liability: Lessons from Fleites v. MindGeek

Fleites v. MindGeek illustrates well that the proper application of intermediary liability exists on a spectrum. MindGeek—the owner/operator of the website Pornhub—was sued under Racketeer Influenced and Corrupt Organizations Act (RICO) and Victims of Trafficking and Violence Protection Act (TVPA) theories for promoting and profiting from nonconsensual pornography and human trafficking. But the plaintiffs also joined Visa as a defendant, claiming that Visa knowingly provided payment processing for some of Pornhub’s services, making it an aider/abettor.

The “best” defendants, obviously, would be the individuals actually producing the illicit content, but limiting enforcement to direct actors may be insufficient. The statute therefore contemplates bringing enforcement actions against certain intermediaries for aiding and abetting. But there are a host of intermediaries you could theoretically bring into a liability scheme. First, obviously, is Mindgeek, as the platform operator. Plaintiffs felt that Visa was also sufficiently connected to the harm by processing payments for MindGeek users and content posters, and that it should therefore bear liability, as well.

The problem, however, is that there is no limiting principle in the plaintiffs’ theory of the case against Visa. Theoretically, the group of intermediaries “facilitating” the illicit conduct is practically limitless. As we pointed out in our Fleites amicus:

In theory, any sufficiently large firm with a role in the commerce at issue could be deemed liable if all that is required is that its services “allow[]” the alleged principal actors to continue to do business. FedEx, for example, would be liable for continuing to deliver packages to MindGeek’s address. The local waste management company would be liable for continuing to service the building in which MindGeek’s offices are located. And every online search provider and Internet service provider would be liable for continuing to provide service to anyone searching for or viewing legal content on MindGeek’s sites.

Twitter’s attorney in Taamneh, Seth Waxman, made much the same point in responding to Justice Sonia Sotomayor:

…the rule that the 9th Circuit has posited and that the plaintiffs embrace… means that as a matter of course, every time somebody is injured by an act of international terrorism committed, planned, or supported by a foreign terrorist organization, each one of these platforms will be liable in treble damages and so will the telephone companies that provided telephone service, the bus company or the taxi company that allowed the terrorists to move about freely. [emphasis added]

In our Fleites amicus, we argued that a more practical approach is needed, one that tries to draw a sensible line along this liability spectrum. Most importantly, we argued that Visa was not in a position to monitor and control what happened on MindGeek’s platform, and thus was a poor candidate for extending intermediary liability. In that case, because of the complexities of the payment-processing network, Visa had no visibility into what specific content was being purchased, what content was being uploaded to Pornhub, or which individuals may have been uploading illicit content. Worse, the only evidence—if it can be called that—that Visa was aware that anything illicit was happening consisted of news reports in the mainstream media, which may or may not have been accurate, and on which Visa was unable to do any meaningful follow-up investigation.

Our Fleites brief didn’t explicitly consider MindGeek’s potential liability. But MindGeek obviously is in a much better position to monitor and control illicit content. With that said, merely having the ability to monitor and control is not sufficient. Given that content moderation is necessarily an imperfect activity, there will always be some bad content that slips through. Thus, the relevant question is, under the circumstances, did the intermediary act reasonably—e.g., did it comply with best practices—in attempting to identify, remove, and deter illicit content?

In Visa’s case, the answer is not difficult. Given that it had no way to know about or single out transactions as likely to be illegal, its only recourse to reduce harm (and its liability risk) would be to cut off all payment services for MindGeek. The costs this would impose on perfectly legal conduct would certainly far outweigh the benefits of reducing illegal activity.

Moreover, such a theory could require Visa to stop processing payments for an enormous swath of legal activity outside of Pornhub. For example, purveyors of illegal content on Pornhub use ISP services to post their content. A theory of liability that held Visa responsible simply because it plays some small part in facilitating the illegal activity’s existence would presumably also require Visa to shut off payments to ISPs—certainly, that would also curtail the amount of illegal content.

With MindGeek, the answer is a bit more difficult. The anonymous or pseudonymous posting of pornographic content makes it extremely difficult to go after end users. But knowing that human trafficking and nonconsensual pornography are endemic problems, and knowing (as it arguably did) that such content was regularly posted on Pornhub, MindGeek could be deemed to have acted unreasonably for not exercising very strict control over its users (at minimum, say, by verifying users’ real identities to facilitate law enforcement against bad actors and deter the posting of illegal content). Indeed, it is worth noting that MindGeek/Pornhub did implement exactly this control, among others, following the public attention arising from news reports of nonconsensual and trafficking-related content on the site.

But liability for MindGeek is plausible only because it might be able to act in ways that impose greater burdens on illegal content providers without deterring excessive amounts of legal content. If its only reasonable means of acting would be, say, to shut down Pornhub entirely, then, just as with Visa, the cost of imposing liability in terms of this “collateral censorship” would surely outweigh the benefits.

Applying the Law & Economics of Collateral Liability to Twitter in Taamneh

Contrast the situation of MindGeek in Fleites with Twitter in Taamneh. Twitter may seem to be a good candidate for intermediary liability. It also has the ability to monitor and control what is posted on its platform. And it faces a similar problem of pseudonymous posting that may make it difficult to go after end users for terrorist activity. But this is not the end of the analysis.

Given that Twitter operates a platform that hosts the general—and overwhelmingly legal—discussions of hundreds of millions of users, posting billions of pieces of content, it would be reasonable to impose a heightened responsibility on Twitter only if it could exercise it without excessively deterring the copious legal content on its platform.

At the same time, Twitter does have active policies to police and remove terrorist content. The relevant question, then, is not whether it should do anything to police such content, but whether a failure to do some unspecified amount more was unreasonable, such that its conduct should constitute aiding and abetting terrorism.

Under the Antiterrorism Act, because the basis of liability is “knowingly providing substantial assistance” to a person who committed an act of international terrorism, “unreasonableness” here would have to mean that the failure to do more transforms its conduct from insubstantial to substantial assistance and/or that the failure to do more constitutes a sort of willful blindness. 

The problem is that doing more—policing its site better and removing more illegal content—would do nothing to alter the extent of assistance it provides to the illegal content that remains. And by the same token, almost by definition, Twitter does not “know” about illegal content it fails to remove. In theory, there is always “more” it could do. But given the inherent imperfections of content moderation at scale, this will always be true, right up to the point that the platform is effectively forced to shut down its service entirely.  

This doesn’t mean that reasonable content moderation couldn’t entail something more than Twitter was doing. But it does mean that the mere existence of illegal content that, in theory, Twitter could have stopped can’t be the basis of liability. And yet the Taamneh plaintiffs make no allegation that acts of terrorism were actually planned on Twitter’s platform, and offer no reasonable basis on which Twitter could have practical knowledge of such activity or practical opportunity to control it.

Nor did plaintiffs point out any examples where Twitter had actual knowledge of such content or users and failed to remove them. Most importantly, the plaintiffs did not demonstrate that any particular content-moderation activities (short of shutting down Twitter entirely) would have resulted in Twitter’s knowledge of or ability to control terrorist activity. Had they done so, it could conceivably constitute a basis for liability. But if the only practical action Twitter can take to avoid liability and prevent harm entails shutting down massive amounts of legal speech, the failure to do so cannot be deemed unreasonable or provide the basis for liability.   

And, again, such a theory of liability would contain no viable limiting principle if it does not consider the practical ability to control harmful conduct without imposing excessively costly collateral damage. Indeed, what in principle would separate a search engine from Twitter, if the search engine linked to an alleged terrorist’s account? Both entities would have access to news reports, and could thus be assumed to have a generalized knowledge that terrorist content might exist on Twitter. The implication of this case, if the plaintiffs’ theory is accepted, is that Google would be forced to delist Twitter whenever a news article appears alleging terrorist activity on the service. Obviously, that is untenable for the same reason it’s untenable to impose an effective obligation on Twitter to remove all terrorist content: the costs of lost legal speech and activity.

Justice Ketanji Brown Jackson seemingly had the same thought when she pointedly asked whether the plaintiffs’ theory would mean that Linda Hamilton in the Halberstam v. Welch case could have been held liable for aiding and abetting, merely for taking care of Bernard Welch’s kids at home while Welch went out committing burglaries and the murder of Michael Halberstam (instead of the real reason she was held liable, which was for doing Welch’s bookkeeping and helping sell stolen items). As Jackson put it:

…[I]n the Welch case… her taking care of his children [was] assisting him so that he doesn’t have to be at home at night? He’s actually out committing robberies. She would be assisting his… illegal activities, but I understood that what made her liable in this situation is that the assistance that she was providing was… assistance that was directly aimed at the criminal activity. It was not sort of this indirect supporting him so that he can actually engage in the criminal activity.

In sum, the theory propounded by the plaintiffs (and accepted by the 9th U.S. Circuit Court of Appeals) is just too far afield for holding Twitter liable. As Twitter put it in its reply brief, the plaintiffs’ theory (and the 9th Circuit’s holding) is that:

…providers of generally available, generic services can be held responsible for terrorist attacks anywhere in the world that had no specific connection to their offerings, so long as a plaintiff alleges (a) general awareness that terrorist supporters were among the billions who used the services, (b) such use aided the organization’s broader enterprise, though not the specific attack that injured the plaintiffs, and (c) the defendant’s attempts to preclude that use could have been more effective.

Conclusion

If Section 230 immunity isn’t found to apply in Gonzalez v. Google, and the complaint in Taamneh is allowed to go forward, the most likely response of social-media companies will be to reduce the potential for liability by further restricting access to their platforms. This could mean review by some moderator or algorithm of messages or videos before they are posted to ensure that there is no terrorist content. Or it could mean review of users’ profiles before they are able to join the platform to try to ascertain their political leanings or associations with known terrorist groups. Such restrictions would entail copious false negatives, along with considerable costs to users and to open Internet speech.

And in the end, some amount of terrorist content would still get through. If the plaintiffs’ theory leads to liability in Taamneh, it’s hard to see how the same principle wouldn’t entail liability even under these theoretical, heightened practices. Absent a focus on an intermediary defendant’s ability to control harmful content or conduct, without imposing excessive costs on legal content or conduct, the theory of liability has no viable limit.

In sum, to hold Twitter (or other social-media platforms) liable under the facts presented in Taamneh would stretch intermediary liability far beyond its sensible bounds. While Twitter may seem a good candidate for intermediary liability in theory, it should not be held liable for, in effect, simply providing its services.

Perhaps Section 230’s blanket immunity is excessive. Perhaps there is a proper standard that could impose liability on online intermediaries for user-generated content in limited circumstances properly tied to their ability to control harmful actors and the costs of doing so. But liability in the circumstances suggested by the Taamneh plaintiffs—effectively amounting to strict liability—would be an even bigger mistake in the opposite direction.

It seems that large language models (LLMs) are all the rage right now, from Bing’s announcement that it plans to integrate the ChatGPT technology into its search engine to Google’s announcement of its own LLM called “Bard” to Meta’s recent introduction of its Large Language Model Meta AI, or “LLaMA.” Each of these LLMs uses artificial intelligence (AI) to create text-based answers to questions.

But it certainly didn’t take long after these innovative new applications were introduced for reports to emerge of LLMs just plain getting facts wrong. Given this, it is worth asking: how will the law deal with AI-created misinformation?

Among the first questions courts will need to grapple with is whether Section 230 immunity applies to content produced by AI. Of course, the U.S. Supreme Court already has a major Section 230 case on its docket with Gonzalez v. Google. Indeed, during oral arguments for that case, Justice Neil Gorsuch appeared to suggest that AI-generated content would not receive Section 230 immunity. And existing case law would appear to support that conclusion, as LLM content is developed by the interactive computer service itself, not by its users.

Another question raised by the technology is what legal avenues would be available to those seeking to challenge the misinformation. Under the First Amendment, the government can only regulate false speech under very limited circumstances. One of those is defamation, which seems like the most logical cause of action to apply. But under defamation law, plaintiffs—especially public figures, who are the most likely litigants and who must prove “actual malice”—may have a difficult time proving the AI acted with the necessary state of mind to sustain a cause of action.

Section 230 Likely Does Not Apply to Information Developed by an LLM

Section 230(c)(1) states:

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

The law defines an interactive computer service as “any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server, including specifically a service or system that provides access to the Internet and such systems operated or services offered by libraries or educational institutions.”

The access software provider portion of that definition includes any tool that can “filter, screen, allow, or disallow content; pick, choose, analyze, or digest content; or transmit, receive, display, forward, cache, search, subset, organize, reorganize, or translate content.”

And finally, an information content provider is “any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service.”

Taken together, Section 230(c)(1) gives online platforms (“interactive computer services”) broad immunity for user-generated content (“information provided by another information content provider”). This even covers circumstances where the online platform (acting as an “access software provider”) engages in a great deal of curation of the user-generated content.

Section 230(c)(1) does not, however, protect information created by the interactive computer service itself.

There is case law to help determine whether content is created or developed by the interactive computer service. Online platforms applying “neutral tools” to help organize information have not lost immunity. As the 9th U.S. Circuit Court of Appeals put it in Fair Housing Council v. Roommates.com:

Providing neutral tools for navigating websites is fully protected by CDA immunity, absent substantial affirmative conduct on the part of the website creator promoting the use of such tools for unlawful purposes.

On the other hand, online platforms are liable for content they create or develop, which does not include “augmenting the content generally,” but does include “materially contributing to its alleged unlawfulness.” 

The question here is whether the text-based answers provided by LLM apps like Bing’s Sydney or Google’s Bard comprise content created or developed by those online platforms. One could argue that LLMs are neutral tools simply rearranging information from other sources for display. It seems clear, however, that the LLM is synthesizing information to create new content. The use of AI to answer a question, rather than a human agent of Google or Microsoft, doesn’t seem relevant to whether or not it was created or developed by those companies. (Though, as Matt Perault notes, how LLMs are integrated into a product matters. If an LLM just helps “determine which search results to prioritize or which text to highlight from underlying search results,” then it may receive Section 230 protection.)
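To see why the “synthesis” characterization fits, it may help to look at a deliberately toy sketch of next-word generation. This is my own illustration in Python, with a hand-written probability table standing in for a trained model; it is not how Bard, Sydney, or any production LLM is actually built, but it captures the basic mechanic of sampling an output one token at a time:

```python
# Toy next-word generator: a hand-written probability table stands in for a
# trained model. Real LLMs learn these distributions from enormous corpora and
# condition on far more context, but the spirit of sampling one token at a
# time is the same.
import random

MODEL = {
    "<start>":   {"the": 0.6, "a": 0.4},
    "the":       {"platform": 0.5, "court": 0.5},
    "a":         {"platform": 0.7, "court": 0.3},
    "platform":  {"moderates": 1.0},
    "court":     {"decides": 1.0},
    "moderates": {"content": 1.0},
    "decides":   {"cases": 1.0},
    "content":   {"<end>": 1.0},
    "cases":     {"<end>": 1.0},
}

def generate(seed: int = 0) -> str:
    """Sample one short sentence, word by word, from the toy model."""
    rng = random.Random(seed)
    word, output = "<start>", []
    while True:
        nxt = MODEL[word]
        words = list(nxt)
        weights = [nxt[w] for w in words]
        word = rng.choices(words, weights=weights)[0]
        if word == "<end>":
            return " ".join(output)
        output.append(word)

if __name__ == "__main__":
    print(generate(seed=1))  # prints a sentence assembled by the model itself
```

Even in this stripped-down form, the string that comes out is assembled by the model rather than copied from any single third-party document, which is the crux of why LLM output looks like content “developed” by the service itself rather than “information provided by another information content provider.”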

The technology itself gives text-based answers based on inputs from the questioner. LLMs use AI-trained engines to guess the next word based on troves of data from the internet. While the information may come from third parties, the creation of the content itself is due to the LLM. As ChatGPT put it in response to my query here:

Proving Defamation by AI

In the absence of Section 230 immunity, there is still the question of how one could hold Google’s Bard or Microsoft’s Sydney accountable for purveying misinformation. There are no laws against false speech in general, nor can there be, since the Supreme Court declared such speech was protected in United States v. Alvarez. There are, however, categories of false speech, like defamation and fraud, which have been found to lie outside of First Amendment protection.

Defamation is the most logical cause of action that could be brought for false information provided by an LLM app. But it is worth noting that these LLM apps are unlikely to “know” much of anything about people who have not received significant public recognition (believe me, I tried to get ChatGPT to tell me something about myself—alas, I’m not famous enough). On top of that, those most likely to suffer significant damages from having their reputations harmed by falsehoods spread online are those who are in the public eye. This means that, for purposes of a defamation suit, it is public figures who are most likely to sue.

As an example, if ChatGPT answers the question of whether Johnny Depp is a wife-beater by saying that he is, contrary to one court’s finding (but consistent with another’s), Depp could sue the creators of the service for defamation. He would have to prove that a false statement was publicized to a third party that resulted in damages to him. For the sake of argument, let’s say he can do both. The case still isn’t proven because, as a public figure, he would also have to prove “actual malice.”

Under New York Times v. Sullivan and its progeny, a public figure must prove the defendant acted with “actual malice” when publicizing false information about the plaintiff. Actual malice is defined as “knowledge that [the statement] was false or with reckless disregard of whether it was false or not.”

The question arises whether actual malice can be attributed to an LLM. It seems unlikely that it could be said that the AI’s creators trained it in a way that they “knew” the answers provided would be false. But it may be a more interesting question whether the LLM is giving answers with “reckless disregard” of their truth or falsity. One could argue that these early versions of the technology are doing exactly that, but the underlying AI is likely to improve over time with feedback. The best time for a plaintiff to sue may be now, when the LLMs are still in their infancy and giving false answers more often.

It is possible that, given enough context in the query, LLM-empowered apps may be able to recognize private figures, and get things wrong. For instance, when I asked ChatGPT to give a biography of myself, I got no results:

When I added my workplace, I did get a biography, but none of the relevant information was about me. It was instead about my boss, Geoffrey Manne, the president of the International Center for Law & Economics:

While none of this biography is true, it doesn’t harm my reputation, nor does it give rise to damages. But it is at least theoretically possible that an LLM could make a defamatory statement against a private person. In such a case, a lower burden of proof would apply to the plaintiff, that of negligence, i.e., that the defendant published a false statement of fact that a reasonable person would have known was false. This burden would be much easier to meet if the AI had not been sufficiently trained before being released upon the public.

Conclusion

While it is unlikely that a service like ChatGPT would receive Section 230 immunity, it also seems unlikely that a plaintiff would be able to sustain a defamation suit against it for false statements. The most likely type of plaintiff (public figures) would encounter difficulty proving the necessary element of “actual malice.” The best chance for a lawsuit to proceed may be against the early versions of this service—rolled out quickly and to much fanfare, while still in a beta stage in terms of accuracy—as a colorable argument can be made that they are giving false answers in “reckless disregard” of their truthfulness.

In our previous post on Gonzalez v. Google LLC, which will come before the U.S. Supreme Court for oral arguments Feb. 21, Kristian Stout and I argued that, while the U.S. Justice Department (DOJ) got the general analysis right (looking to Roommates.com as the framework for exceptions to the general protections of Section 230), they got the application wrong (saying that algorithmic recommendations should be excepted from immunity).

Now, after reading Google’s brief, as well as the briefs of amici on their side, it is even more clear to me that:

  1. algorithmic recommendations are protected by Section 230 immunity; and
  2. creating an exception for such algorithms would severely damage the internet as we know it.

I address these points in reverse order below.

Google on the Death of the Internet Without Algorithms

The central point that Google makes throughout its brief is that a finding that Section 230’s immunity does not extend to the use of algorithmic recommendations would have potentially catastrophic implications for the internet economy. Google and amici for respondents emphasize the ubiquity of recommendation algorithms:

Recommendation algorithms are what make it possible to find the needles in humanity’s largest haystack. The result of these algorithms is unprecedented access to knowledge, from the lifesaving (“how to perform CPR”) to the mundane (“best pizza near me”). Google Search uses algorithms to recommend top search results. YouTube uses algorithms to share everything from cat videos to Heimlich-maneuver tutorials, algebra problem-solving guides, and opera performances. Services from Yelp to Etsy use algorithms to organize millions of user reviews and ratings, fueling global commerce. And individual users “like” and “share” content millions of times every day. – Brief for Respondent Google, LLC at 2.

The “recommendations” they challenge are implicit, based simply on the manner in which YouTube organizes and displays the multitude of third-party content on its site to help users identify content that is of likely interest to them. But it is impossible to operate an online service without “recommending” content in that sense, just as it is impossible to edit an anthology without “recommending” the story that comes first in the volume. Indeed, since the dawn of the internet, virtually every online service—from news, e-commerce, travel, weather, finance, politics, entertainment, cooking, and sports sites, to government, reference, and educational sites, along with search engines—has had to highlight certain content among the thousands or millions of articles, photographs, videos, reviews, or comments it hosts to help users identify what may be most relevant. Given the sheer volume of content on the internet, efforts to organize, rank, and display content in ways that are useful and attractive to users are indispensable. As a result, exposing online services to liability for the “recommendations” inherent in those organizational choices would expose them to liability for third-party content virtually all the time. – Amicus Brief for Meta Platforms at 3-4.

In other words, if Section 230 were limited in the way that the plaintiffs (and the DOJ) seek, internet platforms’ ability to offer users useful information would be strongly attenuated, if not completely impaired. The resulting legal exposure would lead inexorably to far less of the kinds of algorithmic recommendations upon which the modern internet is built.

This is, in part, why we weren’t able to fully endorse the DOJ’s brief in our previous post. The DOJ’s brief simply goes too far. It would be unreasonable to establish as a categorical rule that use of the ubiquitous auto-discovery algorithms that power so much of the internet would strip a platform of Section 230 protection. The general rule advanced by the DOJ’s brief would have detrimental and far-ranging implications.

Amici on Publishing and Section 230(f)(4)

Google and the amici also make a strong case that algorithmic recommendations are inseparable from publishing. They have a strong textual hook in Section 230(f)(4), which explicitly protects “enabling tools that… filter, screen, allow, or disallow content; pick, choose, analyze, or digest content; or transmit, receive, display, forward, cache, search, subset, organize, reorganize, or translate content.”

As the amicus brief from a group of internet-law scholars—including my International Center for Law & Economics colleagues Geoffrey Manne and Gus Hurwitz—put it:

Section 230’s text should decide this case. Section 230(c)(1) immunizes the user or provider of an “interactive computer service” from being “treated as the publisher or speaker” of information “provided by another information content provider.” And, as Section 230(f)’s definitions make clear, Congress understood the term “interactive computer service” to include services that “filter,” “screen,” “pick, choose, analyze,” “display, search, subset, organize,” or “reorganize” third-party content. Automated recommendations perform exactly those functions, and are therefore within the express scope of Section 230’s text. – Amicus Brief of Internet Law Scholars at 3-4.

In other words, Section 230 protects not just the conveyance of information, but how that information is displayed. Algorithmic recommendations are a subset of those display tools that allow users to find what they are looking for with ease. Section 230 can’t be reasonably read to exclude them.

Why This Isn’t Really (Just) a Roommates.com Case

This is where the DOJ’s amicus brief (and our previous analysis) misses the point. This is not strictly a Roommates.com case. The case actually turns on whether algorithmic recommendations are separable from the publication of third-party content, rather than on whether they are design choices akin to what was occurring in that case.

For instance, in our previous post, we argued that:

[T]he DOJ argument then moves onto thinner ice. The DOJ believes that the 230 liability shield in Gonzalez depends on whether an automated “recommendation” rises to the level of development or creation, as the design of filtering criteria in Roommates.com did.

While we thought the DOJ went too far in differentiating algorithmic recommendations from other uses of algorithms, we gave them too much credit in applying the Roommates.com analysis. Section 230 was meant to immunize filtering tools, so long as the information provided is from third parties. Algorithmic recommendations—like the type at issue with YouTube’s “Up Next” feature—are less like the conduct in Roommates.com and much more like a search engine.

The DOJ did, however, have a point regarding algorithmic tools in that they may—like any other tool a platform might use—be employed in a way that transforms the automated promotion into a direct endorsement or original publication. For instance, it’s possible to use algorithms to intentionally amplify certain kinds of content in such a way as to cultivate more of that content.

That’s, after all, what was at the heart of Roommates.com. The site was designed to elicit responses from users that violated the law. Algorithms can do that, but as we observed previously, and as the many amici in Gonzalez observe, there is nothing inherent to the operation of algorithms that match users with content that makes their use categorically incompatible with Section 230’s protections.

Conclusion

After looking at the textual and policy arguments forwarded by both sides in Gonzalez, it appears that Google and amici for respondents have the better of it. As several amici argued, to the extent there are good reasons to reform Section 230, Congress should take the lead. The Supreme Court shouldn’t take this case as an opportunity to significantly change the consensus of the appellate courts on the broad protections of Section 230 immunity.

Late next month, the U.S. Supreme Court will hear oral arguments in Gonzalez v. Google LLC, a case that has drawn significant attention and many bad takes regarding how Section 230 of the Communications Decency Act should be interpreted. Enacted in the mid-1990s, when the Internet as we know it was still in its infancy, Section 230 has grown into a law that offers online platforms a fairly comprehensive shield against liability for the content that third parties post to their services. But the law has also come increasingly under fire, from both the political left and the right.

At issue in Gonzalez is whether Section 230(c)(1) immunizes Google from a set of claims brought under the Antiterrorism Act of 1990 (ATA). The petitioners are relatives of Nohemi Gonzalez, an American citizen murdered in a 2015 terrorist attack in Paris. They allege that Google, through YouTube, is liable under the ATA for providing assistance to ISIS, for four main reasons:

  1. Google allowed ISIS to use YouTube to disseminate videos and messages, thereby recruiting and radicalizing terrorists responsible for the murder.
  2. Google failed to take adequate steps to take down videos and accounts and keep them down.
  3. Google recommends videos of others, both through subscriptions and algorithms.
  4. Google monetizes this content through its AdSense service, with ISIS-affiliated users receiving revenue. 

The 9th U.S. Circuit Court of Appeals dismissed all of the non-revenue-sharing claims as barred by Section 230(c)(1), but allowed the revenue-sharing claim to go forward. 

Highlights of DOJ’s Brief

In an amicus brief, the U.S. Justice Department (DOJ) ultimately asks the Court to vacate the 9th Circuit’s judgment regarding those claims that are based on YouTube’s alleged targeted recommendations of ISIS content. But the DOJ also rejects much of the petitioner’s brief, arguing that Section 230 does rightfully apply to the rest of the claims. 

The crux of the DOJ’s brief concerns when and how design choices can be outside of Section 230 immunity. The lodestar 9th Circuit case that the DOJ brief applies is 2008’s Fair Housing Council of San Fernando Valley v. Roommates.com.

As the DOJ notes, radical theories advanced by the plaintiffs and other amici would go too far in restricting Section 230 immunity based on a platform’s decisions on whether or not to block or remove user content (see, e.g., its discussion on pp. 17-21 of the merits and demerits of Justice Clarence Thomas’s Malwarebytes concurrence).  

At the same time, the DOJ’s brief notes that there is room for a reasonable interpretation of Section 230 that allows for liability to attach when online platforms behave unreasonably in their promotion of users’ content. Applying essentially the 9th Circuit’s Roommates.com standard, the DOJ argues that YouTube’s choice to amplify certain terrorist content through its recommendations algorithm is a design choice, rather than simply the hosting of third-party content, thereby removing it from the scope of Section 230 immunity.

While there is much to be said in favor of this approach, it’s important to point out that, although directionally correct, it’s not at all clear that a Roommates.com analysis should ultimately come down as the DOJ recommends in Gonzalez. More broadly, the way the DOJ structures its analysis has important implications for how we should think about the scope of Section 230 reform that attempts to balance accountability for intermediaries with avoiding undue collateral censorship.

Charting a Middle Course on Immunity

The important point on which the DOJ relies from Roommates.com is that intermediaries can be held accountable when their own conduct creates violations of the law, even if it involves third-party content. As the DOJ brief puts it:

Section 230(c)(1) protects an online platform from claims premised on its dissemination of third-party speech, but the statute does not immunize a platform’s other conduct, even if that conduct involves the solicitation or presentation of third-party content. The Ninth Circuit’s Roommates.com decision illustrates the point in the context of a website offering a roommate-matching service… As a condition of using the service, Roommates.com “require[d] each subscriber to disclose his sex, sexual orientation and whether he would bring children to a household,” and to “describe his preferences in roommates with respect to the same three criteria.” Ibid. The plaintiffs alleged that asking those questions violated housing-discrimination laws, and the court of appeals agreed that Section 230(c)(1) did not shield Roommates.com from liability for its “own acts” of “posting the questionnaire and requiring answers to it.” Id. at 1165.

Imposing liability in such circumstances does not treat online platforms as the publishers or speakers of content provided by others. Nor does it obligate them to monitor their platforms to detect objectionable postings, or compel them to choose between “suppressing controversial speech or sustaining prohibitive liability.”… Illustrating that distinction, the Roommates.com court held that although Section 230(c)(1) did not apply to the website’s discriminatory questions, it did shield the website from liability for any discriminatory third-party content that users unilaterally chose to post on the site’s “generic” “Additional Comments” section…

The DOJ proceeds from this basis to analyze what it would take for Google (via YouTube) to no longer benefit from Section 230 immunity by virtue of its own editorial actions, as opposed to its actions as a publisher (which 230 would still protect). For instance, are the algorithmic suggestions of videos simply neutral tools that allow for users to get more of the content they desire, akin to search results? Or are the algorithmic suggestions of new videos a design choice that makes it akin to Roommates?

The DOJ argues that taking steps to better display pre-existing content is not content development or creation, in and of itself. Similarly, it would be a mistake to make intermediaries liable for creating tools that can then be deployed by users:

Interactive websites invariably provide tools that enable users to create, and other users to find and engage with, information. A chatroom might supply topic headings to organize posts; a photo-sharing site might offer a feature for users to signal that they like or dislike a post; a classifieds website might enable users to add photos or maps to their listings. If such features rendered the website a co-developer of all users’ content, Section 230(c)(1) would be a dead letter.

At a high level, this is correct. Unfortunately, the DOJ argument then moves onto thinner ice. The DOJ believes that the 230 liability shield in Gonzalez depends on whether an automated “recommendation” rises to the level of development or creation, as the design of filtering criteria in Roommates.com did. Toward this end, the brief notes that:

The distinction between a recommendation and the recommended content is particularly clear when the recommendation is explicit. If YouTube had placed a selected ISIS video on a user’s homepage alongside a message stating, “You should watch this,” that message would fall outside Section 230(c)(1). Encouraging a user to watch a selected video is conduct distinct from the video’s publication (i.e., hosting). And while YouTube would be the “publisher” of the recommendation message itself, that message would not be “information provided by another information content provider.” 47 U.S.C. 230(c)(1).

An Absence of Immunity Does Not Mean a Presence of Liability

Importantly, the DOJ brief emphasizes throughout that remanding the ATA claims is not the end of the analysis—i.e., it does not mean that the plaintiffs can prove the elements. Moreover, other background law—notably, the First Amendment—can limit the application of liability to intermediaries, as well. As we put it in our paper on Section 230 reform:

It is important to again note that our reasonableness proposal doesn’t change the fact that the underlying elements in any cause of action still need to be proven. It is those underlying laws, whether civil or criminal, that would possibly hold intermediaries liable without Section 230 immunity. Thus, for example, those who complain that FOSTA/SESTA harmed sex workers by foreclosing a safe way for them to transact (illegal) business should really be focused on the underlying laws that make sex work illegal, not the exception to Section 230 immunity that FOSTA/SESTA represents. By the same token, those who assert that Section 230 improperly immunizes “conservative bias” or “misinformation” fail to recognize that, because neither of those is actually illegal (nor could they be under First Amendment law), Section 230 offers no additional immunity from liability for such conduct: There is no underlying liability from which to provide immunity in the first place.

There’s a strong likelihood that, on remand, the court will find there is no violation of the ATA at all. Section 230 immunity need not be stretched beyond all reasonable limits to protect intermediaries from hypothetical harms when underlying laws often don’t apply. 

Conclusion

To date, the contours of Section 230 reform largely have been determined by how courts interpret the statute. There is an emerging consensus that some courts have gone too far in extending Section 230 immunity to intermediaries. The DOJ’s brief is directionally correct, but the Court should not adopt it wholesale. More needs to be done to ensure that the particular facts of Gonzalez are not used to completely gut Section 230 more generally.  

Twitter has seen a lot of ups and downs since Elon Musk closed on his acquisition of the company in late October and almost immediately set about his initiatives to “reform” the platform’s operations.

One of the stories that has gotten somewhat lost in the ensuing chaos is that, in the short time under Musk, Twitter has made significant inroads—on at least some margins—against the visibility of child sexual abuse material (CSAM) by removing major hashtags that were used to share it, creating a direct reporting option, and removing major purveyors. On the other hand, due to the large reductions in Twitter’s workforce—both voluntary and involuntary—there are now very few human reviewers left to deal with the issue.

Section 230 immunity currently protects online intermediaries from most civil suits for CSAM (a narrow carveout is made under Section 1595 of the Trafficking Victims Protection Act). While the federal government could bring criminal charges if it believes online intermediaries are violating federal CSAM laws, and certain narrow state criminal claims could be brought consistent with federal law, private litigants are largely left without the ability to find redress on their own in the courts.

This, among other reasons, is why there has been a push to amend Section 230 immunity. Our proposal (co-authored with Geoffrey Manne) suggests that online intermediaries should be subject to a reasonable duty of care to remove illegal content. But this still requires thinking carefully about what a reasonable duty of care entails.

For instance, one of the big splash moves made by Twitter after Musk’s acquisition was to remove major CSAM distribution hashtags. While this did limit visibility of CSAM for a time, some experts say it doesn’t really solve the problem, as new hashtags will arise. So, would a reasonableness standard require the periodic removal of major hashtags? Perhaps it would. It appears to have been a relatively low-cost way to reduce access to such material, and could theoretically be incorporated into a larger program that uses automated discovery to find and remove future hashtags.
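To make “automated discovery” a bit more concrete, here is a minimal sketch of one possible approach, assuming (hypothetically) that a platform maintains a blocklist of known bad hashtags and can scan recent posts: flag not-yet-blocked hashtags that frequently co-occur with blocked ones and route them to human reviewers. Nothing here reflects Twitter’s actual systems; it is only meant to show that this kind of follow-on monitoring is a tractable, relatively low-cost engineering task.

```python
# Hypothetical sketch: surface candidate hashtags for human review by counting
# how often unblocked tags appear in posts alongside already-blocked tags.
# This does not reflect any platform's real moderation pipeline.
from collections import Counter
from typing import Iterable

def flag_candidate_hashtags(
    posts: Iterable[set[str]],    # each post represented as the set of hashtags it uses
    blocklist: set[str],          # hashtags already removed or blocked
    min_cooccurrences: int = 25,  # review threshold; a tunable policy choice
) -> list[str]:
    """Return unblocked hashtags that frequently co-occur with blocked ones,
    ordered by co-occurrence count, for escalation to human reviewers."""
    cooccurrence: Counter[str] = Counter()
    for tags in posts:
        if tags & blocklist:                 # the post uses at least one blocked tag
            for tag in tags - blocklist:     # count its not-yet-blocked companions
                cooccurrence[tag] += 1
    return [tag for tag, n in cooccurrence.most_common() if n >= min_cooccurrences]

if __name__ == "__main__":
    # Made-up example data with placeholder hashtags.
    posts = [
        {"#blockedtag", "#newtag"},
        {"#blockedtag", "#newtag", "#cats"},
        {"#cats"},
    ]
    print(flag_candidate_hashtags(posts, blocklist={"#blockedtag"}, min_cooccurrences=2))
    # -> ['#newtag']
```

Routing flagged tags to reviewers rather than removing them automatically is one way to keep the cost of false positives for lawful content low, which matters for exactly the kind of cost-benefit comparison that follows.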

Of course it won’t be perfect, and will be subject to something of a Whac-A-Mole dynamic. But the relevant question isn’t whether it’s a perfect solution, but whether it yields significant benefit relative to its cost, such that it should be regarded as a legally reasonable measure that platforms should broadly implement.

On the flip side, Twitter has lost such a large amount of its workforce that it potentially no longer has enough staff to do the important review of CSAM. As long as Twitter allows adult nudity, and algorithms are unable to effectively distinguish between different types of nudity, human reviewers remain essential. A reasonableness standard might also require sufficient staff and funding dedicated to reviewing posts for CSAM. 

But what does it mean for a platform to behave “reasonably”?

Platforms Should Behave ‘Reasonably’

Rethinking platforms’ safe harbor from liability as governed by a “reasonableness” standard offers a way to more effectively navigate the complexities of these tradeoffs without resorting to the binary of immunity or total liability that typically characterizes discussions of Section 230 reform.

It could be the case that, given the reality that machines can’t distinguish between “good” and “bad” nudity, it is patently unreasonable for an open platform to allow any nudity at all if it is run with the level of staffing that Musk seems to prefer for Twitter.

Consider the situation that MindGeek faced a couple of years ago. It was pressured by financial providers, including PayPal and Visa, to clean up the CSAM and nonconsensual pornography that appeared on its websites. In response, it removed more than 80% of suspected illicit content and required greater authentication for posting.

Notwithstanding efforts to clean up the service, a lawsuit was filed against MindGeek and Visa by victims who asserted that the credit-card company was a knowing conspirator for processing payments to MindGeek’s sites when they were purveying child pornography. Notably, Section 230 issues were dismissed early on in the case, but the remaining claims—rooted in the Racketeer Influenced and Corrupt Organizations Act (RICO) and the Trafficking Victims Protection Act (TVPA)—contained elements that support evaluating the conduct of online intermediaries, including payment providers who support online services, through a reasonableness lens.

In our amicus, we stressed the broader policy implications of failing to appropriately demarcate the bounds of liability. In short, we argued that deterrence is best encouraged by placing responsibility for control on the party most closely able to monitor the situation—i.e., MindGeek, and not Visa. Underlying this, we believe that an appropriately tuned reasonableness standard should be able to foreclose these sorts of inquiries at early stages of litigation if there is good evidence that an intermediary behaved reasonably under the circumstances.

In this case, we believed the court should have taken seriously the fact that a payment processor needs to balance a number of competing demands—legal, economic, and moral—in a way that enables it to serve its necessary prosocial role. Here, Visa had to balance its role, on the one hand, as a neutral intermediary responsible for handling millions of daily transactions with, on the other, its interest in ensuring that it did not facilitate illegal behavior. But it also was operating, essentially, under a veil of ignorance: all of the information it had was derived from news reports, as it was not directly involved in, nor did it have special insight into, the operation of MindGeek’s businesses.

As we stressed in our intermediary-liability paper, there is a valid concern that changes to intermediary-liability policy could invite a flood of ruinous litigation. To guard against that, there needs to be some ability to determine at the early stages of litigation whether a defendant behaved reasonably under the circumstances. In the MindGeek case, we believed that Visa did.

In essence, much of this approach to intermediary liability boils down to finding socially and economically efficient dividing lines that can broadly demarcate when liability should attach. For example, if Visa is liable as a co-conspirator in MindGeek’s allegedly illegal enterprise for providing a payment network that MindGeek uses by virtue of its relationship with yet other intermediaries (i.e., the banks that actually accept and process the credit-card payments), why isn’t the U.S. Post Office also liable for providing package-delivery services that allow MindGeek to operate? Or its maintenance contractor for cleaning and maintaining its offices?

Twitter implicitly engaged in this sort of analysis when it considered becoming an OnlyFans competitor. Despite having considerable resources—both algorithmic and human—Twitter’s internal team determined they could not “accurately detect child sexual exploitation and non-consensual nudity at scale.” As a result, they abandoned the project. Similarly, Tumblr tried to make many changes, including taking down CSAM hashtags, before finally giving up and removing all pornographic material in order to remain in the App Store for iOS. At root, these firms demonstrated the ability to weigh costs and benefits in ways entirely consistent with a reasonableness analysis. 

Thinking about the MindGeek situation again, it could also be the case that MindGeek did not behave reasonably. Some of MindGeek’s sites encouraged the upload of user-generated pornography. If MindGeek experienced the same limitations in detecting “good” and “bad” pornography (which is likely), it could be that the company behaved recklessly for many years, and only tightened its verification procedures once it was caught. If true, that is behavior that should not be protected by the law with a liability shield, as it is patently unreasonable.

Apple is sometimes derided as an unfair gatekeeper of speech through its App Store. But, ironically, Apple itself has made complex tradeoffs between data security and privacy—through use of encryption, on the one hand, and checking devices for CSAM material, on the other. Prioritizing encryption over scanning devices (especially photos and messages) for CSAM is a choice that could allow for more CSAM to proliferate. But the choice is, again, a difficult one: how much moderation is needed and how do you balance such costs against other values important to users, such as privacy for the vast majority of nonoffending users?

As always, these issues are complex and involve tradeoffs. But it is obvious that more can and needs to be done by online intermediaries to remove CSAM.

But What Is ‘Reasonable’? And How Do We Get There?

The million-dollar legal question is what counts as “reasonable”? We are not unaware of the fact that, particularly when dealing with online platforms that handle millions of users a day, there is a great deal of surface area exposed to litigation by potentially illicit user-generated conduct. Thus, it is not the case, at least for the foreseeable future, that we need to throw open the gates to a full-blown common-law process to determine questions of intermediary liability. What is needed, instead, is a phased-in approach that gets courts in the business of parsing these hard questions and building up a body of principles that, on the one hand, encourage platforms to do more to control illicit content on their services and, on the other, discourage unmeritorious lawsuits by the plaintiffs’ bar.

One of our proposals for Section 230 reform is for a multistakeholder body, overseen by an expert agency like the Federal Trade Commission or National Institute of Standards and Technology, to create certified moderation policies. This would involve online intermediaries working together with a convening federal expert agency to develop a set of best practices for removing CSAM, including thinking through the cost-benefit analysis of more moderation—human or algorithmic—or even wholesale removal of nudity and pornographic content.

Compliance with these standards should, in most cases, operate to foreclose litigation against online service providers at an early stage. If such best practices are followed, a defendant could point to its moderation policies as a “certified answer” to any complaint alleging a cause of action arising out of user-generated content. Compliant practices will merit dismissal of the case, effecting a safe harbor similar to the one currently in place in Section 230.

In litigation, after a defendant answers a complaint with its certified moderation policies, the burden would shift to the plaintiff to adduce sufficient evidence to show that the certified standards were not actually adhered to. Such evidence should be more than mere res ipsa loquitur; it must be sufficient to demonstrate that the online service provider should have been aware of a harm or potential harm, that it had the opportunity to cure or prevent it, and that it failed to do so. Such a claim would need to meet a heightened pleading requirement, as for fraud, requiring particularity. And, periodically, the body overseeing the development of this process would incorporate changes to the best practices standards based on the cases being brought in front of courts.

Online service providers don’t need to be perfect in their content-moderation decisions, but they should behave reasonably. A properly designed duty-of-care standard should be flexible and account for a platform’s scale, the nature and size of its user base, and the costs of compliance, among other considerations. What is appropriate for YouTube, Facebook, or Twitter may not be the same as what’s appropriate for a startup social-media site, a web-infrastructure provider, or an e-commerce platform.

Indeed, this sort of flexibility is a benefit of adopting a “reasonableness” standard, such as is found in common-law negligence. Allowing courts to apply the flexible common-law duty of reasonable care would also enable jurisprudence to evolve with the changing nature of online intermediaries, the problems they pose, and the moderating technologies that become available.
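For readers who want the economic logic behind that duty stated compactly, the classic Learned Hand formulation of negligence from United States v. Carroll Towing is a useful reference point. It is offered here only as a familiar illustration from tort law, not as anything courts have adopted for Section 230, but it captures the comparison a reasonableness inquiry asks courts to make:

```latex
% Hand formula (United States v. Carroll Towing): omitting a precaution is
% unreasonable when its burden is less than the expected harm it would prevent.
\[
  B < P \cdot L
\]
where $B$ is the burden of the additional precaution (here including the
collateral censorship of lawful content), $P$ is the probability of the harm
absent the precaution, and $L$ is the magnitude of the resulting loss.
```

On that framing, the flexibility described above falls out naturally: both the burden of a given precaution and the expected harm it prevents vary with a platform’s scale, user base, and available moderation tools, so the same measure can be reasonable for one service and unreasonable for another.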

Conclusion

Twitter and other online intermediaries continue to struggle with the best approach to removing CSAM, nonconsensual pornography, and a whole host of other illicit content. There are no easy answers, but there are strong ethical reasons, as well as legal and market pressures, to do more. Section 230 reform is just one part of a complete regulatory framework, but it is an important part of getting intermediary liability incentives right. A reasonableness approach that would hold online platforms accountable in a cost-beneficial way is likely to be a key part of a positive reform agenda for Section 230.

In an expected decision (but with a somewhat unexpected coalition), the U.S. Supreme Court has moved 5 to 4 to vacate an order issued early last month by the 5th U.S. Circuit Court of Appeals, which stayed an earlier December 2021 order from the U.S. District Court for the Western District of Texas enjoining Texas’ attorney general from enforcing the state’s recently enacted social-media law, H.B. 20. The law would bar social-media platforms with more than 50 million active users from engaging in “censorship” based on political viewpoint. 

The shadow-docket order serves to grant the preliminary injunction sought by NetChoice and the Computer & Communications Industry Association to block the law—which they argue is facially unconstitutional—from taking effect. The trade groups also are challenging a similar Florida law, which the 11th U.S. Circuit Court of Appeals last week ruled was “substantially likely” to violate the First Amendment. Both state laws will thus be stayed while challenges on the merits proceed. 

But the element of the Supreme Court’s order drawing the most initial interest is the “strange bedfellows” breakdown that produced it. Chief Justice John Roberts was joined by conservative Justices Brett Kavanaugh and Amy Coney Barrett and liberals Stephen Breyer and Sonia Sotomayor in moving to vacate the 5th Circuit’s stay. Meanwhile, Justice Samuel Alito wrote a dissent that was joined by fellow conservatives Clarence Thomas and Neil Gorsuch, and liberal Justice Elena Kagan also dissented without offering a written justification.

A glance at the recent history, however, reveals why it should not be all that surprising that the justices would not come down along predictable partisan lines. Indeed, when it comes to content moderation and the question of whether to designate platforms as “common carriers,” the one undeniably predictable outcome is that both liberals and conservatives have been remarkably inconsistent.

Both Sides Flip-Flop on Common Carriage

Ever since Justice Thomas used his concurrence in 2021’s Biden v. Knight First Amendment Institute to lay out a blueprint for how states could regulate social-media companies as common carriers, states led by conservatives have been working to pass bills to restrict the ability of social media companies to “censor.” 

Forcing common carriage on the Internet was, not long ago, something conservatives opposed. It was progressives who called net neutrality the “21st Century First Amendment.” The actual First Amendment, however, protects the rights of both Internet service providers (ISPs) and social-media companies to decide the rules of the road on their own platforms.

Back in the heady days of 2014, when the Federal Communications Commission (FCC) was still planning its next moves on net neutrality after losing at the U.S. Court of Appeals for the D.C. Circuit the first time around, Geoffrey Manne and I at the International Center for Law & Economics teamed with Berin Szoka and Tom Struble of TechFreedom to write a piece for the First Amendment Law Review arguing that there was no exception that would render broadband ISPs “state actors” subject to the First Amendment. Further, we argued that the right to editorial discretion meant that net-neutrality regulations would be subject to (and likely fail) First Amendment scrutiny under Tornillo or Turner.

After the FCC moved to reclassify broadband as a Title II common carrier in 2015, then-Judge Kavanaugh of the D.C. Circuit dissented from the denial of en banc review, in part on First Amendment grounds. He argued that “the First Amendment bars the Government from restricting the editorial discretion of Internet service providers, absent a showing that an Internet service provider possesses market power in a relevant geographic market.” In fact, Kavanaugh went so far as to link the interests of ISPs and Big Tech (and even traditional media), stating:

If market power need not be shown, the Government could regulate the editorial decisions of Facebook and Google, of MSNBC and Fox, of NYTimes.com and WSJ.com, of YouTube and Twitter. Can the Government really force Facebook and Google and all of those other entities to operate as common carriers? Can the Government really impose forced-carriage or equal-access obligations on YouTube and Twitter? If the Government’s theory in this case were accepted, then the answers would be yes. After all, if the Government could force Internet service providers to carry unwanted content even absent a showing of market power, then it could do the same to all those other entities as well. There is no principled distinction between this case and those hypothetical cases.

This was not a controversial view among free-market, right-of-center types at the time.

An interesting shift started to occur during the presidency of Donald Trump, however, as tensions between social-media companies and many on the right came to a head. Instead of seeing these companies as private actors with strong First Amendment rights, some conservatives began looking either for ways to apply the First Amendment to them directly as “state actors” or to craft regulations that would essentially make social-media companies into common carriers with regard to speech.

But Kavanaugh’s opinion in USTelecom remains the best way forward to understand how the First Amendment applies online today, whether regarding net neutrality or social-media regulation. Given Justice Alito’s view, expressed in his dissent, that it “is not at all obvious how our existing precedents, which predate the age of the internet, should apply to large social media companies,” it is a fair bet that laws like those passed by Texas and Florida will get a hearing before the Court in the not-distant future. If Justice Kavanaugh’s opinion has sway among the conservative bloc of the Supreme Court, or is able to peel off justices from the liberal bloc, the Texas law and others like it (as well as net-neutrality regulations) will be struck down as First Amendment violations.

Kavanaugh’s USTelecom Dissent

In then-Judge Kavanaugh’s dissent, he highlighted two reasons he believed the FCC’s reclassification of broadband as Title II was unlawful. The first was that the reclassification decision was a “major question” that required clear authority delegated by Congress. The second, more important point was that the FCC’s reclassification decision was subject to the Turner standard. Under that standard, since the FCC did not engage—at the very least—in a market-power analysis, the rules could not stand, as they amounted to mandated speech.

The interesting part of this opinion is that it closely tracks the analysis of common-carriage requirements for social-media companies. Kavanaugh’s opinion offered important insights into:

  1. the applicability of the First Amendment right to editorial discretion to common carriers;
  2. the “use it or lose it” nature of this right;
  3. whether Turner’s protections depended on scarcity; and 
  4. what would be required to satisfy Turner scrutiny.

Common Carriage and First Amendment Protection

Kavanaugh found unequivocally that common carriers, such as ISPs classified under Title II, were subject to First Amendment protection under the Turner decisions:

The Court’s ultimate conclusion on that threshold First Amendment point was not obvious beforehand. One could have imagined the Court saying that cable operators merely operate the transmission pipes and are not traditional editors. One could have imagined the Court comparing cable operators to electricity providers, trucking companies, and railroads – all entities subject to traditional economic regulation. But that was not the analytical path charted by the Turner Broadcasting Court. Instead, the Court analogized the cable operators to the publishers, pamphleteers, and bookstore owners traditionally protected by the First Amendment. As Turner Broadcasting concluded, the First Amendment’s basic principles “do not vary when a new and different medium for communication appears” – although there of course can be some differences in how the ultimate First Amendment analysis plays out depending on the nature of (and competition in) a particular communications market. Brown v. Entertainment Merchants Association, 564 U.S. 786, 790 (2011) (internal quotation mark omitted).

Here, of course, we deal with Internet service providers, not cable television operators. But Internet service providers and cable operators perform the same kinds of functions in their respective networks. Just like cable operators, Internet service providers deliver content to consumers. Internet service providers may not necessarily generate much content of their own, but they may decide what content they will transmit, just as cable operators decide what content they will transmit. Deciding whether and how to transmit ESPN and deciding whether and how to transmit ESPN.com are not meaningfully different for First Amendment purposes.

Indeed, some of the same entities that provide cable television service – colloquially known as cable companies – provide Internet access over the very same wires. If those entities receive First Amendment protection when they transmit television stations and networks, they likewise receive First Amendment protection when they transmit Internet content. It would be entirely illogical to conclude otherwise. In short, Internet service providers enjoy First Amendment protection of their rights to speak and exercise editorial discretion, just as cable operators do.

‘Use It or Lose It’ Right to Editorial Discretion

Kavanaugh questioned whether the First Amendment right to editorial discretion depends, to some degree, on how much the entity used the right. Ultimately, he rejected the idea forwarded by the FCC that, since ISPs don’t restrict access to any sites, they were essentially holding themselves out to be common carriers:

I find that argument mystifying. The FCC’s “use it or lose it” theory of First Amendment rights finds no support in the Constitution or precedent. The FCC’s theory is circular, in essence saying: “They have no First Amendment rights because they have not been regularly exercising any First Amendment rights and therefore they have no First Amendment rights.” It may be true that some, many, or even most Internet service providers have chosen not to exercise much editorial discretion, and instead have decided to allow most or all Internet content to be transmitted on an equal basis. But that “carry all comers” decision itself is an exercise of editorial discretion. Moreover, the fact that the Internet service providers have not been aggressively exercising their editorial discretion does not mean that they have no right to exercise their editorial discretion. That would be akin to arguing that people lose the right to vote if they sit out a few elections. Or citizens lose the right to protest if they have not protested before. Or a bookstore loses the right to display its favored books if it has not done so recently. That is not how constitutional rights work. The FCC’s “use it or lose it” theory is wholly foreign to the First Amendment.

Employing a similar logic, Kavanaugh also rejected the notion that net-neutrality rules were essentially voluntary, given that ISPs held themselves out as carrying all content.

Relatedly, the FCC claims that, under the net neutrality rule, an Internet service provider supposedly may opt out of the rule by choosing to carry only some Internet content. But even under the FCC’s description of the rule, an Internet service provider that chooses to carry most or all content still is not allowed to favor some content over other content when it comes to price, speed, and availability. That half-baked regulatory approach is just as foreign to the First Amendment. If a bookstore (or Amazon) decides to carry all books, may the Government then force the bookstore (or Amazon) to feature and promote all books in the same manner? If a newsstand carries all newspapers, may the Government force the newsstand to display all newspapers in the same way? May the Government force the newsstand to price them all equally? Of course not. There is no such theory of the First Amendment. Here, either Internet service providers have a right to exercise editorial discretion, or they do not. If they have a right to exercise editorial discretion, the choice of whether and how to exercise that editorial discretion is up to them, not up to the Government.

Think about what the FCC is saying: Under the rule, you supposedly can exercise your editorial discretion to refuse to carry some Internet content. But if you choose to carry most or all Internet content, you cannot exercise your editorial discretion to favor some content over other content. What First Amendment case or principle supports that theory? Crickets.

In a footnote, Kavanaugh continued to lambast the theory of “voluntary regulation” forwarded by the concurrence, stating:

The concurrence in the denial of rehearing en banc seems to suggest that the net neutrality rule is voluntary. According to the concurrence, Internet service providers may comply with the net neutrality rule if they want to comply, but can choose not to comply if they do not want to comply. To the concurring judges, net neutrality merely means “if you say it, do it.”…. If that description were really true, the net neutrality rule would be a simple prohibition against false advertising. But that does not appear to be an accurate description of the rule… It would be strange indeed if all of the controversy were over a “rule” that is in fact entirely voluntary and merely proscribes false advertising. In any event, I tend to doubt that Internet service providers can now simply say that they will choose not to comply with any aspects of the net neutrality rule and be done with it. But if that is what the concurrence means to say, that would of course avoid any First Amendment problem: To state the obvious, a supposed “rule” that actually imposes no mandates or prohibitions and need not be followed would not raise a First Amendment issue.

Scarcity and Capacity to Carry Content

The FCC had also argued that there was a difference between ISPs and the cable companies in Turner in that ISPs did not face decisions about scarcity in content carriage. But Kavanaugh rejected this theory as inconsistent with the First Amendment’s right not to be compelled to carry a message or speech.

That argument, too, makes little sense as a matter of basic First Amendment law. First Amendment protection does not go away simply because you have a large communications platform. A large bookstore has the same right to exercise editorial discretion as a small bookstore. Suppose Amazon has capacity to sell every book currently in publication and therefore does not face the scarcity of space that a bookstore does. Could the Government therefore force Amazon to sell, feature, and promote every book on an equal basis, and prohibit Amazon from promoting or recommending particular books or authors? Of course not. And there is no reason for a different result here. Put simply, the Internet’s technological architecture may mean that Internet service providers can provide unlimited content; it does not mean that they must.

Keep in mind, moreover, why that is so. The First Amendment affords editors and speakers the right not to speak and not to carry or favor unwanted speech of others, at least absent sufficient governmental justification for infringing on that right… That foundational principle packs at least as much punch when you have room on your platform to carry a lot of speakers as it does when you have room on your platform to carry only a few speakers.

Turner Scrutiny and Bottleneck Market Power

Finally, Kavanaugh applied Turner scrutiny and found that, at the very least, it requires a finding of “bottleneck market power” that would allow ISPs to harm consumers. 

At the time of the Turner Broadcasting decisions, cable operators exercised monopoly power in the local cable television markets. That monopoly power afforded cable operators the ability to unfairly disadvantage certain broadcast stations and networks. In the absence of a competitive market, a broadcast station had few places to turn when a cable operator declined to carry it. Without Government intervention, cable operators could have disfavored certain broadcasters and indeed forced some broadcasters out of the market altogether. That would diminish the content available to consumers. The Supreme Court concluded that the cable operators’ market-distorting monopoly power justified Government intervention. Because of the cable operators’ monopoly power, the Court ultimately upheld the must-carry statute…

The problem for the FCC in this case is that here, unlike in Turner Broadcasting, the FCC has not shown that Internet service providers possess market power in a relevant geographic market… 

Rather than addressing any problem of market power, the net neutrality rule instead compels private Internet service providers to supply an open platform for all would-be Internet speakers, and thereby diversify and increase the number of voices available on the Internet. The rule forcibly reduces the relative voices of some Internet service and content providers and enhances the relative voices of other Internet content providers.

But except in rare circumstances, the First Amendment does not allow the Government to regulate the content choices of private editors just so that the Government may enhance certain voices and alter the content available to the citizenry… Turner Broadcasting did not allow the Government to satisfy intermediate scrutiny merely by asserting an interest in diversifying or increasing the number of speakers available on cable systems. After all, if that interest sufficed to uphold must-carry regulation without a showing of market power, the Turner Broadcasting litigation would have unfolded much differently. The Supreme Court would have had little or no need to determine whether the cable operators had market power. But the Supreme Court emphasized and relied on the Government's market power showing when the Court upheld the must-carry requirements… To be sure, the interests in diversifying and increasing content are important governmental interests in the abstract, according to the Supreme Court. But absent some market dysfunction, Government regulation of the content carriage decisions of communications service providers is not essential to furthering those interests, as is required to satisfy intermediate scrutiny.

In other words, without a finding of bottleneck market power, there would be no basis for satisfying the government interest prong of Turner.

Applying Kavanaugh’s Dissent to NetChoice v. Paxton

Interestingly, each of these main points arises in the debate over regulating social-media companies as common carriers. Texas’ H.B. 20 attempts to do exactly that, which is at the heart of the litigation in NetChoice v. Paxton.

Common Carriage and First Amendment Protection

To the first point, Texas attempts to claim in its briefs that social-media companies are common carriers subject to lesser First Amendment protection: “Assuming the platforms’ refusals to serve certain customers implicated First Amendment rights, Texas has properly denominated the platforms common carriers. Imposing common-carriage requirements on a business does not offend the First Amendment.”

But much like the cable operators before them in Turner, social-media companies are not simply carriers of persons or things like the classic examples of railroads, telegraphs, and telephones. As TechFreedom put it in its brief: "As its name suggests… 'common carriage' is about offering, to the public at large and on indiscriminate terms, to carry generic stuff from point A to point B. Social media websites fulfill none of these elements."

In a sense, the case that social-media companies are not common carriers is even clearer than it was for ISPs, because social-media platforms have always had terms of service that limit what can be said and that even allow the platforms to remove users for violations. All social-media platforms curate content for users in ways that ISPs normally do not.

‘Use It or Lose It’ Right to Editorial Discretion

Just as the FCC did in the Title II context, Texas also presses the idea that social-media companies gave up their right to editorial discretion by disclaiming the choice to exercise it, stating: “While the platforms compare their business policies to classic examples of First Amendment speech, such as a newspaper’s decision to include an article in its pages, the platforms have disclaimed any such status over many years and in countless cases. This Court should not accept the platforms’ good-for-this-case-only characterization of their businesses.” Pointing primarily to cases where social-media companies have invoked Section 230 immunity as a defense, Texas argues they have essentially lost the right to editorial discretion.

This, again, flies in the face of First Amendment jurisprudence, as Kavanaugh earlier explained. Moreover, the idea that social-media companies have disclaimed editorial discretion due to Section 230 is inconsistent with what that law actually does. Section 230 allows social-media companies to engage in as much or as little content moderation as they so choose by holding the third-party speakers accountable rather than the platform. Social-media companies do not relinquish their First Amendment rights to editorial discretion because they assert an applicable defense under the law. Moreover, social-media companies have long had rules delineating permissible speech, and they enforce those rules actively.

Interestingly, there has also been an analogue to the idea forwarded in USTelecom that the law’s First Amendment burdens are relatively limited. As noted above, then-Judge Kavanaugh rejected the idea forwarded by the concurrence that net-neutrality rules were essentially voluntary. In the case of H.B. 20, the bill’s original sponsor recently argued on Twitter that the Texas law essentially incorporates Section 230 by reference. If this is true, then the rules would be as pointless as the net-neutrality rules would have been, because social-media companies would be free under Section 230(c)(2) to remove “otherwise objectionable” material under the Texas law.

Scarcity and Capacity to Carry Content

In an earlier brief to the 5th Circuit, Texas attempted to differentiate social-media companies from the cable operators in Turner by arguing there was no necessary conflict between speakers, stating that "[HB 20] does not, for example, pit one group of speakers against another." But this is just a different way of saying that, since social-media companies don't face scarcity in their technical capacity to carry speech, they can be required to carry all speech. This is inconsistent with the right Kavanaugh identified not to carry a message or speech, which is not subject to an exception that depends on the platform's capacity to carry more speech.

Turner Scrutiny and Bottleneck Market Power

Finally, Judge Kavanaugh’s application of Turner to ISPs makes clear that a showing of bottleneck market power is necessary before common-carriage regulation may be applied to social-media companies. In fact, Kavanaugh used a comparison to social-media sites and broadcasters as a reductio ad absurdum for the idea that one could regulate ISPs without a showing of market power. As he put it there:

Consider the implications if the law were otherwise. If market power need not be shown, the Government could regulate the editorial decisions of Facebook and Google, of MSNBC and Fox, of NYTimes.com and WSJ.com, of YouTube and Twitter. Can the Government really force Facebook and Google and all of those other entities to operate as common carriers? Can the Government really impose forced-carriage or equal-access obligations on YouTube and Twitter? If the Government’s theory in this case were accepted, then the answers would be yes. After all, if the Government could force Internet service providers to carry unwanted content even absent a showing of market power, then it could do the same to all those other entities as well. There is no principled distinction between this case and those hypothetical cases.

Much like the FCC with its Open Internet Order, Texas did not make a finding of bottleneck market power in H.B. 20. Instead, Texas basically asked for the opportunity to get to discovery to develop the case that social-media platforms have market power, stating that "[b]ecause the District Court sharply limited discovery before issuing its preliminary injunction, the parties have not yet had the opportunity to develop many factual questions, including whether the platforms possess market power." This won't fly under Turner, which required a legislative finding of bottleneck market power, and no such finding exists in H.B. 20.

Moreover, bottleneck market power means more than simply "market power" in an antitrust sense. As Judge Kavanaugh put it: "Turner Broadcasting seems to require even more from the Government. The Government apparently must also show that the market power would actually be used to disadvantage certain content providers, thereby diminishing the diversity and amount of content available." Here, that would mean showing not only that social-media companies have market power, but also that they would use it to disadvantage users in ways that reduce the diversity and amount of content available.

The economics of multi-sided markets is probably the best explanation for why platforms have moderation rules. They are used to maximize a platform’s value by keeping as many users engaged and on those platforms as possible. In other words, the effect of moderation rules is to increase the amount of user speech by limiting harassing content that could repel users. This is a much better explanation for these rules than “anti-conservative bias” or a desire to censor for censorship’s sake (though there may be room for debate on the margin when it comes to the moderation of misinformation and hate speech).

In fact, social-media companies, unlike the cable operators in Turner, do not have the type of “physical connection between the television set and the cable network” that would grant them “bottleneck, or gatekeeper, control over” speech in ways that would allow platforms to “silence the voice of competing speakers with a mere flick of the switch.” Cf. Turner, 512 U.S. at 656. Even if they tried, social-media companies simply couldn’t prevent Internet users from accessing content they wish to see online; they inevitably will find such content by going to a different site or app.

Conclusion: The Future of the First Amendment Online

While many on both sides of the partisan aisle appear to see a stark divide between the interests of—and First Amendment protections afforded to—ISPs and social-media companies, Kavanaugh's opinion in USTelecom shows clearly that they are in the same boat. The two rise or fall together. If the government can impose common-carriage requirements on social-media companies in the name of free speech, then it most assuredly can when it comes to ISPs. If the First Amendment protects the editorial discretion of one, then it does for both.

The question then moves to relative market power, and whether the dominant firms in either sector can truly be said to have “bottleneck” market power, which implies the physical control of infrastructure that social-media companies certainly lack.

While it will be interesting to see what the 5th Circuit (and likely, the Supreme Court) ultimately do when reviewing H.B. 20 and similar laws, if now-Justice Kavanaugh’s dissent is any hint, there will be a strong contingent on the Court for finding the First Amendment applies online by protecting the right of private actors (ISPs and social-media companies) to set the rules of the road on their property. As Kavanaugh put it in Manhattan Community Access Corp. v. Halleck: “[t]he Free Speech Clause of the First Amendment constrains governmental actors and protects private actors.” Competition is the best way to protect consumers’ interests, not prophylactic government regulation.

With the 11th Circuit upholding the injunction against Florida's social-media law and the Supreme Court granting the emergency application to vacate the 5th Circuit's stay of the injunction in NetChoice v. Paxton, the future of the First Amendment online appears to be on firm ground. There is no basis to conclude that simply calling private actors "common carriers" reduces their right to editorial discretion under the First Amendment.

The tentative sale of Twitter to Elon Musk has been greeted with celebration by many on the right, and lamentation by some on the left, over what it portends for the platform's moderation policies. Musk, for his part, has announced that he believes Twitter should be a free-speech haven and that it needs to dial back the (allegedly politically biased) moderation in which it has engaged.

The good news for everyone is that a differentiated product at Twitter could be exactly what the market―and the debate over Big Tech―needs.

The Market for Speech Governance

As I’ve written previously, the First Amendment (bolstered by Section 230 of the Communications Decency Act) protects not only speech itself, but also the private ordering of speech. “Congress shall make no law… abridging the freedom of speech” means that state actors can’t infringe speech, but it also (in most cases) protects private actors’ ability to make such rules free from government regulation. As the Supreme Court has repeatedly held, private actors can make their own rules about speech on their own property.

As Justice Brett Kavanaugh put it on behalf of the Court in Manhattan Community Access Corp. v. Halleck:

[W]hen a private entity provides a forum for speech, the private entity is not ordinarily constrained by the First Amendment because the private entity is not a state actor. The private entity may thus exercise editorial discretion over the speech and speakers in the forum…

In short, merely hosting speech by others is not a traditional, exclusive public function and does not alone transform private entities into state actors subject to First Amendment constraints.

If the rule were otherwise, all private property owners and private lessees who open their property for speech would be subject to First Amendment constraints and would lose the ability to exercise what they deem to be appropriate editorial discretion within that open forum. Private property owners and private lessees would face the unappetizing choice of allowing all comers or closing the platform altogether.

In other words, as much as it protects “the marketplace of ideas,” the First Amendment also protects “the market for speech governance.” Musk’s idea that Twitter should be subject to the First Amendment is simply incoherent, but his vision for Twitter to have less politically biased content moderation could work.

Musk’s Plan for Twitter

There has been much commentary on what Musk intends to do, and whether it is a realistic way to maximize the platform’s value. As a multi-sided platform, Twitter’s revenue is driven by advertisers, who want to reach a mass audience. This means Twitter, much like other social-media platforms, must consider the costs and benefits of speech to its users, and strike a balance that maximizes the value of the platform. The history of social-media content moderation suggests that these platforms have found that rules against harassment, abuse, spam, bots, pornography, and certain hate speech and misinformation are necessary.

For rules pertaining to harassment and abuse, in particular, it is easy to understand why they are necessary to avoid losing users. There seems to be a wide societal consensus that such speech is intolerable. Similarly, spam, bots, and pornographic content, even if legal speech, are largely not what social-media users want to see.

But for hate speech and misinformation, however much one agrees in the abstract about their undesirability, there is significant debate on the margins about what is acceptable or unacceptable discourse, just as there is over what is true or false when it comes to hot-button social and political issues. It is one thing to ban Nazis due to hate speech; it is arguably quite another to remove a prominent feminist author for "misgendering" people. It is also one thing to say crazy conspiracy theories like QAnon should be moderated, but quite another to fact-check good-faith questioning of the efficacy of masks or vaccines. It is likely in these areas that Musk will offer an alternative to what is largely seen as biased content moderation from Big Tech companies.

Musk appears to be making a bet that the market for speech governance is currently not well-served by the major competitors in the social-media space. If Twitter could thread the needle by offering a more politically neutral moderation policy that still manages to keep off the site enough of the types of content that repel users, then it could conceivably succeed and even influence the moderation policies of other social-media companies.

Let the Market Decide

The crux of the issue is this: Conservatives who have backed antitrust and regulatory action against Big Tech because of political bias concerns should be willing to back off and allow the market to work. And liberals who have defended the right of private companies to make rules for their platforms should continue to defend that principle. Let the market decide.

[The following post was adapted from the International Center for Law & Economics White Paper "Polluting Words: Is There a Coasean Case to Regulate Offensive Speech?"]

Words can wound. They can humiliate, anger, insult.

University students—or, at least, a vociferous minority of them—are keen to prevent this injury by suppressing offensive speech. To ensure campuses are safe places, they militate for the cancellation of talks by speakers with opinions they find offensive, often successfully. And they campaign to get offensive professors fired from their jobs.

Off campus, some want this safety to be extended to the online world and, especially, to the users of social media platforms such as Twitter and Facebook. In the United States, this would mean weakening the legal protections of offensive speech provided by Section 230 of the Communications Decency Act (as President Joe Biden has recommended) or by the First Amendment. In the United Kingdom, the Online Safety Bill is now before Parliament. If passed, it will give a U.K. government agency the power to dictate the content-moderation policies of social media platforms.

You don’t need to be a woke university student or grandstanding politician to suspect that society suffers from an overproduction of offensive speech. Basic economics provides a reason to suspect it—the reason being that offense is an external cost of speech. The cost is borne not by the speaker but by his audience. And when people do not bear all the costs of an action, they do it too much.

Jack tweets “women don’t have penises.” This offends Jill, who is someone with a penis who considers herself (or himself, if Jack is right) to be a woman. And it offends many others, who agree with Jill that Jack is indulging in ugly transphobic biological essentialism. Lacking Bill Clinton’s facility for feeling the pain of others, Jack does not bear this cost. So, even if it exceeds whatever benefit Jack gets from saying that women don’t have penises, he will still say it. In other words, he will say it even when doing so makes society altogether worse off.

It shouldn’t be allowed!

That's what we normally say when actions harm others more than they benefit the agent. The law normally conforms to John Stuart Mill's "Harm Principle" by restricting activities—such as shooting people or treating your neighbors to death metal at 130 decibels at 2 a.m.—with material external costs. Those who seek legal reform to restrict offensive speech are surely doing no more than following an accepted general principle.

But it’s not so simple. As Ronald Coase pointed out in his famous 1960 article “The Problem of Social Cost,” externalities are a reciprocal problem. If Wayne had no neighbors, his playing death metal at 130 decibels at 2 a.m. would have no external costs. Their choice of address is equally a source of the problem. Similarly, if Jill weren’t a Twitter user, she wouldn’t have been offended by Jack’s tweet about who has a penis, since she wouldn’t have encountered it. Externalities are like tangos: they always have at least two perpetrators.

So, the legal question, “who should have a right to what they want?”—Wayne to his loud music or his neighbors to their sleep; Jack to expressing his opinion about women or Jill to not hearing such opinions—cannot be answered by identifying the party who is responsible for the external cost. Both parties are responsible.

How, then, should the question be answered? In the same paper, Coase showed that, in certain circumstances, who the courts favor will make no difference to what ends up happening, and that what ends up happening will be efficient. Suppose the court says that Wayne cannot bother his neighbors with death metal at 2 a.m. If Wayne would be willing to pay $100,000 to keep doing it and his neighbors, combined, would put up with it for anything more than $95,000, then they should be able to arrive at a mutually beneficial deal whereby Wayne pays them something between $95,000 and $100,000 to forgo their right to stop him making his dreadful noise.

That's not exactly right. If negotiating a deal would cost more than $5,000, then no mutually beneficial deal is possible and the rights-trading won't happen. Transaction costs lower than the gap between the two parties' valuations are thus the circumstance in which the allocation of legal rights makes no difference to how resources get used, and in which efficiency will be achieved in any event.
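To make the arithmetic concrete, here is a minimal Python sketch of that bargaining logic, using the illustrative figures from the example above (the `coasean_bargain_possible` helper is hypothetical, not drawn from Coase's paper): a rights trade happens only when the gap between the parties' valuations exceeds the cost of negotiating it.

```python
def coasean_bargain_possible(payer_value: float,
                             holder_min_compensation: float,
                             transaction_cost: float) -> bool:
    """A bargain is mutually beneficial only when the surplus (what the
    non-right-holder would pay minus what the right-holder demands)
    exceeds the cost of negotiating the deal."""
    surplus = payer_value - holder_min_compensation
    return surplus > transaction_cost


# Hypothetical figures from the text: Wayne values the 2 a.m. death metal
# at $100,000; his neighbors would tolerate it for anything above $95,000.
print(coasean_bargain_possible(100_000, 95_000, 1_000))  # True: the deal gets done
print(coasean_bargain_possible(100_000, 95_000, 6_000))  # False: transaction costs swamp the surplus
```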

But it is an unusual circumstance, especially when the external cost is suffered by many people. When the transaction cost is too high, efficiency does depend on the allocation of rights by courts or legislatures. As Coase argued, when this is so, efficiency will be served if a right to the disputed resource is granted to the party with the higher cost of avoiding the externality.

Given the (implausible) valuations Wayne and his neighbors place on the amount of noise in their environment at 2 a.m., efficiency is served by giving Wayne the right to play his death metal, unless he could soundproof his house or play his music at a much lower volume or take some other avoidance measure that costs him less than the $95,000 cost to his neighbors.

And given that Jack’s tweet about penises offends a large open-ended group of people, with whom Jack therefore cannot negotiate, it looks like they should be given the right not to be offended by Jack’s comment and he should be denied the right to make it. Coasean logic supports the woke censors!          

But, again, it’s not that simple—for two reasons.

The first is that, although those who are offended may be harmed by the offending speech, they need not be. Physical pain is usually harmful, but not when experienced by a sexual masochist (in the right circumstances, of course). Similarly, many people take masochistic pleasure in being offended. You can tell they do, because they actively seek out the sources of their suffering. They are genuinely offended, but the offense isn't harming them, just as the sexual masochist really is in physical pain but isn't harmed by it. Indeed, real pain and real offense are required, respectively, for the satisfaction of the sexual masochist and the offense masochist.

How many of the offended are offense masochists? Where the offensive speech can be avoided at minimal cost, the answer must be most. Why follow Jordan Peterson on Twitter when you find his opinions offensive unless you enjoy being offended by him? Maybe some are keeping tabs on the dreadful man so that they can better resist him, and they take the pain for that reason rather than for masochistic glee. But how could a legislator or judge know? For all they know, most of those offended by Jordan Peterson are offense masochists and the offense he causes is a positive externality.

The second reason Coasean logic doesn’t support the would-be censors is that social media platforms—the venues of offensive speech that they seek to regulate—are privately owned. To see why this is significant, consider not offensive speech, but an offensive action, such as openly masturbating on a bus.

This is prohibited by law. But it is not the mere act that is illegal. You are allowed to masturbate in the privacy of your bedroom. You may not masturbate on a bus because those who are offended by the sight of it cannot easily avoid it. That’s why it is illegal to express obscenities about Jesus on a billboard erected across the road from a church but not at a meeting of the Angry Atheists Society. The laws that prohibit offensive speech in such circumstances—laws against public nuisance, harassment, public indecency, etc.—are generally efficient. The cost they impose on the offenders is less than the benefits to the offended.

But they are unnecessary when the giving and taking of offense occur within a privately owned place. Suppose no law prohibited masturbating on a bus. It still wouldn’t be allowed on buses owned by a profit-seeker. Few people want to masturbate on buses and most people who ride on buses seek trips that are masturbation-free. A prohibition on masturbation will gain the owner more customers than it loses him. The prohibition is simply another feature of the product offered by the bus company. Nice leather seats, punctual departures, and no wankers (literally). There is no more reason to believe that the bus company’s passenger-conduct rules will be inefficient than that its other product features will be and, therefore, no more reason to legally stipulate them.

The same goes for the content-moderation policies of social media platforms. They are just another product feature offered by a profit-seeking firm. If they repel more customers than they attract (or, more accurately, if they repel more advertising revenue than they attract), they would be inefficient. But then, of course, the company would not adopt them.

Of course, the owner of a social media platform might not be a pure profit-maximizer. For example, he might forgo $10 million in advertising revenue for the sake of banning speakers he personally finds offensive. But the outcome is still efficient. Allowing the speech would have cost more by way of the owner's unhappiness than the lost advertising would have been worth. And such powerful feelings in the owner of a platform create an opportunity for competitors who do not share his feelings. They can offer a platform that does not ban the offensive speakers and, if enough people want to hear what they have to say, attract users and the advertising revenue that comes with them.

If efficiency is your concern, there is no problem for the authorities to solve. Indeed, the idea that the authorities would do a better job of deciding content-moderation rules is not merely absurd, but alarming. Politicians and the bureaucrats who answer to them or are appointed by them would use the power not to promote efficiency, but to promote agendas congenial to them. Jurisprudence in liberal democracies—and, especially, in America—has been suspicious of governmental control of what may be said. Nothing about social media provides good reason to become any less suspicious.

In recent years, a diverse cross-section of advocates and politicians have leveled criticisms at Section 230 of the Communications Decency Act and its grant of legal immunity to interactive computer services. Proposed legislative changes to the law have been put forward by both Republicans and Democrats.

It remains unclear whether Congress (or the courts) will amend Section 230, but any changes are bound to expand the scope, uncertainty, and expense of content risks. That’s why it’s important that such changes be developed and implemented in ways that minimize their potential to significantly disrupt and harm online activity. This piece focuses on those insurable content risks that most frequently result in litigation and considers the effect of the direct and indirect costs caused by frivolous suits and lawfare, not just the ultimate potential for a court to find liability. The experience of the 1980s asbestos-litigation crisis offers a warning of what could go wrong.

Enacted in 1996, Section 230 was intended to promote the Internet as a diverse medium for discourse, cultural development, and intellectual activity by shielding interactive computer services from legal liability when blocking or filtering access to obscene, harassing, or otherwise objectionable content. Absent such immunity, a platform hosting content produced by third parties could be held just as responsible as the creator for claims alleging defamation or invasion of privacy.

In the current legislative debates, Section 230’s critics on the left argue that the law does not go far enough to combat hate speech and misinformation. Critics on the right claim the law protects censorship of dissenting opinions. Legal challenges to the current wording of Section 230 arise primarily from what constitutes an “interactive computer service,” “good faith” restriction of content, and the grant of legal immunity, regardless of whether the restricted material is constitutionally protected. 

While Congress and various stakeholders debate alternative statutory frameworks, several test cases simultaneously have been working their way through the judicial system, and some states have either passed or are considering legislation to address complaints about Section 230. Some have suggested passing new federal legislation classifying online platforms as common carriers as an alternate approach that does not involve amending or repealing Section 230. Regardless of the form it may take, change to the status quo is likely to increase the risk of litigation and liability for those hosting or publishing third-party content.

The Nature of Content Risk

The class of individuals and organizations exposed to content risk has never been broader. Any information, content, or communication that is created, gathered, compiled, or amended can be considered “material” which, when disseminated to third parties, may be deemed “publishing.” Liability can arise from any step in that process. Those who republish material are generally held to the same standard of liability as if they were the original publisher. (See, e.g., Rest. (2d) of Torts § 578 with respect to defamation.)

Digitization has simultaneously reduced the cost and expertise required to publish material and increased the potential reach of that material. Where it was once limited to books, newspapers, and periodicals, “publishing” now encompasses such activities as creating and updating a website; creating a podcast or blog post; or even posting to social media. Much of this activity is performed by individuals and businesses who have only limited experience with the legal risks associated with publishing.

This is especially true regarding the use of third-party material, which is used extensively by both sophisticated and unsophisticated platforms. Platforms that host third-party-generated content—e.g., social media or websites with comment sections—have historically engaged in only limited vetting of that content, although this is changing. When combined with the potential to reach consumers far beyond the original platform and target audience, the lasting digital traces that are difficult to identify and remove, and the need to comply with privacy and other statutory requirements, the potential for all manner of "publishers" to incur legal liability has never been higher.

Even sophisticated legacy publishers struggle with managing the litigation that arises from these risks. There are a limited number of specialist counsel, which results in higher hourly rates. Oversight of legal bills is not always effective, as internal counsel often have limited resources to manage their daily responsibilities and litigation. As a result, legal fees often make up as much as two-thirds of the average claims cost. Accordingly, defense spending and litigation management are indirect, but important, risks associated with content claims.

Effective risk management is any publisher’s first line of defense. The type and complexity of content risk management varies significantly by organization, based on its size, resources, activities, risk appetite, and sophistication. Traditional publishers typically have a formal set of editorial guidelines specifying policies governing the creation of content, pre-publication review, editorial-approval authority, and referral to internal and external legal counsel. They often maintain a library of standardized contracts; have a process to periodically review and update those wordings; and a process to verify the validity of a potential licensor’s rights. Most have formal controls to respond to complaints and to retraction/takedown requests.

Insuring Content Risks

Insurance is integral to most publishers' risk-management plans. Content coverage is present, to some degree, in most general liability policies (i.e., for "advertising liability"). Specialized coverage—commonly referred to as "media" or "media E&O"—is available on a standalone basis or may be packaged with cyber-liability coverage. Terms of specialized coverage can vary significantly, but generally provide at least basic coverage for the three primary content risks of defamation, copyright infringement, and invasion of privacy.

Insureds typically retain losses from the first dollar up to a specific threshold (the retention). They may also retain a coinsurance percentage of every dollar thereafter in partnership with their insurer. For example, an insured may be responsible for the first $25,000 of loss, and for 10% of loss above that threshold. Such coinsurance structures often are used by insurers as a non-monetary tool to help control legal spending and to incentivize an organization to employ effective oversight of counsel's billing practices.

The type and amount of loss retained will depend on the insured's size, resources, risk profile, risk appetite, and insurance budget. Generally, but not always, increases in an insured's retention, an insurer's attachment point, or the insured's coinsurance share (e.g., raising the threshold to $50,000, or raising the insured's coinsurance to 15%) will result in lower premiums. Most insureds will seek the smallest retention feasible within their budget.
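As a rough illustration of how such a structure allocates a claim, here is a minimal Python sketch using the hypothetical $25,000 retention and 10% coinsurance from the example above (the `split_loss` helper is illustrative; real policy wordings add limits, sublimits, and other terms):

```python
def split_loss(loss: float, retention: float, coinsurance: float) -> tuple[float, float]:
    """Split a covered loss between insured and insurer under a simple
    retention-plus-coinsurance structure (policy limits ignored)."""
    if loss <= retention:
        return loss, 0.0
    excess = loss - retention
    insured_share = retention + coinsurance * excess
    insurer_share = (1 - coinsurance) * excess
    return insured_share, insurer_share


# Hypothetical claim of $250,000 with a $25,000 retention and 10% coinsurance:
# the insured bears $47,500 and the insurer pays $202,500.
print(split_loss(250_000, 25_000, 0.10))  # (47500.0, 202500.0)
```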

Contract limits (the maximum coverage payout available) will vary based on the same factors. Larger policyholders often build a “tower” of insurance made up of multiple layers of the same or similar coverage issued by different insurers. Two or more insurers may partner on the same “quota share” layer and split any loss incurred within that layer on a pre-agreed proportional basis.  
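A similarly simplified sketch shows how a loss might flow up a tower, with the excess layer written on a quota-share basis (the attachment points, limits, and participant shares below are hypothetical):

```python
def allocate_to_tower(loss: float, layers: list) -> dict:
    """Allocate a loss up a tower of layers. Each layer is a tuple of
    (attachment, limit, shares), where shares maps each quota-share
    participant to its agreed proportion of that layer."""
    payouts = {}
    for attachment, limit, shares in layers:
        layer_loss = max(0.0, min(loss - attachment, limit))
        for insurer, share in shares.items():
            payouts[insurer] = payouts.get(insurer, 0.0) + share * layer_loss
    return payouts


# Hypothetical tower: a $5M primary layer from one insurer, then a $10M
# excess layer split 60/40 between two insurers on a quota-share basis.
tower = [
    (0, 5_000_000, {"Primary Co": 1.0}),
    (5_000_000, 10_000_000, {"Excess A": 0.6, "Excess B": 0.4}),
]
print(allocate_to_tower(9_000_000, tower))
# {'Primary Co': 5000000.0, 'Excess A': 2400000.0, 'Excess B': 1600000.0}
```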

Navigating the strategic choices involved in developing an insurance program can be complex, depending on an organization's risks. Policyholders often use commercial brokers to aid them in developing an appropriate risk-management and insurance strategy that maximizes coverage within their budget and to assist with claims recoveries. This is particularly important for small and mid-sized insureds who may lack the sophistication or budget of larger organizations. Policyholders and brokers try to minimize the gaps in coverage between layers and among quota-share participants, but such gaps can occur, leaving a policyholder partially self-insured.

An organization's options to insure its content risk may also be influenced by the dynamics of the overall insurance market or within specific content lines. Underwriters are not all created equal; underwriting is a challenging responsibility that requires prediction, and some underwriters may fail to adequately identify and account for certain risks. It can also be challenging to accurately measure risk aggregation and set appropriate reserves. An insurer's appetite for certain lines and the availability of supporting reinsurance can fluctuate based on trends in the general capital markets. Specialty media/content coverage is a small niche within the global commercial insurance market, which makes insurers in this line more sensitive to these general trends.

Litigation Risks from Changes to Section 230

A full repeal or judicial invalidation of Section 230 generally would make every platform responsible for all the content it disseminates, regardless of who created the material, requiring at least some additional editorial review. This would significantly disadvantage those platforms that host a significant volume of third-party content. Internet service providers, cable companies, social media, and product/service review companies would be put under tremendous strain, given the daily volume of content produced. To reduce the risk that they serve as a "deep pocket" target for plaintiffs, they would likely adopt more robust pre-publication screening of content and authorized third parties; limit public interfaces; require registration before a user may publish content; employ more reactive complaint response/takedown policies; and ban problem users more frequently. Small and mid-sized enterprises (SMEs), as well as those not focused primarily on the business of publishing, would likely avoid many interactive functions altogether.

A full repeal would be, in many ways, a blunderbuss approach to dealing with criticisms of Section 230, and would cause at least as many problems as it solves. In the current polarized environment, it also appears unlikely that Congress will reach bipartisan agreement on amended language for Section 230, or on classifying interactive computer services as common carriers, given that the changes desired by the political left and right are so divergent. What may be more likely is that courts encounter a test case that prompts them to clarify the application of the existing statutory language—i.e., whether an entity was acting as a neutral platform or a content creator, whether its conduct was in "good faith," and whether the material is "objectionable" within the meaning of the statute.

A relatively greater frequency of litigation is almost inevitable in the wake of any changes to the status quo, whether made by Congress or the courts. Major litigation would likely focus on those social-media platforms at the center of the Section 230 controversy, such as Facebook and Twitter, given their active role in these issues, deep pockets and, potentially, various admissions against interest helpful to plaintiffs regarding their level of editorial judgment. SMEs could also be affected in the immediate wake of a change to the statute or its interpretation. While SMEs are likely to be implicated on a smaller scale, the impact of litigation could be even more damaging to their viability if they are not adequately insured.

Over time, the boundaries of an amended Section 230’s application and any consequential effects should become clearer as courts develop application criteria and precedent is established for different fact patterns. Exposed platforms will likely make changes to their activities and risk-management strategies consistent with such developments. Operationally, some interactive features—such as comment sections or product and service reviews—may become less common.

In the short and medium term, however, a period of increased and unforeseen litigation to resolve these issues is likely to prove expensive and damaging. Insurers of content risks are likely to bear the brunt of any changes to Section 230, because these risks and their financial costs would be new, uncertain, and not incorporated into historical pricing of content risk. 

Remembering the Asbestos Crisis

The introduction of a new exposure or legal risk can have significant financial effects on commercial insurance carriers. New and revised risks must be accounted for in the assumptions, probabilities, and load factors used in insurance pricing and reserving models. Even small changes in those values can have large aggregate effects, which may undermine confidence in those models, complicate obtaining reinsurance, or harm an insurer’s overall financial health.

For example, in the 1980s, certain courts adopted the triple-trigger and continuous trigger methods[1] of determining when a policyholder could access coverage under an “occurrence” policy for asbestos claims. As a result, insurers paid claims under policies dating back to the early 1900s and, in some cases, under all policies from that date until the date of the claim. Such policies were written when mesothelioma related to asbestos was unknown and not incorporated into the policy pricing.

Insurers had long since released reserves from the decades-old policy years, so those resources were not available to pay claims. Nor could underwriters retroactively increase premiums for the intervening years and smooth out the cost of these claims. This created extreme financial stress for impacted insurers and reinsurers, with some ultimately rendered insolvent. Surviving carriers responded by drastically reducing coverage and increasing prices, which resulted in a major capacity shortage that resolved only after the creation of the Bermuda insurance and reinsurance market.

The asbestos-related liability crisis represented a perfect storm that is unlikely to be replicated. Given the ubiquitous nature of digital content, however, any drastic or misconceived changes to Section 230 protections could still cause significant disruption to the commercial insurance market. 

Content risk is covered, at least in part, by general liability and many cyber policies, but it is not currently a primary focus for underwriters. Specialty media underwriters are more likely to be monitoring Section 230 risk, but the highly competitive market will make it difficult for them to respond to any changes with significant price increases. In addition, the current market environment for U.S. property and casualty insurance generally is in the midst of correcting for years of inadequate pricing, expanding coverage, developing exposures, and claims inflation. It would be extremely difficult to charge an adequate premium increase if the potential severity of content risk were to increase suddenly.

In the face of such risk uncertainty and challenges to adequately increasing premiums, underwriters would likely seek to reduce their exposure to online content risks, i.e., by reducing the scope of coverage, reducing limits, and increasing retentions. How these changes would manifest, and how much pain they would cause for all involved, would likely depend on how quickly policyholders' risk profiles change.

Small or specialty carriers caught unprepared could be forced to exit the market if they experienced a sharp spike in claims or unexpected increase in needed reserves. Larger, multiline carriers may respond by voluntarily reducing or withdrawing their participation in this space. Insurers exposed to ancillary content risk may simply exclude it from cover if adequate price increases are impractical. Such reactions could result in content coverage becoming harder to obtain or unavailable altogether. This, in turn, would incentivize organizations to limit or avoid certain digital activities.

Finding a More Thoughtful Approach

The tension between calls for reform of Section 230 and the potential for disrupting online activity does not mean that political leaders and courts should ignore these issues. Rather, it means that what's required is a thoughtful, clear, and predictable approach to any changes, with the goal of maximizing the clarity of the changes and their application and minimizing any resulting litigation. Regardless of whether accomplished through legislation or the judicial process, addressing the following issues could minimize the duration and severity of any period of harmful disruption regarding content risk:

  1. Presumptive immunity – Including an express statement in the definition of "interactive computer service," or inferring one judicially, to clarify that platforms hosting third-party content enjoy a rebuttable presumption that statutory immunity applies would discourage frivolous litigation as courts establish precedent defining the applicability of any other revisions.
  2. Specify the grounds for losing immunity – Clarify, at a minimum, what constitutes "good faith" with respect to content restrictions and further clarify what material is or is not "objectionable," as it relates to newsworthy content or actions that trigger loss of immunity.
  3. Specify the scope and duration of any loss of immunity – Clarify whether the loss of immunity is total, categorical, or specific to the situation under review and the duration of that loss of immunity, if applicable.
  4. Reinstatement of immunity, subject to burden-shifting – Clarify what a platform must do to reinstate statutory immunity on a go-forward basis and clarify that it bears the burden of proving its go-forward conduct entitled it to statutory protection.
  5. Address associated issues – Any clarification or interpretation should address other issues likely to arise, such as the effect and weight to be given to a platform's application of its community standards, adherence to neutral takedown/complaint procedures, etc. Care should be taken to avoid overcorrecting and creating a "heckler's veto."
  6. Deferred effect – If change is made legislatively, the effective date should be deferred for a reasonable time to allow platforms sufficient opportunity to adjust their current risk-management policies, contractual arrangements, content publishing and storage practices, and insurance arrangements in a thoughtful, orderly fashion that accounts for the new rules.

Ultimately, legislative and judicial stakeholders will chart their own course to address the widespread dissatisfaction with Section 230. More important than any of these specific policy suggestions is the principle that underpins them: that any changes incorporate due consideration for the potential direct and downstream harm that can be caused if policy is not clear, comprehensive, and designed to minimize unnecessary litigation.

It is no surprise that, in the years since Section 230 of the Communications Decency Act was passed, the environment and risks associated with digital platforms have evolved, or that those changes have created a certain amount of friction in the law's application. Policymakers should employ a holistic approach when evaluating their legislative and judicial options to revise or clarify the application of Section 230. Doing so in a targeted, predictable fashion should help to mitigate or avoid the risk of increased litigation and other unintended consequences that might otherwise prove harmful to online platforms and the commercial insurance market.

Aaron Tilley is a senior insurance executive with more than 16 years of commercial insurance experience in executive management, underwriting, legal, and claims, working in or with the U.S., Bermuda, and London markets. He has served as chief underwriting officer of a specialty media E&O and cyber-liability insurer and as coverage counsel representing international insurers with respect to a variety of E&O and advertising liability claims.


[1] The triple-trigger method allowed a policy to be accessed based on the date of the injury-in-fact, manifestation of injury, or exposure to substances known to cause injury. The continuous trigger allowed all policies issued by an insurer, not just one, to be accessed if a triggering event could be established during the policy period.

In what has become regularly scheduled programming on Capitol Hill, Facebook CEO Mark Zuckerberg, Twitter CEO Jack Dorsey, and Google CEO Sundar Pichai will be subject to yet another round of congressional grilling—this time, about the platforms’ content-moderation policies—during a March 25 joint hearing of two subcommittees of the House Energy and Commerce Committee.

The stated purpose of this latest bit of political theatre is to explore, as made explicit in the hearing’s title, “social media’s role in promoting extremism and misinformation.” Specific topics are expected to include proposed changes to Section 230 of the Communications Decency Act, heightened scrutiny by the Federal Trade Commission, and misinformation about COVID-19—the subject of new legislation introduced by Rep. Jennifer Wexton (D-Va.) and Sen. Mazie Hirono (D-Hawaii).

But while many in the Democratic majority argue that social media companies have not done enough to moderate misinformation or hate speech, it is a problem with no realistic legal fix. Any attempt to mandate removal of speech on grounds that it is misinformation or hate speech, either directly or indirectly, would run afoul of the First Amendment.

Much of the recent focus has been on misinformation spread on social media about the 2020 election and the COVID-19 pandemic. The memorandum for the March 25 hearing sums it up:

Facebook, Google, and Twitter have long come under fire for their role in the dissemination and amplification of misinformation and extremist content. For instance, since the beginning of the coronavirus disease of 2019 (COVID-19) pandemic, all three platforms have spread substantial amounts of misinformation about COVID-19. At the outset of the COVID-19 pandemic, disinformation regarding the severity of the virus and the effectiveness of alleged cures for COVID-19 was widespread. More recently, COVID-19 disinformation has misrepresented the safety and efficacy of COVID-19 vaccines.

Facebook, Google, and Twitter have also been distributors for years of election disinformation that appeared to be intended either to improperly influence or undermine the outcomes of free and fair elections. During the November 2016 election, social media platforms were used by foreign governments to disseminate information to manipulate public opinion. This trend continued during and after the November 2020 election, often fomented by domestic actors, with rampant disinformation about voter fraud, defective voting machines, and premature declarations of victory.

It is true that, despite social media companies’ efforts to label and remove false content and bar some of the biggest purveyors, there remains a considerable volume of false information on social media. But U.S. Supreme Court precedent consistently has limited government regulation of false speech to distinct categories like defamation, perjury, and fraud.

The Case of Stolen Valor

The court's 2012 decision in United States v. Alvarez struck down as unconstitutional the Stolen Valor Act of 2005, which made it a federal crime to falsely claim to have earned a military medal. A four-justice plurality opinion written by Justice Anthony Kennedy and a two-justice concurrence both agreed that a statement's falsity does not, by itself, exclude it from First Amendment protection.

Kennedy’s opinion noted that while the government may impose penalties for false speech connected with the legal process (perjury or impersonating a government official); with receiving a benefit (fraud); or with harming someone’s reputation (defamation); the First Amendment does not sanction penalties for false speech, in and of itself. The plurality exhibited particular skepticism toward the notion that government actors could be entrusted as a “Ministry of Truth,” empowered to determine what categories of false speech should be made illegal:

Permitting the government to decree this speech to be a criminal offense, whether shouted from the rooftops or made in a barely audible whisper, would endorse government authority to compile a list of subjects about which false statements are punishable. That governmental power has no clear limiting principle. Our constitutional tradition stands against the idea that we need Oceania’s Ministry of Truth… Were this law to be sustained, there could be an endless list of subjects the National Government or the States could single out… Were the Court to hold that the interest in truthful discourse alone is sufficient to sustain a ban on speech, absent any evidence that the speech was used to gain a material advantage, it would give government a broad censorial power unprecedented in this Court’s cases or in our constitutional tradition. The mere potential for the exercise of that power casts a chill, a chill the First Amendment cannot permit if free speech, thought, and discourse are to remain a foundation of our freedom. [EMPHASIS ADDED]

As noted in the opinion, declaring false speech illegal constitutes a content-based restriction subject to “exacting scrutiny.” Applying that standard, the court found “the link between the Government’s interest in protecting the integrity of the military honors system and the Act’s restriction on the false claims of liars like respondent has not been shown.” 

While finding that the government "has not shown, and cannot show, why counterspeech would not suffice to achieve its interest," the plurality suggested a more narrowly tailored solution could be simply to publish a list of Medal of Honor recipients in an online database. In other words, the government could overcome the problem of false speech by promoting true speech.

In 2013, President Barack Obama signed an updated version of the Stolen Valor Act that limited its penalties to situations where a misrepresentation is shown to result in receipt of some kind of benefit. That places the false speech in the category of fraud, consistent with the Alvarez opinion.

A Social Media Ministry of Truth

Applying the Alvarez standard to social media, the government could (and already does) promote its interest in public health or election integrity by publishing true speech through official channels. But there is little reason to believe the government at any level could permissibly regulate access to misinformation. Anything approaching an outright ban on accessing speech deemed false by the government would not only fail to be the most narrowly tailored way to deal with such speech, but would also be bound to have chilling effects even on true speech.

The analysis doesn't change if the government instead places Big Tech itself in the position of Ministry of Truth. Some propose making changes to Section 230, which currently immunizes social media companies from liability for user speech (with limited exceptions), regardless of what moderation policies the platform adopts. A hypothetical change might condition Section 230's liability shield on platforms agreeing to moderate certain categories of misinformation. But that would still place the government in the position of coercing platforms to take down speech.

Even the “fix” of making social media companies liable for user speech they amplify through promotions on the platform, as proposed by Sen. Mark Warner’s (D-Va.) SAFE TECH Act, runs into First Amendment concerns. The bill would treat sponsored content as speech made by the platform itself, thus opening the platform to liability for the underlying misinformation. But any such liability would still be limited to categories of speech that fall outside First Amendment protection, like fraud or defamation. That would not appear to include most of the types of misinformation about COVID-19 or election security that animate the current legislative push.

There is no way for the government to regulate misinformation, in and of itself, consistent with the First Amendment. Big Tech companies are free to develop their own policies against misinformation, but the government may not force them to do so. 

Extremely Limited Room to Regulate Extremism

The Big Tech CEOs are also almost certain to be grilled about the use of social media to spread “hate speech” or “extremist content.” The memorandum for the March 25 hearing sums it up like this:

Facebook executives were repeatedly warned that extremist content was thriving on their platform, and that Facebook’s own algorithms and recommendation tools were responsible for the appeal of extremist groups and divisive content. Similarly, since 2015, videos from extremists have proliferated on YouTube; and YouTube’s algorithm often guides users from more innocuous or alternative content to more fringe channels and videos. Twitter has been criticized for being slow to stop white nationalists from organizing, fundraising, recruiting and spreading propaganda on Twitter.

Social media has often played host to racist, sexist, and other types of vile speech. While social media companies have community standards and other policies that restrict “hate speech” in some circumstances, there is demand from some public officials that they do more. But under a First Amendment analysis, regulating hate speech on social media would fare no better than the regulation of misinformation.

The First Amendment doesn’t allow for the regulation of “hate speech” as its own distinct category. Hate speech is, in fact, as protected as any other type of speech. There are some limited exceptions, as the First Amendment does not protect incitement, true threats of violence, or “fighting words.” Some of these flatly do not apply in the online context. “Fighting words,” for instance, applies only in face-to-face situations to “those personally abusive epithets which, when addressed to the ordinary citizen, are, as a matter of common knowledge, inherently likely to provoke violent reaction.”

One relevant precedent is the court’s 1992 decision in R.A.V. v. St. Paul, which considered a local ordinance in St. Paul, Minnesota, prohibiting public expressions that served to cause “outrage, alarm, or anger with respect to racial, gender or religious intolerance.” A juvenile was charged with violating the ordinance when he created a makeshift cross and lit it on fire in front of a black family’s home. The court unanimously struck down the ordinance as a violation of the First Amendment, finding it an impermissible content-based restraint that was not limited to incitement or true threats.

By contrast, in 2003’s Virginia v. Black, the Supreme Court upheld a Virginia law outlawing cross burnings done with the intent to intimidate. The court’s opinion distinguished R.A.V. on grounds that the Virginia statute didn’t single out speech regarding disfavored topics. Instead, it was aimed at speech that had the intent to intimidate regardless of the victim’s race, gender, religion, or other characteristic. But the court was careful to limit government regulation of hate speech to instances that involve true threats or incitement.

When it comes to incitement, the legal standard was set by the court’s landmark Brandenburg v. Ohio decision in 1969, which held that:

the constitutional guarantees of free speech and free press do not permit a State to forbid or proscribe advocacy of the use of force or of law violation except where such advocacy is directed to inciting or producing imminent lawless action and is likely to incite or produce such action. [EMPHASIS ADDED]

In other words, while “hate speech” is protected by the First Amendment, specific types of speech that convey true threats or fit under the related doctrine of incitement are not. The government may regulate those types of speech. And they do. In fact, social media users can be, and often are, charged with crimes for threats made online. But the government can’t issue a per se ban on hate speech or “extremist content.”

Just as with misinformation, the government also can’t condition Section 230 immunity on platforms removing hate speech. Insofar as speech is protected under the First Amendment, the government can’t condition a government benefit specifically on its removal. Even the SAFE TECH Act’s model for holding platforms accountable for amplifying hate speech or extremist content would have to be limited to speech that amounts to true threats or incitement. This is a far narrower category of hateful speech than the examples that concern legislators. 

Social media companies do remain free under the law to moderate hateful content as they see fit under their terms of service. Section 230 immunity is not dependent on whether companies do or don’t moderate such content, or on how they define hate speech. But government efforts to step in and define hate speech would likely run into First Amendment problems unless they stay focused on unprotected threats and incitement.

What Can the Government Do?

One may fairly ask what it is that governments can do to combat misinformation and hate speech online. The answer may be a law that requires platforms to take down speech only after a court has declared it illegal, as proposed by the PACT Act, sponsored in the last session by Sens. Brian Schatz (D-Hawaii) and John Thune (R-S.D.). Such speech may, in some circumstances, include misinformation or hate speech.

But as outlined above, the misinformation that the government can regulate is limited to situations like fraud or defamation, while the hate speech it can regulate is limited to true threats and incitement. A narrowly tailored law that looked to address those specific categories may or may not be a good idea, but it would likely survive First Amendment scrutiny, and may even prove a productive line of discussion with the tech CEOs.