
After the oral arguments in Twitter v. Taamneh, Geoffrey Manne, Kristian Stout, and I spilled a lot of ink thinking through the law & economics of intermediary liability and how to draw lines when it comes to social-media companies’ responsibility to prevent online harms stemming from illegal conduct on their platforms. With the Supreme Court’s recent decision in Twitter v. Taamneh, it is worth revisiting that post to see what we got right, as well as what the opinion could mean for future First Amendment cases—particularly those concerning Texas and Florida’s common-carriage laws and other challenges to the bounds of Section 230 more generally.

What We Got Right: Necessary Limitations on Secondary Liability Mean the Case Against Twitter Must Be Dismissed

In our earlier post, which built on our previous work on the law & economics of intermediary liability, we argued that the law sometimes does and should allow enforcement against intermediaries when they are the least-cost avoider. This is especially true on social-media sites like Twitter, where information costs may be sufficiently low that effective monitoring and control of end users is possible and pseudonymity makes bringing remedies against end users ineffective. We noted, however, that there are also costs to intermediary liability. These manifest particularly in “collateral censorship,” which occurs when social-media companies remove user-generated content in order to avoid liability. Thus, a balance must be struck:

From an economic perspective, liability should be imposed on the party or parties best positioned to deter the harms in question, so long as the social costs incurred by, and as a result of, enforcement do not exceed the social gains realized. In other words, there is a delicate balance that must be struck to determine when intermediary liability makes sense in a given case. On the one hand, we want illicit content to be deterred, and on the other, we want to preserve the open nature of the Internet. The costs generated from the over-deterrence of legal, beneficial speech is why intermediary liability for user-generated content can’t be applied on a strict-liability basis, and why some bad content will always exist in the system.

In particular, we noted the need for limiting principles to intermediary liability. As we put it in our Fleites amicus:

In theory, any sufficiently large firm with a role in the commerce at issue could be deemed liable if all that is required is that its services “allow[]” the alleged principal actors to continue to do business. FedEx, for example, would be liable for continuing to deliver packages to MindGeek’s address. The local waste management company would be liable for continuing to service the building in which MindGeek’s offices are located. And every online search provider and Internet service provider would be liable for continuing to provide service to anyone searching for or viewing legal content on MindGeek’s sites.

The Court struck very similar notes in its Taamneh opinion regarding the need to limit what it calls “secondary liability” under the aiding-and-abetting statute. It noted that a person may be responsible at common law for a crime or tort if he helps another complete its commission, but that such liability has never been “boundless.” If it were otherwise, Justice Clarence Thomas wrote for a unanimous Court, “aiding-and-abetting liability could sweep in innocent bystanders as well as those who gave only tangential assistance.” Offering the example of a robbery, Thomas argued that if “any assistance of any kind were sufficient to create liability… then anyone who passively watched a robbery could be said to commit aiding and abetting by failing to call the police.”

Here, the Court found important the common law’s distinction between acts of commission and omission:

[O]ur legal system generally does not impose liability for mere omissions, inactions, or nonfeasance; although inaction can be culpable in the face of some independent duty to act, the law does not impose a generalized duty to rescue… both criminal and tort law typically sanction only “wrongful conduct,” bad acts, and misfeasance… Some level of blameworthiness is therefore ordinarily required. 

If liability could attach to omissions in the absence of an independent duty to act, there would be no limiting principle to prevent its application far beyond what anyone (except for the cop in the final episode of Seinfeld) would consider reasonable:

[I]f aiding-and-abetting liability were taken too far, then ordinary merchants could become liable for any misuse of their goods and services, no matter how attenuated their relationship with the wrongdoer. And those who merely deliver mail or transmit emails could be liable for the tortious messages contained therein. For these reasons, courts have long recognized the need to cabin aiding-and-abetting liability to cases of truly culpable conduct.

Applying this framework to Twitter, the Court first outlined the plaintiffs’ theories of how Twitter “helped” ISIS:

First, ISIS was active on defendants’ social-media platforms, which are generally available to the internet-using public with little to no front-end screening by defendants. In other words, ISIS was able to upload content to the platforms and connect with third parties, just like everyone else. Second, defendants’ recommendation algorithms matched ISIS-related content to users most likely to be interested in that content—again, just like any other content. And, third, defendants allegedly knew that ISIS was uploading this content to such effect, but took insufficient steps to ensure that ISIS supporters and ISIS-related content were removed from their platforms. Notably, plaintiffs never allege that ISIS used defendants’ platforms to plan or coordinate the Reina attack; in fact, they do not allege that Masharipov himself ever used Facebook, YouTube, or Twitter.

The Court rejected each of these allegations as insufficient to establish Twitter’s liability in the absence of an independent duty to act, pointing back to the distinction between an act that affirmatively helped to cause harm and an omission:

[T]he only affirmative “conduct” defendants allegedly undertook was creating their platforms and setting up their algorithms to display content relevant to user inputs and user history. Plaintiffs never allege that, after defendants established their platforms, they gave ISIS any special treatment or words of encouragement. Nor is there reason to think that defendants selected or took any action at all with respect to ISIS’ content (except, perhaps, blocking some of it).

In our earlier post on Taamneh, we argued that the plaintiff’s “theory of liability would contain no viable limiting principle” and asked “what in principle would separate a search engine from Twitter, if the search engine linked to an alleged terrorist’s account?” The Court made a similar argument, positing that, while “bad actors like ISIS are able to use platforms like defendants’ for illegal—and sometimes terrible—ends,” the same “could be said of cell phones, email, or the internet generally.” Despite this, “internet or cell service providers [can’t] incur culpability merely for providing their services to the public writ large. Nor do we think that such providers would normally be described as aiding and abetting, for example, illegal drug deals brokered over cell phones—even if the provider’s conference-call or video-call features made the sale easier.” 

The Court concluded:

At bottom, then, the claim here rests less on affirmative misconduct and more on an alleged failure to stop ISIS from using these platforms. But, as noted above, both tort and criminal law have long been leery of imposing aiding-and-abetting liability for mere passive nonfeasance.

In sum, absent an independent statutory duty to act, Twitter could not be held liable on these allegations.

The First Amendment and Common Carriage

It’s notable that the opinion was written by Justice Thomas, who previously invited states to create common-carriage laws that he believed would be consistent with the First Amendment. In his concurrence in Biden v. Knight First Amendment Institute, in which the Court vacated the judgment below as moot, Thomas wrote of the market power allegedly held by social-media companies like Twitter, Facebook, and YouTube:

If part of the problem is private, concentrated control over online content and platforms available to the public, then part of the solution may be found in doctrines that limit the right of a private company to exclude. Historically, at least two legal doctrines limited a company’s right to exclude.

He proceeded to outline how common-carriage and public-accommodation laws can be used to limit companies from excluding users, suggesting that they would be subject to a lower standard of First Amendment scrutiny under Turner and its progeny.

Among the reasons for imposing common-carriage requirements on social-media companies, Justice Thomas found it important that they act as conduits carrying the speech of others:

Though digital instead of physical, they are at bottom communications networks, and they “carry” information from one user to another. A traditional telephone company laid physical wires to create a network connecting people. Digital platforms lay information infrastructure that can be controlled in much the same way. And unlike newspapers, digital platforms hold themselves out as organizations that focus on distributing the speech of the broader public. Federal law dictates that companies cannot “be treated as the publisher or speaker” of information that they merely distribute. 110 Stat. 137, 47 U. S. C. §230(c). 

Thomas also noted that common carriers have sometimes received certain government benefits in exchange for their universal-service obligations:

In exchange for regulating transportation and communication industries, governments—both State and Federal—have sometimes given common carriers special government favors. For example, governments have tied restrictions on a carrier’s ability to reject clients to “immunity from certain types of suits” or to regulations that make it more difficult for other companies to compete with the carrier (such as franchise licenses). (internal citations omitted)

While Taamneh is not a First Amendment case, some of the language in Thomas’ opinion suggests that social-media companies are the types of businesses that may receive conduit treatment for third-party conduct in exchange for accepting common-carriage requirements.

As noted above, in holding that Twitter did not aid and abet the attack, the Court found it important that “there is not even reason to think that defendants carefully screened any content before allowing users to upload it onto their platforms. If anything, the opposite is true: By plaintiffs’ own allegations, these platforms appear to transmit most content without inspecting it.” The Court then compared social-media platforms to “cell phones, email, or the internet generally,” which are classic examples of conduits. In particular, telephone companies were common carriers that largely received immunity from liability for their users’ conduct.

Thus, while Taamneh wouldn’t be directly binding in the First Amendment context, this language will likely be cited in the briefs by those supporting the Texas and Florida common-carriage laws when the Supreme Court reviews them.

Section 230 and Neutral Tools

On the other hand—and despite the views Thomas expressed about Section 230 immunity in his Malwarebytes statement—there is much in the Court’s reasoning in Taamneh to suggest that the justices see algorithmic recommendations as neutral tools that would not, in and of themselves, preclude a finding of immunity for online platforms.

While the Court’s decision in Gonzalez v. Google basically said it didn’t need to reach the Section 230 question because the allegations failed to state a claim under Taamneh’s reasoning, it appears highly likely that a majority would have found the platforms immune under Section 230 despite their use of algorithmic recommendations. For instance, in Taamneh, the Court disagreed with the assertion that recommendation algorithms amounted to substantial assistance, reasoning that:

By plaintiffs’ own telling, their claim is based on defendants’ “provision of the infrastructure which provides material support to ISIS.” Viewed properly, defendants’ “recommendation” algorithms are merely part of that infrastructure. All the content on their platforms is filtered through these algorithms, which allegedly sort the content by information and inputs provided by users and found in the content itself. As presented here, the algorithms appear agnostic as to the nature of the content, matching any content (including ISIS’ content) with any user who is more likely to view that content. The fact that these algorithms matched some ISIS content with some users thus does not convert defendants’ passive assistance into active abetting. Once the platform and sorting-tool algorithms were up and running, defendants at most allegedly stood back and watched; they are not alleged to have taken any further action with respect to ISIS. 

On the other hand, the Court found it important that there were no allegations establishing a nexus (such as unusual provision of services or conscious and selective promotion) between Twitter’s provision of a communications platform and the terrorist activity:

To be sure, we cannot rule out the possibility that some set of allegations involving aid to a known terrorist group would justify holding a secondary defendant liable for all of the group’s actions or perhaps some definable subset of terrorist acts. There may be, for example, situations where the provider of routine services does so in an unusual way or provides such dangerous wares that selling those goods to a terrorist group could constitute aiding and abetting a foreseeable terror attack. Cf. Direct Sales Co. v. United States, 319 U. S. 703, 707, 711–712, 714–715 (1943) (registered morphine distributor could be liable as a coconspirator of an illicit operation to which it mailed morphine far in excess of normal amounts). Or, if a platform consciously and selectively chose to promote content provided by a particular terrorist group, perhaps it could be said to have culpably assisted the terrorist group. Cf. Passaic Daily News v. Blair, 63 N. J. 474, 487–488, 308 A. 2d 649, 656 (1973) (publishing employment advertisements that discriminate on the basis of sex could aid and abet the discrimination).

In other words, this language could suggest that, as long as the algorithms are essentially “neutral tools” (to use the language of Roommates.com and its progeny), social-media platforms are immune for third-party speech that they incidentally promote. But if platforms design their algorithmic recommendations in a way that suggests they “consciously and selectively” promote illegal content, then they could lose immunity.

Unless other justices share Thomas’ appetite to limit Section 230 immunity substantially in a future case, this language from Taamneh would likely be used to expand the law’s protections to algorithmic recommendations under a Roommates.com/”neutral tools” analysis.

Conclusion

While the Court did not end up issuing the huge Section 230 decision that some expected, the Taamneh decision will be a big deal going forward for the interconnected issues of online intermediary liability, the First Amendment, and Section 230. Language from Justice Thomas’ opinion will likely be cited in the litigation over the Texas and Florida common-carriage laws, as well as in future Section 230 cases.

Legislation to secure children’s safety online is all the rage right now, not only on Capitol Hill, but in state legislatures across the country. One of the favored approaches is to impose on platforms a duty of care to protect teen users.

For example, Sens. Richard Blumenthal (D-Conn.) and Marsha Blackburn (R-Tenn.) have reintroduced the Kids Online Safety Act (KOSA), which would require that social-media platforms “prevent or mitigate” a variety of potential harms, including mental-health harms; addiction; online bullying and harassment; sexual exploitation and abuse; promotion of narcotics, tobacco, gambling, or alcohol; and predatory, unfair, or deceptive business practices.

But while bills of this sort would define legal responsibilities that online platforms have to their minor users, this statutory duty of care is more likely to result in the exclusion of teens from online spaces than to promote better care of teens who use them.

Drawing on the previous research that I and my International Center for Law & Economics (ICLE) colleagues have done on the economics of intermediary liability and First Amendment jurisprudence, I will in this post consider the potential costs and benefits of imposing a statutory duty of care similar to that proposed by KOSA.

The Law & Economics of Online Intermediary Liability and the First Amendment (Kids Edition)

Previously (in a law review article, an amicus brief, and a blog post), we at ICLE have argued that there are times when the law rightfully places responsibility on intermediaries to monitor and control what happens on their platforms. From an economic point of view, it makes sense to impose liability on intermediaries when they are the least-cost avoider: i.e., the party that is best positioned to limit harm, even if they aren’t the party committing the harm.

On the other hand, as we have also noted, there are costs to imposing intermediary liability. This is especially true for online platforms with user-generated content. Specifically, there is a risk of “collateral censorship,” wherein online platforms remove more speech than is necessary in order to avoid potential liability. Imposing a duty of care to “protect” minors, in particular, could lead online platforms to restrict teens’ access to their services altogether.

If the social costs that arise from the imposition of intermediary liability are greater than the benefits accrued, then such an arrangement would be welfare-destroying, on net. While we want to deter harmful (illegal) content, we don’t want to do so if we end up deterring access to too much beneficial (legal) content as a result.
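This tradeoff can be put in stylized terms (the notation below is ours, offered purely as an illustration; it does not appear in the posts or cases discussed). Liability on a given intermediary makes economic sense only when the harm deterred exceeds the combined costs of enforcement and collateral censorship, and, among the parties capable of deterring the harm, it should fall on the least-cost avoider:

\[
\text{impose liability on intermediary } i \iff B_i > C_i^{\mathrm{enf}} + C_i^{\mathrm{col}}, \qquad i^{*} = \arg\min_i \left( C_i^{\mathrm{enf}} + C_i^{\mathrm{col}} \right),
\]

where \(B_i\) is the expected social harm deterred when party \(i\) bears liability, \(C_i^{\mathrm{enf}}\) is the cost of monitoring and enforcement borne by and because of \(i\), \(C_i^{\mathrm{col}}\) is the social cost of legal speech collaterally censored as a result, and the minimization runs over the parties actually capable of deterring the harm.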

The First Amendment often limits otherwise generally applicable laws, on grounds that they impose burdens on speech. From an economic point of view, this could be seen as an implicit subsidy. That subsidy may be justifiable, because information is a public good that would otherwise be underproduced. As Daniel A. Farber put it in 1991:

[B]ecause information is a public good, it is likely to be undervalued by both the market and the political system. Individuals have an incentive to ‘free ride’ because they can enjoy the benefits of public goods without helping to produce those goods. Consequently, neither market demand nor political incentives fully capture the social value of public goods such as information. Our polity responds to this undervaluation of information by providing special constitutional protection for information-related activities. This simple insight explains a surprising amount of First Amendment doctrine.

In particular, the First Amendment provides important limits on how far the law can go in imposing intermediary liability that would chill speech, including when dealing with potential harms to teenage users. These limitations seek the same balance that the economics of intermediary liability would suggest: how to hold online platforms liable for legally cognizable harms without restricting access to too much beneficial content. Below is a summary of some of those relevant limitations.

Speech vs. Conduct

The First Amendment differentiates between speech and conduct. While the line between the two can be messy (and “expressive conduct” has its own standard under the O’Brien test), governmental regulation of some speech acts is permissible. Thus, harassment, terroristic threats, fighting words, and even incitement to violence can be punished by law. On the other hand, the First Amendment does not generally allow the government to regulate “hate speech” or “bullying.” As the 3rd U.S. Circuit Court of Appeals explained it in the context of a school’s anti-harassment policy:

There is of course no question that non-expressive, physically harassing conduct is entirely outside the ambit of the free speech clause. But there is also no question that the free speech clause protects a wide variety of speech that listeners may consider deeply offensive, including statements that impugn another’s race or national origin or that denigrate religious beliefs… When laws against harassment attempt to regulate oral or written expression on such topics, however detestable the views expressed may be, we cannot turn a blind eye to the First Amendment implications.

In other words, while a duty of care could reach harassing conduct, it is unclear how it could reach pure expression on online platforms without implicating the First Amendment.

Impermissibly Vague

The First Amendment also disallows rules sufficiently vague that they would preclude a person of ordinary intelligence from having fair notice of what is prohibited. For instance, in an order handed down earlier this year in Høeg v. Newsom, the federal district court granted the plaintiffs’ motion to enjoin a California law that would charge medical doctors with sanctionable “unprofessional conduct” if, as part of treatment or advice, they shared with patients “false information that is contradicted by contemporary scientific consensus contrary to the standard of care.”

The court found that “contemporary scientific consensus” was so “ill-defined [that] physician plaintiffs are unable to determine if their intended conduct contradicts [it].” The court asked a series of questions relevant to trying to define the phrase:

[W]ho determines whether a consensus exists to begin with? If a consensus does exist, among whom must the consensus exist (for example practicing physicians, or professional organizations, or medical researchers, or public health officials, or perhaps a combination)? In which geographic area must the consensus exist (California, or the United States, or the world)? What level of agreement constitutes a consensus (perhaps a plurality, or a majority, or a supermajority)? How recently in time must the consensus have been established to be considered “contemporary”? And what source or sources should physicians consult to determine what the consensus is at any given time (perhaps peer-reviewed scientific articles, or clinical guidelines from professional organizations, or public health recommendations)?

Thus, any duty of care to limit access to potentially harmful online content must not be defined in a way that is too vague for a person of ordinary intelligence to know what is prohibited.

Liability for Third-Party Speech

The First Amendment limits intermediary liability when dealing with third-party speech. For the purposes of defamation law, the traditional continuum of liability was from publishers to distributors (or secondary publishers) to conduits. Publishers—such as newspapers, book publishers, and television producers—exercised significant editorial control over content. As a result, they could be held liable for defamatory material, because it was seen as their own speech. Conduits—like the telephone company—were on the other end of the spectrum, and could not be held liable for the speech of those who used their services.

As the Court of Appeals of the State of New York put it in a 1974 opinion:

In order to be deemed to have published a libel a defendant must have had a direct hand in disseminating the material whether authored by another, or not. We would limit [liability] to media of communications involving the editorial or at least participatory function (newspapers, magazines, radio, television and telegraph)… The telephone company is not part of the “media” which puts forth information after processing it in one way or another. The telephone company is a public utility which is bound to make its equipment available to the public for any legal use to which it can be put…

Distributors—which included booksellers and libraries—were in the middle of this continuum. They had to have some notice that content they distributed was defamatory before they could be held liable.

Courts have long explored the tradeoffs between liability and carriage of third-party speech in this context. For instance, in Smith v. California, the U.S. Supreme Court found that an ordinance establishing strict liability for selling obscene materials violated the First Amendment because:

By dispensing with any requirement of knowledge of the contents of the book on the part of the seller, the ordinance tends to impose a severe limitation on the public’s access to constitutionally protected matter. For if the bookseller is criminally liable without knowledge of the contents, and the ordinance fulfills its purpose, he will tend to restrict the books he sells to those he has inspected; and thus the State will have imposed a restriction upon the distribution of constitutionally protected as well as obscene literature. It has been well observed of a statute construed as dispensing with any requirement of scienter that: “Every bookseller would be placed under an obligation to make himself aware of the contents of every book in his shop. It would be altogether unreasonable to demand so near an approach to omniscience.” (internal citations omitted)

It’s also worth noting that traditional publisher liability was limited in the case of republication, such as when newspapers republished stories from wire services like the Associated Press. Courts observed the economic costs that would attend imposing a strict-liability standard in such cases:

No newspaper could afford to warrant the absolute authenticity of every item of its news, nor assume in advance the burden of specially verifying every item of news reported to it by established news gathering agencies, and continue to discharge with efficiency and promptness the demands of modern necessity for prompt publication, if publication is to be had at all.

Over time, the rule was extended, either by common law or statute, from newspapers to radio and television broadcasts, with the treatment of republication of third-party speech eventually resembling conduit liability even more than distributor liability. See Brent Skorup and Jennifer Huddleston’s “The Erosion of Publisher Liability in American Law, Section 230, and the Future of Online Curation” for a more thoroughgoing treatment of the topic.

What pushed the law toward conduit liability for entities carrying third-party speech was implicit economic reasoning. For example, in 1959’s Farmers Educational & Cooperative Union v. WDAY, Inc., the Supreme Court held that a broadcaster could not be held liable for defamatory statements made by a political candidate on the air, reasoning that:

The decision a broadcasting station would have to make in censoring libelous discussion by a candidate is far from easy. Whether a statement is defamatory is rarely clear. Whether such a statement is actionably libelous is an even more complex question, involving as it does, consideration of various legal defenses such as “truth” and the privilege of fair comment. Such issues have always troubled courts… if a station were held responsible for the broadcast of libelous material, all remarks even faintly objectionable would be excluded out of an excess of caution. Moreover, if any censorship were permissible, a station so inclined could intentionally inhibit a candidate’s legitimate presentation under the guise of lawful censorship of libelous matter. Because of the time limitation inherent in a political campaign, erroneous decisions by a station could not be corrected by the courts promptly enough to permit the candidate to bring improperly excluded matter before the public. It follows from all this that allowing censorship, even of the attenuated type advocated here, would almost inevitably force a candidate to avoid controversial issues during political debates over radio and television, and hence restrict the coverage of consideration relevant to intelligent political decision.

It is clear from the foregoing that imposing a duty of care on online platforms to limit speech in ways that would make them strictly liable would be inconsistent with distributor liability. But even a duty of care that more closely resembled a negligence standard could implicate speech interests if online platforms are seen as akin to newspapers, or to radio and television broadcasters, when they act as republishers of third-party speech. Such cases would appear to call for conduit liability.

The First Amendment Applies to Children

The First Amendment has been found to limit what governments can do in the name of protecting children from encountering potentially harmful speech. For example, California in 2005 passed a law prohibiting the sale or rental of “violent video games” to minors. In Brown v. Entertainment Merchants Ass’n, the Supreme Court held the law unconstitutional, finding that:

No doubt [the government] possesses legitimate power to protect children from harm, but that does not include a free-floating power to restrict the ideas to which children may be exposed. “Speech that is neither obscene as to youths nor subject to some other legitimate proscription cannot be suppressed solely to protect the young from ideas or images that a legislative body thinks unsuitable for them.” (internal citations omitted)

The Court did not find it persuasive that the video games were violent (noting that children’s books often depict violence) or that they were interactive (as some children’s books offer choose-your-own-adventure options). In other words, there was nothing special about violent video games that would subject them to a lower level of constitutional protection, even for minors that wished to play them.

The Court also did not find persuasive California’s appeal that the law aided parents in making decisions about what their children could access, stating:

California claims that the Act is justified in aid of parental authority: By requiring that the purchase of violent video games can be made only by adults, the Act ensures that parents can decide what games are appropriate. At the outset, we note our doubts that punishing third parties for conveying protected speech to children just in case their parents disapprove of that speech is a proper governmental means of aiding parental authority.

Justice Samuel Alito’s concurrence in Brown would have found the California law unconstitutionally vague, arguing that constitutionally protected speech would be chilled as a result of the law’s enforcement. The fact that the law’s intent was to protect minors didn’t change that analysis.

Limiting the availability of speech to minors in the online world is subject to the same analysis as in the offline world. In Reno v. ACLU, the Supreme Court made clear that the First Amendment applies with equal effect online, stating that “our cases provide no basis for qualifying the level of First Amendment scrutiny that should be applied to this medium.” In Packingham v. North Carolina, the Court went so far as to call social-media platforms “the modern public square.”

Restricting minors’ access to online platforms through age-verification requirements has already been found to violate the First Amendment. In Ashcroft v. ACLU (II), the Supreme Court reviewed provisions of the Child Online Protection Act (COPA) that restricted posting content “harmful to minors” for “commercial purposes.” COPA allowed an affirmative defense if the online platform restricted access by minors through various age-verification devices. The Court found that “[b]locking and filtering software is an alternative that is less restrictive than COPA, and, in addition, likely more effective as a means of restricting children’s access to materials harmful to them” and upheld a preliminary injunction against the law, pending further review of its constitutionality.

On remand, the 3rd Circuit found that “[t]he Supreme Court has disapproved of content-based restrictions that require recipients to identify themselves affirmatively before being granted access to disfavored speech, because such restrictions can have an impermissible chilling effect on those would-be recipients.” The circuit court would eventually uphold the district court’s finding of unconstitutionality and permanently enjoin the statute’s provisions, noting that the age-verification requirements “would deter users from visiting implicated Web sites” and therefore “would chill protected speech.”

A duty of care to protect minors could be unconstitutional if it ends up limiting access to speech that is not illegal for them to access. Age-verification requirements that would likely accompany such a duty could also result in a statute being found unconstitutional.

In sum:

  • A duty of care to prevent or mitigate harassment and bullying has First Amendment implications if it regulates pure expression, such as speech on online platforms.
  • A duty of care to limit access to potentially harmful online speech can’t be defined so vaguely that a person of ordinary intelligence can’t know what is prohibited.
  • A duty of care that establishes a strict-liability standard on online speech platforms would likely be unconstitutional for its chilling effects on legal speech. A duty of care that establishes a negligence standard could similarly lead to “collateral censorship” of third-party speech.
  • A duty of care to protect minors could be unconstitutional if it limits access to legal speech. De facto age-verification requirements could also be found unconstitutional.

The Problems with KOSA: The First Amendment and Limiting Kids’ Access to Online Speech

KOSA would establish a duty of care for covered online platforms to “act in the best interests of a user that the platform knows or reasonably should know is a minor by taking reasonable measures in its design and operation of products and services to prevent and mitigate” a variety of potential harms, including:

  1. Consistent with evidence-informed medical information, the following mental health disorders: anxiety, depression, eating disorders, substance use disorders, and suicidal behaviors.
  2. Patterns of use that indicate or encourage addiction-like behaviors.
  3. Physical violence, online bullying, and harassment of the minor.
  4. Sexual exploitation and abuse.
  5. Promotion and marketing of narcotic drugs (as defined in section 102 of the Controlled Substances Act (21 U.S.C. 802)), tobacco products, gambling, or alcohol.
  6. Predatory, unfair, or deceptive marketing practices, or other financial harms.

There are also a variety of tools and notices that must be made available to users under age 17, as well as to their parents.

Reno and Age Verification

KOSA could be found unconstitutional under Reno and the COPA line of cases for creating a de facto age-verification requirement. The bill’s drafters appear to be aware of the legal problems that an age-verification requirement would entail. KOSA therefore states that:

Nothing in this Act shall be construed to require—(1) the affirmative collection of any personal data with respect to the age of users that a covered platform is not already collecting in the normal course of business; or (2) a covered platform to implement an age gating or age verification functionality.

But this doesn’t change the fact that, in order to effectuate KOSA’s requirements, online platforms would have to know their users’ ages. KOSA’s duty of care incorporates a constructive-knowledge requirement (i.e., “reasonably should know is a minor”). That duty, combined with the mandated notices and tools that must be made available to minors, makes it “reasonable” to expect that platforms would have to verify the age of each user.

If a court were to agree that KOSA doesn’t require age gating or age verification, this would likely render the act ineffective. As it stands, most of the online platforms that would be covered by KOSA ask users their age (or birthdate) only upon creation of a profile, a requirement easily evaded by lying. While users who are under age 17 (but at least 13) at the time of the act’s passage and who have already created profiles would be covered, it would appear the act wouldn’t require platforms to vet whether users who claimed to be at least 17 when creating new profiles were actually telling the truth.

Vagueness and Protected Speech

Even if KOSA were not found unconstitutional for creating a de facto age-verification scheme, it still likely would lead to kids under 17 being restricted from accessing protected speech. Several of the categories of harm the duty of care covers could encompass legal speech. Moreover, those categories are defined so vaguely that the duty likely would chill access to legal speech.

For example, pictures of photoshopped models are protected speech. If teenage girls want to see such content on their feeds, it isn’t clear that the law can constitutionally stop them, even if it’s done by creating a duty of care to prevent and mitigate harms associated with “anxiety, depression, or eating disorders.”

Moreover, access to content that kids really like to see or hear is still speech, even if they like it so much that an outside observer may think they are addicted to it. Much as the Court said in Brown, the government does not have “a free-floating power to restrict [speech] to which children may be exposed.”

KOSA’s Sections 3(A)(1) and 3(A)(2) would also run into problems, as they are so vague that a person of ordinary intelligence would not know what they prohibit. As a result, there would likely be chilling effects on legal speech.

Much like in Høeg, the phrase “consistent with evidence-informed medical information” leads to various questions regarding how an online platform could comply with the law. For instance, it isn’t clear what content or design issue would be implicated by this subsection. Would a platform need to hire mental-health professionals to consult with them on every product-design and content-moderation decision?

Even worse is the requirement to prevent and mitigate “patterns of use that indicate or encourage addiction-like behaviors,” which isn’t defined by reference to “evidence-informed medical information” or to anything else.

Even Bullying May Be Protected Speech

Even KOSA’s duty to prevent and mitigate “physical violence, online bullying, and harassment of the minor” in Section 3(3) could implicate the First Amendment. While physical violence clearly falls outside the First Amendment’s protections (although it’s unclear how an online platform could prevent or mitigate such violence), online bullying and harassment are often nonetheless pure expression. As a result, this duty of care could receive constitutional scrutiny over whether it effectively limits lawful (though awful) speech directed at minors.

Locking Children Out of Online Spaces

KOSA’s duty of care appears to be based on negligence, in that it requires platforms to take “reasonable measures.” This probably makes it more likely to survive First Amendment scrutiny than a strict-liability regime would.

It could, however, still present real (and costly) product-design and moderation challenges for online platforms. As a result, those platforms would have significant incentives to exclude altogether users they know or reasonably believe to be under age 17.

While this is not a First Amendment problem per se, it nonetheless illustrates how laws intended to “protect” children’s safety online can instead end up excluding them from online speech platforms altogether.

Conclusion

Despite being christened the “Kids Online Safety Act,” KOSA would result in real harm to kids if enacted into law. Its likely result would be considerable “collateral censorship,” as online platforms restrict teens’ access in order to avoid liability.

The bill’s duty of care would either require likely unconstitutional age verification or be rendered ineffective, as teen users lie about their age in order to access the content they want.

Congress shall make no law abridging the freedom of speech, even if it is done in the name of children.

The Senate Judiciary Committee’s Subcommittee on Privacy, Technology, and the Law will host a hearing this afternoon on Gonzalez v. Google, one of two terrorism-related cases currently before the U.S. Supreme Court that implicate Section 230 of the Communications Decency Act of 1996.

We’ve written before about how the Court might and should rule in Gonzalez (see here and here), but less attention has been devoted to the other Section 230 case on the docket: Twitter v. Taamneh. That’s unfortunate, as a thoughtful reading of the dispute at issue in Taamneh could highlight some of the law’s underlying principles. At first blush, alas, it does not appear that the Court is primed to apply that reading.

During the recent oral arguments, the Court considered whether Twitter (and other social-media companies) can be held liable under the Antiterrorism Act for providing a general communications platform that may be used by terrorists. The question under review by the Court is whether Twitter “‘knowingly’ provided substantial assistance [to terrorist groups] under [the statute] merely because it allegedly could have taken more ‘meaningful’ or ‘aggressive’ action to prevent such use.” The theory of the plaintiffs (respondents before the Court) is, essentially, that Twitter aided and abetted terrorism through its inaction.

The oral argument found the justices grappling with where to draw the line between aiding and abetting, and otherwise legal activity that happens to make it somewhat easier for bad actors to engage in illegal conduct. The nearly three-hour discussion between the justices and the attorneys yielded little in the way of a viable test. But a more concrete focus on the law & economics of collateral liability (which we also describe as “intermediary liability”) would have significantly aided the conversation.   

Taamneh presents a complex question of intermediary liability generally, one that goes beyond the bounds of a (relatively) simpler Section 230 analysis. As we discussed in our amicus brief in Fleites v. MindGeek (and as briefly described in this blog post), intermediary liability generally cannot be predicated on the mere existence of harmful or illegal content on an online platform that could, conceivably, have been prevented by some action by the platform or other intermediary.

The specific statute may impose other limits (like the “knowing” requirement in the Antiterrorism Act), but intermediary liability makes sense only when a particular intermediary defendant is positioned to control (and thus remedy) the bad conduct in question, and when imposing liability would cause the intermediary to act in such a way that the benefits of its conduct in deterring harm outweigh the costs of impeding its normal functioning as an intermediary.

Had the Court adopted such an approach in its questioning, it could have better homed in on the proper dividing line between parties whose normal conduct might reasonably give rise to liability (without some heightened effort to mitigate harm) and those whose conduct should not entail this sort of elevated responsibility.

Here, the plaintiffs have framed their case in a way that would essentially collapse this analysis into a strict liability standard by simply asking “did something bad happen on a platform that could have been prevented?” As we discuss below, the plaintiffs’ theory goes too far and would overextend intermediary liability to the point that the social costs would outweigh the benefits of deterrence.

The Law & Economics of Intermediary Liability: Who’s Best Positioned to Monitor and Control?

In our amicus brief in Fleites v. MindGeek (as well as our law review article on Section 230 and intermediary liability), we argued that, in limited circumstances, the law should (and does) place responsibility on intermediaries to monitor and control conduct. It is not always sufficient to aim legal sanctions solely at the parties who commit harms directly—e.g., where harms are committed by many pseudonymous individuals dispersed across large online services. In such cases, social costs may be minimized when legal responsibility is placed upon the least-cost avoider: the party in the best position to limit harm, even if it is not the party directly committing the harm.

Thus, in some circumstances, intermediaries (like Twitter) may be the least-cost avoider, such as when information costs are sufficiently low that effective monitoring and control of end users is possible, and when pseudonymity makes remedies against end users ineffective.

But there are costs to imposing such liability—including, importantly, “collateral censorship” of user-generated content by online social-media platforms. This manifests in platforms acting more defensively—taking down more speech, and generally moving in a direction that would make the Internet less amenable to open, public discussion—in an effort to avoid liability. Indeed, a core reason that Section 230 exists in the first place is to reduce these costs. (Whether Section 230 gets the balance correct is another matter, which we take up at length in our law review article linked above).

From an economic perspective, liability should be imposed on the party or parties best positioned to deter the harms in question, so long as the social costs incurred by, and as a result of, enforcement do not exceed the social gains realized. In other words, there is a delicate balance that must be struck to determine when intermediary liability makes sense in a given case. On the one hand, we want illicit content to be deterred, and on the other, we want to preserve the open nature of the Internet. The costs generated from the over-deterrence of legal, beneficial speech is why intermediary liability for user-generated content can’t be applied on a strict-liability basis, and why some bad content will always exist in the system.

The Spectrum of Properly Construed Intermediary Liability: Lessons from Fleites v. MindGeek

Fleites v. MindGeek illustrates well that the proper application of liability to intermediaries exists on a spectrum. MindGeek—the owner/operator of the website Pornhub—was sued under Racketeer Influenced and Corrupt Organizations Act (RICO) and Victims of Trafficking and Violence Protection Act (TVPA) theories for promoting and profiting from nonconsensual pornography and human trafficking. But the plaintiffs also joined Visa as a defendant, claiming that Visa knowingly provided payment processing for some of Pornhub’s services, making it an aider/abettor.

The “best” defendants, obviously, would be the individuals actually producing the illicit content, but limiting enforcement to direct actors may be insufficient. The statute therefore contemplates bringing enforcement actions against certain intermediaries for aiding and abetting. But there is a host of intermediaries that could theoretically be brought into a liability scheme. The first, of course, is MindGeek, as the platform operator. The plaintiffs argued that Visa was also sufficiently connected to the harm, by processing payments for MindGeek users and content posters, and that it should therefore bear liability as well.

The problem, however, is that there is no limiting principle in the plaintiffs’ theory of the case against Visa. In principle, the group of intermediaries “facilitating” the illicit conduct is practically limitless. As we pointed out in our Fleites amicus:

In theory, any sufficiently large firm with a role in the commerce at issue could be deemed liable if all that is required is that its services “allow[]” the alleged principal actors to continue to do business. FedEx, for example, would be liable for continuing to deliver packages to MindGeek’s address. The local waste management company would be liable for continuing to service the building in which MindGeek’s offices are located. And every online search provider and Internet service provider would be liable for continuing to provide service to anyone searching for or viewing legal content on MindGeek’s sites.

Twitter’s attorney in Taamneh, Seth Waxman, made much the same point in responding to Justice Sonia Sotomayor:

…the rule that the 9th Circuit has posited and that the plaintiffs embrace… means that as a matter of course, every time somebody is injured by an act of international terrorism committed, planned, or supported by a foreign terrorist organization, each one of these platforms will be liable in treble damages and so will the telephone companies that provided telephone service, the bus company or the taxi company that allowed the terrorists to move about freely. [emphasis added]

In our Fleites amicus, we argued that a more practical approach is needed: one that tries to draw a sensible line on this liability spectrum. Most importantly, we argued that Visa was not in a position to monitor and control what happened on MindGeek’s platform, and thus was a poor candidate for extending intermediary liability. In that case, because of the complexities of the payment-processing network, Visa had no visibility into what specific content was being purchased, what content was being uploaded to Pornhub, or which individuals may have been uploading illicit content. Worse, the only evidence—if it can be called that—that Visa was aware that anything illicit was happening consisted of news reports in the mainstream media, which may or may not have been accurate, and on which Visa was unable to do any meaningful follow-up investigation.

Our Fleites brief didn’t explicitly consider MindGeek’s potential liability. But MindGeek obviously is in a much better position to monitor and control illicit content. With that said, merely having the ability to monitor and control is not sufficient. Given that content moderation is necessarily an imperfect activity, there will always be some bad content that slips through. Thus, the relevant question is, under the circumstances, did the intermediary act reasonably—e.g., did it comply with best practices—in attempting to identify, remove, and deter illicit content?

In Visa’s case, the answer is not difficult. Given that it had no way to identify or single out transactions likely to be illegal, its only recourse to reduce harm (and its liability risk) would be to cut off all payment services for MindGeek. The constraints on perfectly legal conduct that this would entail certainly far outweigh the benefits of reducing illegal activity.

Moreover, such a theory could require Visa to stop processing payments for an enormous swath of legal activity outside of Pornhub. For example, purveyors of illegal content on Pornhub use ISP services to post their content. A theory of liability that held Visa responsible simply because it plays some small part in facilitating the illegal activity’s existence would presumably also require Visa to shut off payments to ISPs—certainly, that would also curtail the amount of illegal content.

With MindGeek, the answer is a bit more difficult. The anonymous or pseudonymous posting of pornographic content makes it extremely difficult to go after end users. But knowing that human trafficking and nonconsensual pornography are endemic problems, and knowing (as it arguably did) that such content was regularly posted on Pornhub, MindGeek could be deemed to have acted unreasonably for not having exercised very strict control over its users (at minimum, say, by verifying users’ real identities to facilitate law enforcement against bad actors and deter the posting of illegal content). Indeed, it is worth noting that MindGeek/Pornhub did implement exactly this control, among others, following the public attention arising from news reports of nonconsensual and trafficking-related content on the site.

But liability for MindGeek is plausible only because it might be able to act in ways that impose greater burdens on illegal content providers without deterring excessive amounts of legal content. If its only reasonable means of acting would be, say, to shut down Pornhub entirely, then, just as with Visa, the cost of imposing liability in terms of this “collateral censorship” would surely outweigh the benefits.

Applying the Law & Economics of Collateral Liability to Twitter in Taamneh

Contrast the situation of MindGeek in Fleites with Twitter in Taamneh. Twitter may seem to be a good candidate for intermediary liability. It also has the ability to monitor and control what is posted on its platform. And it faces a similar problem of pseudonymous posting that may make it difficult to go after end users for terrorist activity. But this is not the end of the analysis.

Given that Twitter operates a platform that hosts the general—and overwhelmingly legal—discussions of hundreds of millions of users, posting billions of pieces of content, it would be reasonable to impose a heightened responsibility on Twitter only if it could exercise it without excessively deterring the copious legal content on its platform.

At the same time, Twitter does have active policies to police and remove terrorist content. The relevant question, then, is not whether it should do anything to police such content, but whether a failure to do some unspecified amount more was unreasonable, such that its conduct should constitute aiding and abetting terrorism.

Under the Antiterrorism Act, because the basis of liability is “knowingly providing substantial assistance” to a person who committed an act of international terrorism, “unreasonableness” here would have to mean that the failure to do more transforms its conduct from insubstantial to substantial assistance and/or that the failure to do more constitutes a sort of willful blindness. 

The problem is that doing more—policing its site better and removing more illegal content—would do nothing to alter the extent of assistance it provides to the illegal content that remains. And by the same token, almost by definition, Twitter does not “know” about illegal content it fails to remove. In theory, there is always “more” it could do. But given the inherent imperfections of content moderation at scale, this will always be true, right up to the point that the platform is effectively forced to shut down its service entirely.  

This doesn’t mean that reasonable content moderation couldn’t entail something more than Twitter was doing. But it does mean that the mere existence of illegal content that, in theory, Twitter could have stopped can’t be the basis of liability. And yet the Taamneh plaintiffs make no allegation that acts of terrorism were actually planned on Twitter’s platform, and offer no reasonable basis on which Twitter could have practical knowledge of such activity or practical opportunity to control it.

Nor did plaintiffs point out any examples where Twitter had actual knowledge of such content or users and failed to remove them. Most importantly, the plaintiffs did not demonstrate that any particular content-moderation activities (short of shutting down Twitter entirely) would have resulted in Twitter’s knowledge of or ability to control terrorist activity. Had they done so, it could conceivably constitute a basis for liability. But if the only practical action Twitter can take to avoid liability and prevent harm entails shutting down massive amounts of legal speech, the failure to do so cannot be deemed unreasonable or provide the basis for liability.   

And, again, such a theory of liability would contain no viable limiting principle if it does not consider the practical ability to control harmful conduct without imposing excessively costly collateral damage. Indeed, what in principle would separate a search engine from Twitter, if the search engine linked to an alleged terrorist’s account? Both entities would have access to news reports, and could thus be assumed to have a generalized knowledge that terrorist content might exist on Twitter. The implication of this case, if the plaintiff’s theory is accepted, is that Google would be forced to delist Twitter whenever a news article appears alleging terrorist activity on the service. Obviously, that is untenable for the same reason it’s not tenable to impose an effective obligation on Twitter to remove all terrorist content: the costs of lost legal speech and activity.   

Justice Ketanji Brown Jackson seemingly had the same thought when she pointedly asked whether the plaintiffs’ theory would mean that Linda Hamilton in the Halberstam v. Welch case could have been held liable for aiding and abetting, merely for taking care of Bernard Welch’s kids at home while Welch went out committing burglaries and the murder of Michael Halberstam (instead of the real reason she was held liable, which was for doing Welch’s bookkeeping and helping sell stolen items). As Jackson put it:

…[I]n the Welch case… her taking care of his children [was] assisting him so that he doesn’t have to be at home at night? He’s actually out committing robberies. She would be assisting his… illegal activities, but I understood that what made her liable in this situation is that the assistance that she was providing was… assistance that was directly aimed at the criminal activity. It was not sort of this indirect supporting him so that he can actually engage in the criminal activity.

In sum, the theory propounded by the plaintiffs (and accepted by the 9th U.S. Circuit Court of Appeals) is just too far afield for holding Twitter liable. As Twitter put it in its reply brief, the plaintiffs’ theory (and the 9th Circuit’s holding) is that:

…providers of generally available, generic services can be held responsible for terrorist attacks anywhere in the world that had no specific connection to their offerings, so long as a plaintiff alleges (a) general awareness that terrorist supporters were among the billions who used the services, (b) such use aided the organization’s broader enterprise, though not the specific attack that injured the plaintiffs, and (c) the defendant’s attempts to preclude that use could have been more effective.

Conclusion

If Section 230 immunity isn’t found to apply in Gonzalez v. Google, and the complaint in Taamneh is allowed to go forward, the most likely response of social-media companies will be to reduce the potential for liability by further restricting access to their platforms. This could mean review by some moderator or algorithm of messages or videos before they are posted to ensure that there is no terrorist content. Or it could mean review of users’ profiles before they are able to join the platform to try to ascertain their political leanings or associations with known terrorist groups. Such restrictions would entail copious false positives, along with considerable costs to users and to open Internet speech.

And in the end, some amount of terrorist content would still get through. If the plaintiffs’ theory leads to liability in Taamneh, it’s hard to see how the same principle wouldn’t entail liability even under these theoretical, heightened practices. Absent a focus on an intermediary defendant’s ability to control harmful content or conduct, without imposing excessive costs on legal content or conduct, the theory of liability has no viable limit.

In sum, to hold Twitter (or other social-media platforms) liable under the facts presented in Taamneh would stretch intermediary liability far beyond its sensible bounds. While Twitter may seem a good candidate for intermediary liability in theory, it should not be held liable for, in effect, simply providing its services.

Perhaps Section 230’s blanket immunity is excessive. Perhaps there is a proper standard that could impose liability on online intermediaries for user-generated content in limited circumstances properly tied to their ability to control harmful actors and the costs of doing so. But liability in the circumstances suggested by the Taamneh plaintiffs—effectively amounting to strict liability—would be an even bigger mistake in the opposite direction.

It seems that large language models (LLMs) are all the rage right now, from Bing’s announcement that it plans to integrate the ChatGPT technology into its search engine to Google’s announcement of its own LLM called “Bard” to Meta’s recent introduction of its Large Language Model Meta AI, or “LLaMA.” Each of these LLMs uses artificial intelligence (AI) to create text-based answers to questions.

But it certainly didn’t take long after these innovative new applications were introduced for reports to emerge of LLMs just plain getting facts wrong. Given this, it is worth asking: how will the law deal with AI-created misinformation?

Among the first questions courts will need to grapple with is whether Section 230 immunity applies to content produced by AI. Of course, the U.S. Supreme Court already has a major Section 230 case on its docket with Gonzalez v. Google. Indeed, during oral arguments for that case, Justice Neil Gorsuch appeared to suggest that AI-generated content would not receive Section 230 immunity. And existing case law would appear to support that conclusion, as LLM content is developed by the interactive computer service itself, not by its users.

Another question raised by the technology is what legal avenues would be available to those seeking to challenge the misinformation. Under the First Amendment, the government can only regulate false speech under very limited circumstances. One of those is defamation, which seems like the most logical cause of action to apply. But under defamation law, plaintiffs—especially public figures, who are the most likely litigants and who must prove “actual malice”—may have a difficult time proving the AI acted with the necessary state of mind to sustain a cause of action.

Section 230 Likely Does Not Apply to Information Developed by an LLM

Section 230(c)(1) states:

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

The law defines an interactive computer service as “any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server, including specifically a service or system that provides access to the Internet and such systems operated or services offered by libraries or educational institutions.”

The access software provider portion of that definition includes any tool that can “filter, screen, allow, or disallow content; pick, choose, analyze, or digest content; or transmit, receive, display, forward, cache, search, subset, organize, reorganize, or translate content.”

And finally, an information content provider is “any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service.”

Taken together, Section 230(c)(1) gives online platforms (“interactive computer services”) broad immunity for user-generated content (“information provided by another information content provider”). This even covers circumstances where the online platform (acting as an “access software provider”) engages in a great deal of curation of the user-generated content.

Section 230(c)(1) does not, however, protect information created by the interactive computer service itself.

There is case law to help determine whether content is created or developed by the interactive computer service. Online platforms applying “neutral tools” to help organize information have not lost immunity. As the 9th U.S. Circuit Court of Appeals put it in Fair Housing Council v. Roommates.com:

Providing neutral tools for navigating websites is fully protected by CDA immunity, absent substantial affirmative conduct on the part of the website creator promoting the use of such tools for unlawful purposes.

On the other hand, online platforms are liable for content they create or develop, which does not include “augmenting the content generally,” but does include “materially contributing to its alleged unlawfulness.” 

The question here is whether the text-based answers provided by LLM apps like Bing’s Sydney or Google’s Bard constitute content created or developed by those online platforms. One could argue that LLMs are neutral tools simply rearranging information from other sources for display. It seems clear, however, that the LLM is synthesizing information to create new content. The use of AI to answer a question, rather than a human agent of Google or Microsoft, doesn’t seem relevant to whether or not that content was created or developed by those companies. (Though, as Matt Perault notes, how LLMs are integrated into a product matters. If an LLM just helps “determine which search results to prioritize or which text to highlight from underlying search results,” then it may receive Section 230 protection.)
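
To see, in deliberately simplified form, why LLM output looks more like created content than a neutral rearrangement of third-party material, consider the following toy sketch in Python. The tiny corpus, the function names, and the sampling logic are invented for illustration only; real LLMs rely on neural networks trained on vastly larger datasets. The basic dynamic, however, is the same: the model composes its answer word by word rather than passing along any particular third party’s statement.

```python
import random
from collections import defaultdict

# A deliberately toy "next-word" generator, trained on a tiny made-up
# corpus. This is NOT how production LLMs work internally (they use
# neural networks trained on vastly larger datasets), but it illustrates
# the relevant point: the output is assembled word by word by the model,
# not retrieved verbatim from any single third-party source.
corpus = (
    "the court held that the statute was valid "
    "the court held that the claim was barred "
    "the statute was challenged in the court"
).split()

# Record which words follow each word in the training text.
next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

def generate(seed: str, length: int = 8) -> str:
    """Generate text by repeatedly sampling a plausible next word."""
    words = [seed]
    for _ in range(length):
        candidates = next_words.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))
# Possible output: "the claim was challenged in the court held that"
```

Even in this toy version, the generated sentence typically appears nowhere in the training text, which is the intuition behind treating the output as content “created or developed” by the service itself.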

The technology itself gives text-based answers based on inputs from the questioner. LLMs use AI-trained models to guess the next word based on troves of data from the internet. While the information may come from third parties, the creation of the content itself is due to the LLM. As ChatGPT put it in response to my query here:

Proving Defamation by AI

In the absence of Section 230 immunity, there is still the question of how one could hold Google’s Bard or Microsoft’s Sydney accountable for purveying misinformation. There are no laws against false speech in general, nor can there be, since the Supreme Court declared such speech was protected in United States v. Alvarez. There are, however, categories of false speech, like defamation and fraud, which have been found to lie outside of First Amendment protection.

Defamation is the most logical cause of action that could be brought for false information provided by an LLM app. Notably, however, it is highly unlikely that these LLM apps will know much about people who have not received significant public recognition (believe me, I tried to get ChatGPT to tell me something about myself—alas, I’m not famous enough). On top of that, those most likely to suffer significant damages from having their reputations harmed by falsehoods spread online are those who are in the public eye. This means that, for purposes of a defamation suit, it is public figures who are most likely to sue.

As an example, if ChatGPT answers the question of whether Johnny Depp is a wife-beater by saying that he is, contrary to one court’s finding (but consistent with another’s), Depp could sue the creators of the service for defamation. He would have to prove that a false statement was published to a third party and that it resulted in damages to him. For the sake of argument, let’s say he can do both. The case still would not be made out because, as a public figure, he would also have to prove “actual malice.”

Under New York Times v. Sullivan and its progeny, a public figure must prove the defendant acted with “actual malice” when publicizing false information about the plaintiff. Actual malice is defined as “knowledge that [the statement] was false or with reckless disregard of whether it was false or not.”

The question arises whether actual malice can be attributed to an LLM. It seems unlikely that it could be said that the AI’s creators trained it in a way that they “knew” the answers provided would be false. But it may be a more interesting question whether the LLM is giving answers with “reckless disregard” of their truth or falsity. One could argue that these early versions of the technology are doing exactly that, but the underlying AI is likely to improve over time with feedback. The best time for a plaintiff to sue may be now, when the LLMs are still in their infancy and giving false answers more often.

It is possible that, given enough context in the query, LLM-empowered apps may be able to recognize private figures, and get things wrong. For instance, when I asked ChatGPT to give a biography of myself, I got no results:

When I added my workplace, I did get a biography, but none of the relevant information was about me. It was instead about my boss, Geoffrey Manne, the president of the International Center for Law & Economics:

While none of this biography is true, it doesn’t harm my reputation, nor does it give rise to damages. But it is at least theoretically possible that an LLM could make a defamatory statement against a private person. In such a case, a lower burden of proof would apply to the plaintiff, that of negligence, i.e., that the defendant published a false statement of fact that a reasonable person would have known was false. This burden would be much easier to meet if the AI had not been sufficiently trained before being released upon the public.

Conclusion

While it is unlikely that a service like ChatGPT would receive Section 230 immunity, it also seems unlikely that a plaintiff would be able to sustain a defamation suit against it for false statements. The most likely type of plaintiff (public figures) would encounter difficulty proving the necessary element of “actual malice.” The best chance for a lawsuit to proceed may be against the early versions of this service—rolled out quickly and to much fanfare, while still in a beta stage in terms of accuracy—as a colorable argument can be made that they are giving false answers in “reckless disregard” of their truthfulness.

In our previous post on Gonzalez v. Google LLC, which will come before the U.S. Supreme Court for oral arguments Feb. 21, Kristian Stout and I argued that, while the U.S. Justice Department (DOJ) got the general analysis right (looking to Roommates.com as the framework for exceptions to the general protections of Section 230), they got the application wrong (saying that algorithmic recommendations should be excepted from immunity).

Now, after reading Google’s brief, as well as the briefs of amici on their side, it is even more clear to me that:

  1. algorithmic recommendations are protected by Section 230 immunity; and
  2. creating an exception for such algorithms would severely damage the internet as we know it.

I address these points in reverse order below.

Google on the Death of the Internet Without Algorithms

The central point that Google makes throughout its brief is that a finding that Section 230’s immunity does not extend to the use of algorithmic recommendations would have potentially catastrophic implications for the internet economy. Google and amici for respondents emphasize the ubiquity of recommendation algorithms:

Recommendation algorithms are what make it possible to find the needles in humanity’s largest haystack. The result of these algorithms is unprecedented access to knowledge, from the lifesaving (“how to perform CPR”) to the mundane (“best pizza near me”). Google Search uses algorithms to recommend top search results. YouTube uses algorithms to share everything from cat videos to Heimlich-maneuver tutorials, algebra problem-solving guides, and opera performances. Services from Yelp to Etsy use algorithms to organize millions of user reviews and ratings, fueling global commerce. And individual users “like” and “share” content millions of times every day. – Brief for Respondent Google, LLC at 2.

The “recommendations” they challenge are implicit, based simply on the manner in which YouTube organizes and displays the multitude of third-party content on its site to help users identify content that is of likely interest to them. But it is impossible to operate an online service without “recommending” content in that sense, just as it is impossible to edit an anthology without “recommending” the story that comes first in the volume. Indeed, since the dawn of the internet, virtually every online service—from news, e-commerce, travel, weather, finance, politics, entertainment, cooking, and sports sites, to government, reference, and educational sites, along with search engines—has had to highlight certain content among the thousands or millions of articles, photographs, videos, reviews, or comments it hosts to help users identify what may be most relevant. Given the sheer volume of content on the internet, efforts to organize, rank, and display content in ways that are useful and attractive to users are indispensable. As a result, exposing online services to liability for the “recommendations” inherent in those organizational choices would expose them to liability for third-party content virtually all the time. – Amicus Brief for Meta Platforms at 3-4.

In other words, if Section 230 were limited in the way that the plaintiffs (and the DOJ) seek, internet platforms’ ability to offer users useful information would be strongly attenuated, if not completely impaired. The resulting legal exposure would lead inexorably to far less of the kinds of algorithmic recommendations upon which the modern internet is built.

This is, in part, why we weren’t able to fully endorse the DOJ’s brief in our previous post. The DOJ’s brief simply goes too far. It would be unreasonable to establish as a categorical rule that use of the ubiquitous auto-discovery algorithms that power so much of the internet would strip a platform of Section 230 protection. The general rule advanced by the DOJ’s brief would have detrimental and far-ranging implications.

Amici on Publishing and Section 230(f)(4)

Google and the amici also make a strong case that algorithmic recommendations are inseparable from publishing. They have a strong textual hook in Section 230(f)(4), which explicitly protects “enabling tools that… filter, screen, allow, or disallow content; pick, choose, analyze, or digest content; or transmit, receive, display, forward, cache, search, subset, organize, reorganize, or translate content.”

As the amicus brief from a group of internet-law scholars—including my International Center for Law & Economics colleagues Geoffrey Manne and Gus Hurwitz—put it:

Section 230’s text should decide this case. Section 230(c)(1) immunizes the user or provider of an “interactive computer service” from being “treated as the publisher or speaker” of information “provided by another information content provider.” And, as Section 230(f)’s definitions make clear, Congress understood the term “interactive computer service” to include services that “filter,” “screen,” “pick, choose, analyze,” “display, search, subset, organize,” or “reorganize” third-party content. Automated recommendations perform exactly those functions, and are therefore within the express scope of Section 230’s text. – Amicus Brief of Internet Law Scholars at 3-4.

In other words, Section 230 protects not just the conveyance of information, but how that information is displayed. Algorithmic recommendations are a subset of those display tools that allow users to find what they are looking for with ease. Section 230 can’t be reasonably read to exclude them.

Why This Isn’t Really (Just) a Roommates.com Case

This is where the DOJ’s amicus brief (and our previous analysis) misses the point. This is not strictly a Roommates.com case. The case actually turns on whether algorithmic recommendations are separable from the publication of third-party content, rather than on whether they are design choices akin to what was occurring in that case.

For instance, in our previous post, we argued that:

[T]he DOJ argument then moves onto thinner ice. The DOJ believes that the 230 liability shield in Gonzalez depends on whether an automated “recommendation” rises to the level of development or creation, as the design of filtering criteria in Roommates.com did.

While we thought the DOJ went too far in differentiating algorithmic recommendations from other uses of algorithms, we gave them too much credit in applying the Roommates.com analysis. Section 230 was meant to immunize filtering tools, so long as the information provided is from third parties. Algorithmic recommendations—like the type at issue with YouTube’s “Up Next” feature—are less like the conduct in Roommates.com and much more like a search engine.

The DOJ did, however, have a point regarding algorithmic tools in that they may—like any other tool a platform might use—be employed in a way that transforms the automated promotion into a direct endorsement or original publication. For instance, it’s possible to use algorithms to intentionally amplify certain kinds of content in such a way as to cultivate more of that content.

That’s, after all, what was at the heart of Roommates.com. The site was designed to elicit responses from users that violated the law. Algorithms can do that, but as we observed previously, and as the many amici in Gonzalez observe, there is nothing inherent to the operation of algorithms that match users with content that makes their use categorically incompatible with Section 230’s protections.

Conclusion

After looking at the textual and policy arguments forwarded by both sides in Gonzalez, it appears that Google and amici for respondents have the better of it. As several amici argued, to the extent there are good reasons to reform Section 230, Congress should take the lead. The Supreme Court shouldn’t take this case as an opportunity to significantly change the consensus of the appellate courts on the broad protections of Section 230 immunity.

In Fleites v. MindGeek—currently before the U.S. District Court for the Central District of California, Southern Division—plaintiffs seek to hold MindGeek subsidiary PornHub liable for alleged instances of human trafficking under the Racketeer Influenced and Corrupt Organizations Act (RICO) and the Trafficking Victims Protection Reauthorization Act (TVPRA). Writing for the International Center for Law & Economics (ICLE), we have filed a motion for leave to submit an amicus brief regarding whether it is valid to treat co-defendant Visa Inc. as a proper party under principles of collateral liability.

The proposed brief draws on our previous work on the law & economics of collateral liability, and argues that holding Visa liable as a participant under RICO or TVPRA would amount to stretching collateral liability far beyond what is reasonable. Such a move, we posit, would “generate a massive amount of social cost that would outweigh the potential deterrent or compensatory gains sought.”

Collateral liability can make sense when intermediaries are in a position to effectively monitor and control potential harms. That is, it can be appropriate to apply collateral liability to parties who are what is often referred to as the “least-cost avoider.” As we write:

In some circumstances it is indeed proper to hold third parties liable even though they are not primary actors directly implicated in wrongdoing. Most significantly, such liability may be appropriate when a collateral actor stands in a relationship to the wrongdoing (or wrongdoers or victims) such that the threat of liability can incentivize it to take action (or refrain from taking action) to prevent or mitigate the wrongdoing. That is to say, collateral liability may be appropriate when the third party has a significant enough degree of control over the primary actors such that its actions can cause them to reduce the risk of harm at reasonable cost. Importantly, however, such liability is appropriate only when direct deterrence is insufficient and/or the third party can prevent harm at lower cost or more effectively than direct enforcement… From an economic perspective, liability should be imposed upon the party or parties best positioned to deter the harms in question, such that the costs of enforcement do not exceed the social gains realized.

The law of negligence under the common law, as well as contributory infringement under copyright law, both help illustrate this principle. Under the common law, collateral actors have a duty in only limited circumstances, when the harms are “reasonably foreseeable” and the actor has special access to particularized information about the victims or the perpetrators, as well as a special ability to control harmful conditions. Under copyright law, collateral liability is similarly limited to circumstances where collateral actors are best positioned to prevent the harm, and the benefits of holding such actors liable exceed the harms. 

Neither of these conditions is true in Fleites v. MindGeek: Visa is not the type of collateral actor that has any access to specialized information or the ability to control actual bad actors. Visa, as a card-payment network, simply processes payments. The only tool at Visa’s disposal is a giant sledgehammer: it can foreclose all transactions to particular sites that run over its network. There is no dispute that the vast majority of content hosted on sites like MindGeek is lawful, however awful one may believe pornography to be. Holding card networks liable here would create incentives to avoid processing payments for such sites altogether in order to avoid legal consequences.

The potential costs of the theory of liability asserted here stretch far beyond Visa or this particular case. The plaintiffs’ theory would hold anyone liable who provides services that “allow[] the alleged principal actors to continue to do business.” This would mean that Federal Express, for example, would be liable for continuing to deliver packages to MindGeek’s address or that a waste-management company could be liable for providing custodial services to the building where MindGeek has an office. 

According to the plaintiffs, even the mere existence of a newspaper article alleging a company is doing something illegal is sufficient to find that professionals who have provided services to that company “participate” in a conspiracy. This would have ripple effects for professionals from many other industries—from accountants to bankers to insurance—who all would see significantly increased risk of liability.

To read the rest of the brief, see here.

In recent years, a diverse cross-section of advocates and politicians have leveled criticisms at Section 230 of the Communications Decency Act and its grant of legal immunity to interactive computer services. Proposed legislative changes to the law have been put forward by both Republicans and Democrats.

It remains unclear whether Congress (or the courts) will amend Section 230, but any changes are bound to expand the scope, uncertainty, and expense of content risks. That’s why it’s important that such changes be developed and implemented in ways that minimize their potential to significantly disrupt and harm online activity. This piece focuses on those insurable content risks that most frequently result in litigation and considers the effect of the direct and indirect costs caused by frivolous suits and lawfare, not just the ultimate potential for a court to find liability. The experience of the 1980s asbestos-litigation crisis offers a warning of what could go wrong.

Enacted in 1996, Section 230 was intended to promote the Internet as a diverse medium for discourse, cultural development, and intellectual activity by shielding interactive computer services from legal liability when blocking or filtering access to obscene, harassing, or otherwise objectionable content. Absent such immunity, a platform hosting content produced by third parties could be held equally responsible as the creator for claims alleging defamation or invasion of privacy.

In the current legislative debates, Section 230’s critics on the left argue that the law does not go far enough to combat hate speech and misinformation. Critics on the right claim the law protects censorship of dissenting opinions. Legal challenges to the current wording of Section 230 arise primarily from what constitutes an “interactive computer service,” “good faith” restriction of content, and the grant of legal immunity, regardless of whether the restricted material is constitutionally protected. 

While Congress and various stakeholders debate alternative statutory frameworks, several test cases have simultaneously been working their way through the judicial system, and some states have either passed or are considering legislation to address complaints with Section 230. Some have suggested passing new federal legislation classifying online platforms as common carriers as an alternate approach that does not involve amending or repealing Section 230. Regardless of the form it may take, change to the status quo is likely to increase the risk of litigation and liability for those hosting or publishing third-party content.

The Nature of Content Risk

The class of individuals and organizations exposed to content risk has never been broader. Any information, content, or communication that is created, gathered, compiled, or amended can be considered “material” which, when disseminated to third parties, may be deemed “publishing.” Liability can arise from any step in that process. Those who republish material are generally held to the same standard of liability as if they were the original publisher. (See, e.g., Rest. (2d) of Torts § 578 with respect to defamation.)

Digitization has simultaneously reduced the cost and expertise required to publish material and increased the potential reach of that material. Where it was once limited to books, newspapers, and periodicals, “publishing” now encompasses such activities as creating and updating a website; creating a podcast or blog post; or even posting to social media. Much of this activity is performed by individuals and businesses who have only limited experience with the legal risks associated with publishing.

This is especially true regarding the use of third-party material, which is used extensively by both sophisticated and unsophisticated platforms. Platforms that host third-party-generated content—e.g., social media or websites with comment sections—have historically engaged in only limited vetting of that content, although this is changing. When combined with the potential to reach consumers far beyond the original platform and target audience, the lasting digital traces that are difficult to identify and remove, and the need to comply with privacy and other statutory requirements, the potential for all manner of “publishers” to incur legal liability has never been higher.

Even sophisticated legacy publishers struggle with managing the litigation that arises from these risks. There are a limited number of specialist counsel, which results in higher hourly rates. Oversight of legal bills is not always effective, as internal counsel often have limited resources to manage their daily responsibilities and litigation. As a result, legal fees often make up as much as two-thirds of the average claims cost. Accordingly, defense spending and litigation management are indirect, but important, risks associated with content claims.

Effective risk management is any publisher’s first line of defense. The type and complexity of content risk management varies significantly by organization, based on its size, resources, activities, risk appetite, and sophistication. Traditional publishers typically have a formal set of editorial guidelines specifying policies governing the creation of content, pre-publication review, editorial-approval authority, and referral to internal and external legal counsel. They often maintain a library of standardized contracts, a process to periodically review and update those wordings, and a process to verify the validity of a potential licensor’s rights. Most have formal controls to respond to complaints and to retraction/takedown requests.

Insuring Content Risks

Insurance is integral to most publishers’ risk-management plans. Content coverage is present, to some degree, in most general liability policies (i.e., for “advertising liability”). Specialized coverage—commonly referred to as “media” or “media E&O”—is available on a standalone basis or may be packaged with cyber-liability coverage. Terms of specialized coverage can vary significantly, but generally provide at least basic coverage for the three primary content risks of defamation, copyright infringement, and invasion of privacy.

Insureds typically retain the first dollar loss up to a specific dollar threshold. They may also retain a coinsurance percentage of every dollar thereafter in partnership with their insurer. For example, an insured may be responsible for the first $25,000 of loss, and for 10% of loss above that threshold. Such coinsurance structures often are used by insurers as a non-monetary tool to help control legal spending and to incentivize an organization to employ effective oversight of counsel’s billing practices.
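
To make the arithmetic concrete, here is a minimal sketch in Python of how such a retention-plus-coinsurance structure might allocate a single loss, using the hypothetical $25,000 retention and 10% coinsurance figures from the example above. The function name and the simplifications (no policy limit, no separate treatment of defense costs) are my own illustrative assumptions, not terms of any actual policy.

```python
def allocate_loss(loss: float, retention: float = 25_000, coinsurance: float = 0.10):
    """Split a loss between insured and insurer under a simple
    retention-plus-coinsurance structure (illustrative only).

    The insured pays the first `retention` dollars of loss, plus
    `coinsurance` percent of every dollar above that threshold;
    the insurer pays the remainder. Policy limits and other
    real-world features are ignored here.
    """
    retained = min(loss, retention)
    excess = max(loss - retention, 0)
    insured_share = retained + coinsurance * excess
    insurer_share = (1 - coinsurance) * excess
    return insured_share, insurer_share

# A hypothetical $100,000 claim under the $25,000 / 10% structure:
# the insured bears $32,500 and the insurer pays $67,500.
print(allocate_loss(100_000))
```

Raising the retention to $50,000 or the coinsurance to 15%, as in the example in the next paragraph, shifts more of each loss onto the insured, which is why such changes generally reduce premiums.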

The type and amount of loss retained will depend on the insured’s size, resources, risk profile, risk appetite, and insurance budget. Generally, but not always, increases in an insured’s retention or an insurer’s attachment (e.g., raising the threshold to $50,000, or raising the insured’s coinsurance to 15%) will result in lower premiums. Most insureds will seek the smallest retention feasible within their budget. 

Contract limits (the maximum coverage payout available) will vary based on the same factors. Larger policyholders often build a “tower” of insurance made up of multiple layers of the same or similar coverage issued by different insurers. Two or more insurers may partner on the same “quota share” layer and split any loss incurred within that layer on a pre-agreed proportional basis.  
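
Along the same lines, here is a hedged sketch of how a layered “tower” with a quota-share layer might apportion a large loss. The attachment points, limits, and share percentages below are invented purely for illustration; real programs are negotiated and considerably more varied.

```python
def apportion_to_tower(loss: float, layers):
    """Walk a loss up a tower of insurance layers (illustrative only).

    Each layer is (attachment, limit, {insurer: share}): the layer pays
    the slice of the loss between its attachment point and
    attachment + limit, split among its quota-share participants.
    """
    payments = {}
    for attachment, limit, shares in layers:
        layer_loss = min(max(loss - attachment, 0), limit)
        for insurer, share in shares.items():
            payments[insurer] = payments.get(insurer, 0) + layer_loss * share
    return payments

# Hypothetical $15M tower: a $5M primary layer, then a $10M excess layer
# written 60/40 on a quota-share basis by two insurers.
tower = [
    (0, 5_000_000, {"Primary Insurer": 1.0}),
    (5_000_000, 10_000_000, {"Excess Insurer A": 0.6, "Excess Insurer B": 0.4}),
]

# A $12M loss: the primary insurer pays $5M, and the excess insurers
# split the remaining $7M as $4.2M and $2.8M.
print(apportion_to_tower(12_000_000, tower))
```

A gap between where one layer exhausts and where the next attaches would leave the policyholder self-insured for that slice, which is the coverage-gap risk discussed in the next paragraph.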

Navigating the strategic choices involved in developing an insurance program can be complex, depending on an organization’s risks. Policyholders often use commercial brokers to aid them in developing an appropriate risk-management and insurance strategy that maximizes coverage within their budget and to assist with claims recoveries. This is particularly important for small and mid-sized insureds, who may lack the sophistication or budget of larger organizations. Policyholders and brokers try to minimize the gaps in coverage between layers and among quota-share participants, but such gaps can occur, leaving a policyholder partially self-insured.

An organization’s options to insure its content risk may also be influenced by the dynamics of the overall insurance market or within specific content lines. Underwriters are not all created equal; underwriting is a challenging responsibility that requires a degree of prediction, and some underwriters may fail to adequately identify and account for certain risks. It can also be challenging to accurately measure risk aggregation and set appropriate reserves. An insurer’s appetite for certain lines and the availability of supporting reinsurance can fluctuate based on trends in the general capital markets. Specialty media/content coverage is a small niche within the global commercial insurance market, which makes insurers in this line more sensitive to these general trends.

Litigation Risks from Changes to Section 230

A full repeal or judicial invalidation of Section 230 generally would make every platform responsible for all the content it disseminates, regardless of who created the material, requiring at least some additional editorial review. This would significantly disadvantage those platforms that host a significant volume of third-party content. Internet service providers, cable companies, social media, and product/service review companies would be put under tremendous strain, given the daily volume of content produced. To reduce the risk that they serve as a “deep pocket” target for plaintiffs, they would likely adopt more robust pre-publication screening of content and authorized third parties; limit public interfaces; require registration before a user may publish content; employ more reactive complaint-response/takedown policies; and ban problem users more frequently. Small and mid-sized enterprises (SMEs), as well as those not focused primarily on the business of publishing, would likely avoid many interactive functions altogether.

A full repeal would be, in many ways, a blunderbuss approach to dealing with criticisms of Section 230, and would cause as many problems as it solves, if not more. In the current polarized environment, it also appears unlikely that Congress will reach bipartisan agreement on amended language for Section 230, or on classifying interactive computer services as common carriers, given that the changes desired by the political left and right are so divergent. What may be more likely is that courts encounter a test case that prompts them to clarify the application of the existing statutory language—i.e., whether an entity was acting as a neutral platform or a content creator, whether its conduct was in “good faith,” and whether the material is “objectionable” within the meaning of the statute.

A relatively greater frequency of litigation is almost inevitable in the wake of any changes to the status quo, whether made by Congress or the courts. Major litigation would likely focus on those social-media platforms at the center of the Section 230 controversy, such as Facebook and Twitter, given their active role in these issues, deep pockets and, potentially, various admissions against interest helpful to plaintiffs regarding their level of editorial judgment. SMEs could also be affected in the immediate wake of a change to the statute or its interpretation. While SMEs are likely to be implicated on a smaller scale, the impact of litigation could be even more damaging to their viability if they are not adequately insured.

Over time, the boundaries of an amended Section 230’s application and any consequential effects should become clearer as courts develop application criteria and precedent is established for different fact patterns. Exposed platforms will likely make changes to their activities and risk-management strategies consistent with such developments. Operationally, some interactive features—such as comment sections or product and service reviews—may become less common.

In the short and medium term, however, a period of increased and unforeseen litigation to resolve these issues is likely to prove expensive and damaging. Insurers of content risks are likely to bear the brunt of any changes to Section 230, because these risks and their financial costs would be new, uncertain, and not incorporated into historical pricing of content risk. 

Remembering the Asbestos Crisis

The introduction of a new exposure or legal risk can have significant financial effects on commercial insurance carriers. New and revised risks must be accounted for in the assumptions, probabilities, and load factors used in insurance pricing and reserving models. Even small changes in those values can have large aggregate effects, which may undermine confidence in those models, complicate obtaining reinsurance, or harm an insurer’s overall financial health.

For example, in the 1980s, certain courts adopted the triple-trigger and continuous trigger methods[1] of determining when a policyholder could access coverage under an “occurrence” policy for asbestos claims. As a result, insurers paid claims under policies dating back to the early 1900s and, in some cases, under all policies from that date until the date of the claim. Such policies were written before the link between asbestos and mesothelioma was known, and that risk was not incorporated into the policy pricing.

Insurers had long since released reserves from the decades-old policy years, so those resources were not available to pay claims. Nor could underwriters retroactively increase premiums for the intervening years and smooth out the cost of these claims. This created extreme financial stress for impacted insurers and reinsurers, with some ultimately rendered insolvent. Surviving carriers responded by drastically reducing coverage and increasing prices, which resulted in a major capacity shortage that resolved only after the creation of the Bermuda insurance and reinsurance market.

The asbestos-related liability crisis represented a perfect storm that is unlikely to be replicated. Given the ubiquitous nature of digital content, however, any drastic or misconceived changes to Section 230 protections could still cause significant disruption to the commercial insurance market. 

Content risk is covered, at least in part, by general liability and many cyber policies, but it is not currently a primary focus for underwriters. Specialty media underwriters are more likely to be monitoring Section 230 risk, but the highly competitive market will make it difficult for them to respond to any changes with significant price increases. In addition, the current market environment for U.S. property and casualty insurance generally is in the midst of correcting for years of inadequate pricing, expanding coverage, developing exposures, and claims inflation. It would be extremely difficult to charge an adequate premium increase if the potential severity of content risk were to increase suddenly.

In the face of such risk uncertainty and challenges to adequately increasing premiums, underwriters would likely seek to reduce their exposure to online content risks, i.e., by reducing the scope of coverage, reducing limits, and increasing retentions. How these changes would play out, and how much pain they would cause for all involved, would likely depend on how quickly policyholders’ risk profiles change.

Small or specialty carriers caught unprepared could be forced to exit the market if they experienced a sharp spike in claims or unexpected increase in needed reserves. Larger, multiline carriers may respond by voluntarily reducing or withdrawing their participation in this space. Insurers exposed to ancillary content risk may simply exclude it from cover if adequate price increases are impractical. Such reactions could result in content coverage becoming harder to obtain or unavailable altogether. This, in turn, would incentivize organizations to limit or avoid certain digital activities.

Finding a More Thoughtful Approach

The tension between calls for reform of Section 230 and the potential for disrupting online activity does not mean that political leaders and courts should ignore these issues. Rather, it means that what’s required is a thoughtful, clear, and predictable approach to any changes, with the goal of maximizing the clarity of the changes and their application and minimizing any resulting litigation. Regardless of whether it is accomplished through legislation or the judicial process, addressing the following issues could minimize the duration and severity of any period of harmful disruption surrounding content risk:

  1. Presumptive immunity – Including an express statement in the definition of “interactive computer service,” or inferring one judicially, to clarify that platforms hosting third-party content enjoy a rebuttable presumption that statutory immunity applies would discourage frivolous litigation as courts establish precedent defining the applicability of any other revisions.
  2. Specify the grounds for losing immunity – Clarify, at a minimum, what constitutes “good faith” with respect to content restrictions and further clarify what material is or is not “objectionable,” as it relates to newsworthy content or actions that trigger loss of immunity.
  3. Specify the scope and duration of any loss of immunity – Clarify whether the loss of immunity is total, categorical, or specific to the situation under review and the duration of that loss of immunity, if applicable.
  4. Reinstatement of immunity, subject to burden-shifting – Clarify what a platform must do to reinstate statutory immunity on a go-forward basis and clarify that it bears the burden of proving its go-forward conduct entitles it to statutory protection.
  5. Address associated issues – Any clarification or interpretation should address other issues likely to arise, such as the effect and weight to be given to a platform’s application of its community standards, adherence to neutral takedown/complaint procedures, etc. Care should be taken to avoid overcorrecting and creating a “heckler’s veto.”
  6. Deferred effect – If change is made legislatively, the effective date should be deferred for a reasonable time to allow platforms sufficient opportunity to adjust their current risk-management policies, contractual arrangements, content publishing and storage practices, and insurance arrangements in a thoughtful, orderly fashion that accounts for the new rules.

Ultimately, legislative and judicial stakeholders will chart their own course to address the widespread dissatisfaction with Section 230. More important than any of these specific policy suggestions is the principle that underpins them: that any changes incorporate due consideration for the potential direct and downstream harm that can be caused if policy is not clear, comprehensive, and designed to minimize unnecessary litigation.

It is no surprise that, in the years since Section 230 of the Communications Decency Act was passed, the environment and risks associated with digital platforms have evolved, or that those changes have created a certain amount of friction in the law’s application. Policymakers should employ a holistic approach when evaluating their legislative and judicial options to revise or clarify the application of Section 230. Doing so in a targeted, predictable fashion should help to mitigate or avoid the risk of increased litigation and other unintended consequences that might otherwise prove harmful to online platforms and the commercial insurance market.

Aaron Tilley is a senior insurance executive with more than 16 years of commercial insurance experience in executive management, underwriting, legal, and claims, working in or with the U.S., Bermuda, and London markets. He has served as chief underwriting officer of a specialty media E&O and cyber-liability insurer and as coverage counsel representing international insurers with respect to a variety of E&O and advertising liability claims.


[1] The triple-trigger method allowed a policy to be accessed based on the date of the injury-in-fact, manifestation of injury, or exposure to substances known to cause injury. The continuous trigger allowed all policies issued by an insurer, not just one, to be accessed if a triggering event could be established during the policy period.

In what has become regularly scheduled programming on Capitol Hill, Facebook CEO Mark Zuckerberg, Twitter CEO Jack Dorsey, and Google CEO Sundar Pichai will be subject to yet another round of congressional grilling—this time, about the platforms’ content-moderation policies—during a March 25 joint hearing of two subcommittees of the House Energy and Commerce Committee.

The stated purpose of this latest bit of political theatre is to explore, as made explicit in the hearing’s title, “social media’s role in promoting extremism and misinformation.” Specific topics are expected to include proposed changes to Section 230 of the Communications Decency Act, heightened scrutiny by the Federal Trade Commission, and misinformation about COVID-19—the subject of new legislation introduced by Rep. Jennifer Wexton (D-Va.) and Sen. Mazie Hirono (D-Hawaii).

But while many in the Democratic majority argue that social media companies have not done enough to moderate misinformation or hate speech, it is a problem with no realistic legal fix. Any attempt to mandate removal of speech on grounds that it is misinformation or hate speech, either directly or indirectly, would run afoul of the First Amendment.

Much of the recent focus has been on misinformation spread on social media about the 2020 election and the COVID-19 pandemic. The memorandum for the March 25 hearing sums it up:

Facebook, Google, and Twitter have long come under fire for their role in the dissemination and amplification of misinformation and extremist content. For instance, since the beginning of the coronavirus disease of 2019 (COVID-19) pandemic, all three platforms have spread substantial amounts of misinformation about COVID-19. At the outset of the COVID-19 pandemic, disinformation regarding the severity of the virus and the effectiveness of alleged cures for COVID-19 was widespread. More recently, COVID-19 disinformation has misrepresented the safety and efficacy of COVID-19 vaccines.

Facebook, Google, and Twitter have also been distributors for years of election disinformation that appeared to be intended either to improperly influence or undermine the outcomes of free and fair elections. During the November 2016 election, social media platforms were used by foreign governments to disseminate information to manipulate public opinion. This trend continued during and after the November 2020 election, often fomented by domestic actors, with rampant disinformation about voter fraud, defective voting machines, and premature declarations of victory.

It is true that, despite social media companies’ efforts to label and remove false content and bar some of the biggest purveyors, there remains a considerable volume of false information on social media. But U.S. Supreme Court precedent consistently has limited government regulation of false speech to distinct categories like defamation, perjury, and fraud.

The Case of Stolen Valor

The court’s 2012 decision in United States v. Alvarez struck down as unconstitutional the Stolen Valor Act of 2005, which made it a federal crime to falsely claim to have earned a military medal. A four-justice plurality opinion written by Justice Anthony Kennedy and a two-justice concurrence both agreed that a statement being false did not, by itself, exclude it from First Amendment protection.

Kennedy’s opinion noted that while the government may impose penalties for false speech connected with the legal process (perjury or impersonating a government official); with receiving a benefit (fraud); or with harming someone’s reputation (defamation); the First Amendment does not sanction penalties for false speech, in and of itself. The plurality exhibited particular skepticism toward the notion that government actors could be entrusted as a “Ministry of Truth,” empowered to determine what categories of false speech should be made illegal:

Permitting the government to decree this speech to be a criminal offense, whether shouted from the rooftops or made in a barely audible whisper, would endorse government authority to compile a list of subjects about which false statements are punishable. That governmental power has no clear limiting principle. Our constitutional tradition stands against the idea that we need Oceania’s Ministry of Truth… Were this law to be sustained, there could be an endless list of subjects the National Government or the States could single out… Were the Court to hold that the interest in truthful discourse alone is sufficient to sustain a ban on speech, absent any evidence that the speech was used to gain a material advantage, it would give government a broad censorial power unprecedented in this Court’s cases or in our constitutional tradition. The mere potential for the exercise of that power casts a chill, a chill the First Amendment cannot permit if free speech, thought, and discourse are to remain a foundation of our freedom. [EMPHASIS ADDED]

As noted in the opinion, declaring false speech illegal constitutes a content-based restriction subject to “exacting scrutiny.” Applying that standard, the court found “the link between the Government’s interest in protecting the integrity of the military honors system and the Act’s restriction on the false claims of liars like respondent has not been shown.” 

While finding that the government “has not shown, and cannot show, why counterspeech would not suffice to achieve its interest,” the plurality suggested a more narrowly tailored solution could be simply to publish Medal of Honor recipients in an online database. In other words, the government could overcome the problem of false speech by promoting true speech. 

In 2013, President Barack Obama signed an updated version of the Stolen Valor Act that limited its penalties to situations where a misrepresentation is shown to result in receipt of some kind of benefit. That places the false speech in the category of fraud, consistent with the Alvarez opinion.

A Social Media Ministry of Truth

Applying the Alvarez standard to social media, the government could (and already does) promote its interest in public health or election integrity by publishing true speech through official channels. But there is little reason to believe the government at any level could regulate access to misinformation. Anything approaching an outright ban on accessing speech deemed false by the government would not only fail to be the most narrowly tailored way to deal with such speech, but would also be bound to have chilling effects even on true speech.

The analysis doesn’t change if the government instead places Big Tech itself in the position of Ministry of Truth. Some propose making changes to Section 230, which currently immunizes social media companies from liability for user speech (with limited exceptions), regardless of what moderation policies the platform adopts. A hypothetical change might condition Section 230’s liability shield on platforms agreeing to moderate certain categories of misinformation. But that would still place the government in the position of coercing platforms to take down speech.

Even the “fix” of making social media companies liable for user speech they amplify through promotions on the platform, as proposed by Sen. Mark Warner’s (D-Va.) SAFE TECH Act, runs into First Amendment concerns. The aim of the bill is to regard sponsored content as constituting speech made by the platform, thus opening the platform to liability for the underlying misinformation. But any such liability also would be limited to categories of speech that fall outside First Amendment protection, like fraud or defamation. This would not appear to include most of the types of misinformation on COVID-19 or election security that animate the current legislative push.

There is no way for the government to regulate misinformation, in and of itself, consistent with the First Amendment. Big Tech companies are free to develop their own policies against misinformation, but the government may not force them to do so. 

Extremely Limited Room to Regulate Extremism

The Big Tech CEOs are also almost certain to be grilled about the use of social media to spread “hate speech” or “extremist content.” The memorandum for the March 25 hearing sums it up like this:

Facebook executives were repeatedly warned that extremist content was thriving on their platform, and that Facebook’s own algorithms and recommendation tools were responsible for the appeal of extremist groups and divisive content. Similarly, since 2015, videos from extremists have proliferated on YouTube; and YouTube’s algorithm often guides users from more innocuous or alternative content to more fringe channels and videos. Twitter has been criticized for being slow to stop white nationalists from organizing, fundraising, recruiting and spreading propaganda on Twitter.

Social media has often played host to racist, sexist, and other types of vile speech. While social media companies have community standards and other policies that restrict “hate speech” in some circumstances, there is demand from some public officials that they do more. But under a First Amendment analysis, regulating hate speech on social media would fare no better than the regulation of misinformation.

The First Amendment doesn’t allow for the regulation of “hate speech” as its own distinct category. Hate speech is, in fact, as protected as any other type of speech. There are some limited exceptions, as the First Amendment does not protect incitement, true threats of violence, or “fighting words.” Some of these flatly do not apply in the online context. “Fighting words,” for instance, applies only in face-to-face situations to “those personally abusive epithets which, when addressed to the ordinary citizen, are, as a matter of common knowledge, inherently likely to provoke violent reaction.”

One relevant precedent is the court’s 1992 decision in R.A.V. v. St. Paul, which considered a local ordinance in St. Paul, Minnesota, prohibiting public expressions that served to cause “outrage, alarm, or anger with respect to racial, gender or religious intolerance.” A juvenile was charged with violating the ordinance when he created a makeshift cross and lit it on fire in front of a black family’s home. The court unanimously struck down the ordinance as a violation of the First Amendment, finding it an impermissible content-based restraint that was not limited to incitement or true threats.

By contrast, in 2003’s Virginia v. Black, the Supreme Court upheld a Virginia law outlawing cross burnings done with the intent to intimidate. The court’s opinion distinguished R.A.V. on grounds that the Virginia statute didn’t single out speech regarding disfavored topics. Instead, it was aimed at speech that had the intent to intimidate regardless of the victim’s race, gender, religion, or other characteristic. But the court was careful to limit government regulation of hate speech to instances that involve true threats or incitement.

When it comes to incitement, the legal standard was set by the court’s landmark Brandenburg v. Ohio decision in 1969, which laid out that:

the constitutional guarantees of free speech and free press do not permit a State to forbid or proscribe advocacy of the use of force or of law violation except where such advocacy is directed to inciting or producing imminent lawless action and is likely to incite or produce such action. [EMPHASIS ADDED]

In other words, while “hate speech” is protected by the First Amendment, specific types of speech that convey true threats or fit under the related doctrine of incitement are not. The government may regulate those types of speech. And it does. In fact, social media users can be, and often are, charged with crimes for threats made online. But the government can’t issue a per se ban on hate speech or “extremist content.”

Just as with misinformation, the government also can’t condition Section 230 immunity on platforms removing hate speech. Insofar as speech is protected under the First Amendment, the government can’t specifically condition a government benefit on its removal. Even the SAFE TECH Act’s model for holding platforms accountable for amplifying hate speech or extremist content would have to be limited to speech that amounts to true threats or incitement. This is a far narrower category of hateful speech than the examples that concern legislators. 

Social media companies do remain free under the law to moderate hateful content as they see fit under their terms of service. Section 230 immunity is not dependent on whether companies do or don’t moderate such content, or on how they define hate speech. But government efforts to step in and define hate speech would likely run into First Amendment problems unless they stay focused on unprotected threats and incitement.

What Can the Government Do?

One may fairly ask what it is that governments can do to combat misinformation and hate speech online. The answer may be a law that requires takedowns, by court order, of speech that has been declared illegal, as proposed by the PACT Act, sponsored in the last session by Sens. Brian Schatz (D-Hawaii) and John Thune (R-S.D.). Such speech may, in some circumstances, include misinformation or hate speech.

But as outlined above, the misinformation that the government can regulate is limited to situations like fraud or defamation, while the hate speech it can regulate is limited to true threats and incitement. A narrowly tailored law that looked to address those specific categories may or may not be a good idea, but it would likely survive First Amendment scrutiny, and may even prove a productive line of discussion with the tech CEOs.

As the initial shock of the COVID quarantine wanes, the Techlash waxes again, bringing with it a raft of renewed legislative proposals to take on Big Tech. Prominent among these is the EARN IT Act (the Act), a bipartisan proposal to create a new national commission responsible for proposing best practices designed to mitigate the proliferation of child sexual abuse material (CSAM) online. The Act’s proposal is seemingly simple, but its fallout would be anything but.

Section 230 of the Communications Decency Act currently provides online services like Facebook and Google with a robust protection from liability that could arise as a result of the behavior of their users. Under the Act, this liability immunity would be conditioned on compliance with “best practices” that are produced by the new commission and adopted by Congress.  

Supporters of the Act believe that the best practices are necessary to ensure that platform companies effectively police CSAM, while critics assert that the Act is merely a backdoor for law enforcement to achieve its long-sought goal of defeating strong encryption.

The truth of EARN IT—and how best to police CSAM—is more complicated. Ultimately, Congress needs to be very careful not to exceed its institutional capabilities by allowing the new commission to venture into areas beyond its (and Congress’s) expertise.

More can be done about illegal conduct online

On its face, conditioning Section 230’s liability protections on certain platform conduct is not necessarily objectionable. There is undoubtedly some abuse of services online, and it is also entirely possible that the incentives for finding and policing CSAM are not perfectly aligned with other conflicting incentives private actors face. It is, of course, first the responsibility of the government to prevent crime, but it is also consistent with past practice to expect private actors to assist such policing when feasible. 

By the same token, an immunity shield is necessary in some form to facilitate user-generated communications and content at scale. Certainly in 1996 (when Section 230 was enacted), firms facing conflicting liability standards required some degree of immunity in order to launch their services. Today, the control of runaway liability remains important as billions of user interactions take place on platforms daily. Relatedly, the liability shield also operates as a way to promote Good Samaritan self-policing, a measure that surely helps avoid actual censorship by governments, as opposed to the spurious claims of censorship made by those like Senator Hawley.

In this context, the Act is ambiguous. It creates a commission composed of a fairly wide cross-section of interested parties—from law enforcement, to victims, to platforms, to legal and technical experts—to recommend best practices. That hardly seems a bad thing, as more minds considering how to design a uniform approach to controlling CSAM would be beneficial—at least theoretically.

In practice, however, there are real pitfalls to imbuing any group of such thinkers—especially ones selected by political actors—with an actual or de facto final say over such practices. Much of this domain will continue to be mercurial, the rules necessary for one type of platform may not translate well into general principles, and it is possible that a public board will make recommendations that quickly tax Congress’s institutional limits. To the extent possible, Congress should be looking at ways to encourage private firms to work together to develop best practices in light of their unique knowledge about their products and their businesses. 

In fact, Facebook has already begun experimenting with an analogous idea in its recently announced Oversight Board. There, Facebook is developing a governance structure by giving the Oversight Board the ability to review content moderation decisions on the Facebook platform. 

So far as the commission created by the Act works to create best practices that align the incentives of firms with the removal of CSAM, it has a lot to offer. Yet, a better solution than the Act would be for Congress to establish policy that works with the private processes already in development.

Short of a more ideal solution, it is critical, however, that the Act establish the boundaries of the commission’s remit very clearly and keep it from venturing into technical areas outside of its expertise. 

The complicated problem of encryption (and technology)

The Act has a major problem insofar as the commission has a fairly open-ended remit to recommend best practices, and this latitude could ultimately result in dangerous unintended consequences.

The Act only calls for two out of nineteen members to have some form of computer science background. A panel of non-technical experts should not design any technology—encryption or otherwise. 

To be sure, there are some interesting proposals to facilitate access to encrypted materials (notably, multi-key escrow systems and self-escrow). But such recommendations are beyond the scope of what the commission can responsibly proffer.
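To give a concrete (and deliberately toy) flavor of what “multi-key” escrow means, the sketch below splits a symmetric content key into two shares, neither of which reveals anything on its own; recovering the key requires combining both. This is purely illustrative Python under my own assumptions – it is not a scheme described in the Act or in any of these proposals, and the function names are hypothetical.

```python
import secrets


def split_key(key: bytes) -> tuple[bytes, bytes]:
    """Split a symmetric key into two XOR shares; neither share alone
    reveals anything about the key, and both are needed to recover it."""
    share_a = secrets.token_bytes(len(key))                # random pad, held by one party
    share_b = bytes(k ^ a for k, a in zip(key, share_a))   # complement, held by the other
    return share_a, share_b


def recover_key(share_a: bytes, share_b: bytes) -> bytes:
    """Recombine both shares (e.g., provider plus escrow agent) to recover the key."""
    return bytes(a ^ b for a, b in zip(share_a, share_b))


if __name__ == "__main__":
    content_key = secrets.token_bytes(32)                  # hypothetical 256-bit content key
    escrow_share, provider_share = split_key(content_key)
    assert recover_key(escrow_share, provider_share) == content_key
```

Real proposals are far more elaborate than this (threshold shares, hardware protections, audit trails), which is precisely why their design belongs with technical experts rather than a policy commission.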

If Congress proceeds with the Act, it should put an explicit prohibition in the law preventing the new commission from recommending rules that would interfere with the design of complex technology, such as by recommending that encryption be weakened to provide access to law enforcement, mandating particular network architectures, or modifying the technical details of data storage.

Congress is right to consider whether there is better policy to be had for aligning the incentives of the platforms with the deterrence of CSAM – including possible conditional access to Section 230’s liability shield. But just because there is a policy balance to be struck between policing CSAM and platform liability protection doesn’t mean that the new commission is suited to vetting, adopting, and updating technical standards – it clearly isn’t. Conversely, to the extent that encryption and similarly complex technologies could be subject to broad policy change, that change should come through an explicit and considered democratic process, and not as a by-product of the Act.

[Note: A group of 50 academics and 27 organizations, including both myself and ICLE, recently released a statement of principles for lawmakers to consider in discussions of Section 230.]

In a remarkable ruling issued earlier this month, the Third Circuit Court of Appeals held in Oberdorf v. Amazon that, under Pennsylvania products liability law, Amazon could be found liable for a third-party vendor’s sale of a defective product via Amazon Marketplace. This ruling comes in the context of Section 230 of the Communications Decency Act, which is broadly understood as immunizing platforms against liability for harmful conduct posted to their platforms by third parties. (Section 230 purists may object to my use of “platform” as an approximation for the statute’s term “interactive computer services”; I address this concern by acknowledging it with this parenthetical.) This immunity has long been a bedrock principle of Internet law; it has also long been controversial; and those controversies are very much at the fore of discussion today.

The response to the opinion has been mixed, to say the least. Eric Goldman, for instance, has asked “are we at the end of online marketplaces?,” suggesting that they “might in the future look like a quaint artifact of the early 21st century.” Kate Klonick, on the other hand, calls the opinion “a brilliant way of both holding tech responsible for harms they perpetuate & making sure we preserve free speech online.”

My own inclination is that both Eric and Kate overstate their respective positions – though neither without reason. The facts of Oberdorf cabin the effects of the holding both to Pennsylvania law and to situations where the platform cannot identify the seller. This suggests that the effects will be relatively limited. 

But, as I explore in this post, the opinion does elucidate a particular and problematic feature of Section 230: that it can be used as a liability shield for harmful conduct. The judges in Oberdorf seem ill-inclined to extend Section 230’s protections to a platform that can easily be used by bad actors as a liability shield. Riffing on this concern, I argue below that Section 230 immunity should be proportional to platforms’ ability to reasonably identify speakers using their platforms to engage in harmful speech or conduct.

This idea is developed in more detail in the last section of this post – including responses to the obvious (and overwrought) objections to it. But first, the post offers some background on Section 230, the Oberdorf and related cases, the Third Circuit’s analysis in Oberdorf, and the recent debates about Section 230.

Section 230

“Section 230” refers to a portion of the Communications Decency Act that was added to the Communications Act by the 1996 Telecommunications Act, codified at 47 U.S.C. 230. (NB: that’s a sentence that only a communications lawyer could love!) It is widely recognized as – and discussed even by those who disagree with this view as – having been critical to the growth of the modern Internet. As Jeff Kosseff labels it in his recent book, the key provision of section 230 comprises the “26 words that created the Internet.” That section, 230(c)(1), states that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” (For those not familiar with it, Kosseff’s book is worth a read – or for the Cliff’s Notes version see here, here, here, here, here, or here.)

Section 230 was enacted to do two things. First, section (c)(1) makes clear that platforms are not liable for user-generated content. In other words, if a user of Facebook, Amazon, the comments section of a Washington Post article, a restaurant review site, a blog that focuses on the knitting of cat-themed sweaters, or any other “interactive computer service,” posts something for which that user may face legal liability, the platform hosting that user’s speech does not face liability for that speech. 

And second, section (c)(2) makes clear that platforms are free to moderate content uploaded by their users, and that they face no liability for doing so. This section was added precisely to repudiate a case that had held that once a platform (in that case, Prodigy) decided to moderate user-generated content, it undertook an obligation to do so. That case meant that platforms faced a Hobson’s choice: either don’t moderate content and don’t risk liability, or moderate all content and face liability for failure to do so well. There was no middle ground: a platform couldn’t say, for instance, “this one post is particularly problematic, so we are going to take it down – but this doesn’t mean that we are going to pervasively moderate content.”

Together, these two provisions stand generally for the proposition that online platforms are not liable for content created by their users, but they are free to moderate that content without facing liability for doing so. It recognized, on the one hand, that it was impractical (i.e., the Internet economy could not function) to require that platforms moderate all user-generated content, so section (c)(1) says that they don’t need to; but, on the other hand, it recognizes that it is desirable for platforms to moderate problematic content to the best of their ability, so section (c)(2) says that they won’t be punished (i.e., lose the immunity granted by section (c)(1)) if they voluntarily elect to moderate content.

Section 230 is written in broad – and has been interpreted by the courts in even broader – terms. Section (c)(1) says that platforms cannot be held liable for the content generated by their users, full stop. The only exceptions are for copyrighted content and content that violates federal criminal law. There is no “unless it is really bad” exception, or a “the platform may be liable if the user-generated content causes significant tangible harm” exception, or an “unless the platform knows about it” exception, or even an “unless the platform makes money off of and actively facilitates harmful content” exception. So long as the content is generated by the user (not by the platform itself), Section 230 shields the platform from liability. 

Oberdorf v. Amazon

This background leads us to the Third Circuit’s opinion in Oberdorf v. Amazon. The opinion is remarkable because it is one of only a few cases in which a court has, despite Section 230, found a platform liable for the conduct of a third party facilitated through the use of that platform. 

Prior to the Third Circuit’s recent opinion, the best-known case was the 9th Circuit’s Model Mayhem opinion. In that case, the court found that Model Mayhem, a website that helps match models with modeling jobs, had a duty to warn models about individuals who were known to be using the website to find women to sexually assault.

It is worth spending another moment on the Model Mayhem opinion before returning to the Third Circuit’s Oberdorf opinion. The crux of the 9th Circuit’s opinion in the Model Mayhem case was that the state of Florida (where the assaults occurred) has a duty-to-warn law, which creates a duty between the platform and the user. This duty to warn was triggered by the case-specific fact that the platform had actual knowledge that two of its users were predatorily using the site to find women to assault. Once triggered, this duty to warn exists between the platform and the user. Because the platform faces liability directly for its failure to warn, it is not shielded by section 230 (which only shields the platform from liability for the conduct of the third parties using the platform to engage in harmful conduct). 

In its opinion, the Third Circuit offered a similar analysis – but in a much broader context. 

The Oberdorf case involves a defective dog leash sold to Ms. Oberdorf by a seller doing business as The Furry Gang on Amazon Marketplace. The leash malfunctioned, hitting Ms. Oberdorf in the face and causing permanent blindness in one eye. When she attempted to sue The Furry Gang, she discovered that they were no longer doing business on Amazon Marketplace – and that Amazon did not have sufficient information about their identity for Ms. Oberdorf to bring suit against them.

Undeterred, Ms. Oberdorf sued Amazon under Pennsylvania product liability law, arguing that Amazon was the seller of the defective leash, so was liable for her injuries. Part of Amazon’s defense was that the actual seller, The Furry Gang, was a user of their Marketplace platform – the sale resulted from the storefront generated by The Furry Gang and merely hosted by Amazon Marketplace. Under this theory, Section 230 would bar Amazon from liability for the sale that resulted from the seller’s user-generated storefront. 

The Third Circuit judges would have none of that argument. All three judges agreed that, under Pennsylvania law, the products liability relationship existed between Ms. Oberdorf and Amazon, so Section 230 did not apply. The two-judge majority found Amazon liable to Ms. Oberdorf under this law – the dissenting judge would have found Amazon’s conduct an insufficient basis for liability.

This opinion, in other words, follows in the footsteps of the Ninth Circuit’s Model Mayhem opinion in holding that state law creates a duty directly between the harmed user and the platform, and that that duty isn’t affected by Section 230. But Oberdorf is potentially much broader in impact than Model Mayhem. Product liability laws like Pennsylvania’s are both more common among the states and broader in scope than duty-to-warn laws. Even more impactful, product liability laws are generally strict liability laws, whereas duty-to-warn laws are generally triggered by an actual-knowledge requirement.

The Third Circuit’s Focus on Agency and Liability Shields

The understanding of Oberdorf described above is that it is the latest in a developing line of cases holding that claims based on state law duties that require platforms to protect users from third party harms can survive Section 230 defenses. 

But there is another, critical, issue in the background of the case that appears to have affected the court’s thinking – and that, I argue, should be a path forward for Section 230. The judges writing for the Third Circuit majority draw attention to

the extensive record evidence that Amazon fails to vet third-party vendors for amenability to legal process. The first factor [of analysis for application of the state’s products liability law] weighs in favor of strict liability not because The Furry Gang cannot be located and/or may be insolvent, but rather because Amazon enables third-party vendors such as The Furry Gang to structure and/or conceal themselves from liability altogether.

This is important for analysis under the Pennsylvania product liability law, which has a marketing chain provision that allows injured consumers to seek redress up the marketing chain if the direct seller of a defective product is insolvent or otherwise unavailable for suit. But the court’s language focuses on Amazon’s design of Marketplace and the ease with which Marketplace can be used by merchants as a liability shield. 

This focus is unsurprising: the law generally does not allow one party to shield another from liability without assuming liability for the shielded party’s conduct. Indeed, this is pretty basic vicarious liability, agency, first-year law school kind of stuff. It is unsurprising that judges would balk at an argument that Amazon could design its platform in a way that makes it impossible for harmed parties to sue a tortfeasor without Amazon in turn assuming liability for any potentially tortious conduct. 

Section 230 is having a bad day

As most who have read this far are almost certainly aware, Section 230 is a big, controversial, political mess right now. Politicians from Josh Hawley to Nancy Pelosi have suggested curtailing Section 230. President Trump just held his “Social Media Summit.” And countries around the world are imposing near-impossible obligations on platforms to remove or otherwise moderate potentially problematic content – obligations that are anathema to Section 230, even as they increasingly reflect and influence discussions in the United States.

To be clear, almost all of the ideas floating around about how to change Section 230 are bad. That is an understatement: they are potentially devastating to the Internet – both to the economic ecosystem and the social ecosystem that have developed and thrived largely because of Section 230.

To be clear, there is also a lot of really, disgustingly, problematic content online – and social media platforms, in particular, have facilitated a great deal of legitimately problematic conduct. But deputizing them to police that conduct and to make real-time decisions about speech that is impossible to evaluate in real time is not a solution to these problems. And to the extent that some platforms may be able to do these things, turning the novel capabilities of a few platforms into obligations for all would only serve to create entry barriers for smaller platforms and to stifle innovation.

This is why a group of 50 academics and 27 organizations released a statement of principles last week to inform lawmakers about key considerations to take into account when discussing how Section 230 may be changed. The purpose of these principles is to acknowledge that some change to Section 230 may be appropriate – may even be needed at this juncture – but that such changes should be modest and carefully considered, so as not to disrupt the vast benefits for society that Section 230 has made possible and remains necessary to sustain.

The Third Circuit offers a Third Way on 230 

The Third Circuit’s opinion offers a modest way that Section 230 could be changed – and, I would say, improved – to address some of the real harms that it enables without undermining the important purposes that it serves. To wit, Section 230’s immunity could be attenuated by an obligation to facilitate the identification of users on that platform, subject to legal process, in proportion to the size and resources available to the platform, the technological feasibility of such identification, the foreseeability of the platform being used to facilitate harmful speech or conduct, and the expected importance (as defined from a First Amendment perspective) of speech on that platform.

In other words, if there are readily available ways to establish some form of identity for users – for instance, by email addresses on widely used platforms, social media accounts, or logs of IP addresses – and there is reason to expect that users of the platform could be subject to suit – for instance, because they’re engaged in commercial activities or because the purpose of the platform is to provide a forum for speech that is likely to be legally actionable – then the platform needs to be able to provide reasonable information about speakers subject to legal action in order to avail itself of any Section 230 defense. Stated otherwise, platforms need to be able to reasonably comply with so-called unmasking subpoenas issued in the civil context, to the extent such compliance is feasible given the platform’s size, sophistication, resources, &c.

An obligation such as this would have been at best meaningless and at worst devastating at the time Section 230 was adopted. But 25 years later, the Internet is a very different place. Most users have online accounts – email addresses, social media profiles, &c – that can serve as some form of online identification.

More important, we now have evidence of a growing range of harmful conduct and speech that can occur online, and of platforms that use Section 230 as a shield to protect those engaging in such speech or conduct from litigation. Such speakers are bad actors who are clearly abusing Section 230 to facilitate bad conduct. They should not be able to do so.

Many of the traditional proponents of Section 230 will argue that this idea is a non-starter. Two of the obvious objections are that it would place a disastrous burden on platforms, especially start-ups and smaller platforms, and that it would stifle socially valuable anonymous speech. Both are valid concerns, but both are accommodated by this proposal.

The concern that modest user-identification requirements would be disastrous to platforms made a great deal of sense in the early years of the Internet, when both the law and the technology around user identification were less developed. Today, there is a wide range of low-cost, off-the-shelf techniques to establish a user’s identity to some level of precision – from logging IP addresses, to requiring a valid email address with an established provider, to registration with an established social media identity, or even SMS authentication. None of these is perfect; they present a range of implementation costs and sophistication, and they offer a range of precision in identification.
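To illustrate just how lightweight these techniques can be, here is a minimal sketch – my own illustration, with hypothetical names, assuming a generic web platform – of what recording such identification signals might look like: a per-user IP log plus an email verified with a confirmation token.

```python
import hashlib
import secrets
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class IdentitySignals:
    """Minimal per-user record of the low-cost signals discussed above:
    an IP history, a verified email, and an optional SMS flag."""
    username: str
    ip_log: list[tuple[datetime, str]] = field(default_factory=list)
    email: str | None = None
    email_verified: bool = False
    sms_verified: bool = False

    def record_ip(self, ip: str) -> None:
        """Append a timestamped IP observation (e.g., at each login)."""
        self.ip_log.append((datetime.now(timezone.utc), ip))


def issue_email_token() -> tuple[str, str]:
    """Return a random token to email to the user and the hash to store."""
    token = secrets.token_urlsafe(16)
    return token, hashlib.sha256(token.encode()).hexdigest()


def confirm_email_token(token: str, stored_hash: str) -> bool:
    """True if the user returned the token we sent, i.e., the address is theirs."""
    return secrets.compare_digest(
        hashlib.sha256(token.encode()).hexdigest(), stored_hash
    )
```

Nothing here is exotic: it is the kind of record-keeping most platforms of any scale already perform for their own operational and security purposes.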

The proposal offered here is not that platforms be required to positively identify every speaker – it is better described as requiring that they not deliberately act as liability shields. Its requirement is that platforms implement reasonable identity technology in proportion to their size, sophistication, and the likelihood of harmful speech on their platforms. A small platform for exchanging bread recipes would be fine to maintain a log of usernames and IP addresses. A large, well-resourced platform hosting commercial activity (such as Amazon Marketplace) may be expected to establish a verified identity for the merchants it hosts. A forum known for hosting hate speech would be expected to keep better identification records – it is entirely foreseeable that its users would be subject to legal action. A forum of support groups for marginalized and disadvantaged communities would face a lower obligation than a forum of similar size and sophistication known for hosting legally actionable speech.
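A rough sketch of how such a proportionality rule might be expressed – again purely my own illustration, with made-up tiers and thresholds rather than anything drawn from existing law or any pending proposal:

```python
from enum import Enum


class IdTier(Enum):
    IP_LOG = 1             # usernames plus IP logs (small hobby forum)
    VERIFIED_EMAIL = 2     # confirmed email or social-media login
    VERIFIED_IDENTITY = 3  # documented legal identity (commercial sellers)


def required_tier(monthly_users: int, hosts_commerce: bool,
                  foreseeable_legal_risk: bool) -> IdTier:
    """Toy proportionality rule: the identification obligation scales with
    size, commercial activity, and the foreseeability of actionable conduct."""
    if hosts_commerce:
        return IdTier.VERIFIED_IDENTITY
    if foreseeable_legal_risk or monthly_users > 1_000_000:
        return IdTier.VERIFIED_EMAIL
    return IdTier.IP_LOG


# e.g., the bread-recipe forum lands at IP_LOG, while a large marketplace
# hosting third-party sellers lands at VERIFIED_IDENTITY.
```

The point of the sketch is only that the obligation is graduated and predictable, not that any particular threshold is the right one; calibrating the tiers is exactly the kind of question courts already handle in reasonableness inquiries.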

This proportionality approach also addresses the anonymous-speech concern. Anonymous speech is often of great social and political value. But anonymity can also be used for speech that is socially and politically destructive and, as contemporary online discussion makes amply clear, it can bring out the worst in that speech. Tying Section 230’s immunity to the nature of speech on a platform gives platforms an incentive to moderate speech – to make sure that anonymous speech is used for its good purposes while working to prevent its use for its lesser purposes. This is in line with one of the defining goals of Section 230.

The challenge, of course, has been how to do this without exposing platforms to potentially crippling liability if they fail to effectively moderate speech. This is why Section 230 took the approach that it did, allowing but not requiring moderation. This proposal’s user-identification requirement shifts that balance from “allowing but not requiring” to “encouraging but not requiring.” Platforms are under no legal obligation to moderate speech, but if they elect not to, they need to make reasonable efforts to ensure that users engaging in problematic speech can be identified by the parties harmed by that speech or conduct. In an era in which sites like 8chan expressly decline to maintain user logs in order to shield those engaged in known harmful speech, and Amazon Marketplace admits sellers who cannot be sued by injured consumers, this is a common-sense change to the law.

It would also likely have substantially the same effect as other proposals for Section 230 reform, but without the significant challenges those suggestions face. For instance, Danielle Citron & Ben Wittes have proposed that courts should give substantive meaning to Section 230’s “Good Samaritan” language in section (c)(2)’s subheading, or, in the alternative, that section (c)(1)’s immunity require that platforms “take[] reasonable steps to prevent unlawful uses of its services.” This approach is problematic on both First Amendment and process grounds, because it requires courts to evaluate the substantive content and speech decisions that platforms engage in. It effectively requires platforms to undertake the work of the courts in developing a (potentially platform-specific) law of content moderation – and threatens them with a loss of Section 230 immunity if they fail to do so effectively.

By contrast, this proposal would allow, and even encourage, platforms to engage in such moderation, but offers them a gentler, more binary, and procedurally-focused safety valve to maintain their Section 230 immunity. If a user engages in harmful speech or conduct and the platform can assist plaintiffs and courts in bringing legal action against the user in the courts, then the “moderation” process occurs in the courts through ordinary civil litigation. 

To be sure, there are still some uncomfortable and difficult substantive questions – has a platform implemented reasonable identification technologies, is the speech on the platform of the sort that would be viewed as requiring (or otherwise justifying protection of the speaker’s) anonymity, and the like. But these are questions of a type that courts are accustomed to, if somewhat uncomfortable with, addressing. They are, for instance, the sort of issues that courts address in the context of civil unmasking subpoenas.

This distinction is demonstrated in the comparison between Sections 230 and 512. Section 512 is an exception to 230 for copyrighted materials that was put into place by the 1998 Digital Millennium Copyright Act. It takes copyrighted materials outside of the scope of Section 230 and requires platforms to put in place a “notice and takedown” regime in order to be immunized for hosting copyrighted content uploaded by users. This regime has proved controversial, among other reasons, because it effectively requires platforms to act as courts in deciding whether a given piece of content is subject to a valid copyright claim. The Citron/Wittes proposal effectively subjects platforms to a similar requirement in order to maintain Section 230 immunity; the identity-technology proposal, on the other hand, offers an intermediate requirement.
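For contrast, the sketch below caricatures the Section 512 notice-and-takedown loop – my own simplification, with hypothetical names; the actual statute adds sworn-statement requirements, deadlines, and counter-notice waiting periods. What it illustrates is the structural point above: each decision to remove or restore content is made by the platform itself rather than by a court.

```python
from dataclasses import dataclass


@dataclass
class TakedownNotice:
    content_id: str
    claimant: str
    good_faith_claim: bool  # the notice must assert a good-faith copyright claim


class Host:
    """Simplified host: it, not a court, adjudicates each notice
    in order to keep its safe harbor."""

    def __init__(self) -> None:
        self.live: set[str] = set()
        self.removed: set[str] = set()

    def publish(self, content_id: str) -> None:
        self.live.add(content_id)

    def handle_notice(self, notice: TakedownNotice) -> None:
        """Remove the identified content and (in reality) notify the uploader."""
        if notice.good_faith_claim and notice.content_id in self.live:
            self.live.remove(notice.content_id)
            self.removed.add(notice.content_id)

    def handle_counter_notice(self, content_id: str, suit_filed: bool) -> None:
        """Restore the content after a counter-notice unless the claimant sues."""
        if content_id in self.removed and not suit_filed:
            self.removed.remove(content_id)
            self.live.add(content_id)
```

The identity-technology proposal, by contrast, never asks the platform to judge the merits of a claim; it only asks the platform to keep records good enough that a court can.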

Indeed, the principal effect of this intermediate requirement is to maintain the pre-platform status quo. IRL, if one person says or does something harmful to another person, their recourse is in court. This is true in public and in private; it’s true if the harmful speech occurs on the street, in a store, in a public building, or a private home. If Donny defames Peggy in Hank’s house, Peggy sues Donny in court; she doesn’t sue Hank, and she doesn’t sue Donny in the court of Hank. To the extent that we think of platforms as the fora where people interact online – as the “place” of the Internet – this proposal is intended to ensure that those engaging in harmful speech or conduct online can be hauled into court by the aggrieved parties, and to facilitate the continued development of platforms without disrupting the functioning of this system of adjudication.

Conclusion

Section 230 is, and has long been, the most important and one of the most controversial laws of the Internet. It is increasingly under attack today from a disparate range of voices across the political and geographic spectrum — voices that would overwhelmingly reject Section 230’s pro-innovation treatment of platforms and in its place attempt to co-opt those platforms as government-compelled (and, therefore, controlled) content moderators.

In light of these demands, academics and organizations that understand the importance of Section 230, but also recognize the increasing pressures to amend it, have recently released a statement of principles for legislators to consider as they think about changes to Section 230.

Into this fray, the Third Circuit’s opinion in Oberdorf offers a potential change: making Section 230’s immunity for platforms proportional to their ability to reasonably identify speakers that use the platform to engage in harmful speech or conduct. This would restore the status quo ante, under which intermediaries and agents cannot be used as litigation shields without themselves assuming responsibility for any harmful conduct. This shielding effect was not an intended goal of Section 230, and it has been the cause of Section 230’s worst abuses. It was tolerated when Section 230 was adopted because user-identity requirements such as those proposed here would not have been technologically reasonable at the time. But technology has changed, and today these requirements would impose only a moderate burden on platforms.

Neither side in the debate over Section 230 is blameless for the current state of affairs. Reform/repeal proponents have tended to offer ill-considered, irrelevant, or often simply incorrect justifications for amending or tossing Section 230. Meanwhile, many supporters of the law in its current form are reflexively resistant to any change and too quick to dismiss the more reasonable concerns that have been voiced.

Most of all, the urge to politicize this issue — on all sides — stands squarely in the way of any sensible discussion and thus of any sensible reform.
