Twitter v. Taamneh: Intermediary Liability, The First Amendment, and Section 230

After the oral arguments in Twitter v. Taamneh, Geoffrey Manne, Kristian Stout, and I spilled a lot of ink thinking through the law & economics of intermediary liability and how to draw lines when it comes to social-media companies’ responsibility to prevent online harms stemming from illegal conduct on their platforms. With the Supreme Court’s recent decision in Twitter v. Taamneh, it is worth revisiting that post to see what we got right, as well as what the opinion could mean for future First Amendment cases—particularly those concerning Texas and Florida’s common-carriage laws and other challenges to the bounds of Section 230 more generally.

What We Got Right: Necessary Limitations on Secondary Liability Mean the Case Against Twitter Must Be Dismissed

In our earlier post, which built on our previous work on the law & economics of intermediary liability, we argued that the law sometimes does and should allow enforcement against intermediaries when they are the least-cost avoider. This is especially true on social-media sites like Twitter, where information costs may be low enough to make effective monitoring and control of end users possible, and where pseudonymity renders remedies against the end users themselves ineffective. We noted, however, that intermediary liability also carries costs. These manifest particularly in “collateral censorship,” which occurs when social-media companies remove user-generated content in order to avoid liability. Thus, a balance must be struck:

From an economic perspective, liability should be imposed on the party or parties best positioned to deter the harms in question, so long as the social costs incurred by, and as a result of, enforcement do not exceed the social gains realized. In other words, there is a delicate balance that must be struck to determine when intermediary liability makes sense in a given case. On the one hand, we want illicit content to be deterred, and on the other, we want to preserve the open nature of the Internet. The costs generated from the over-deterrence of legal, beneficial speech are why intermediary liability for user-generated content can’t be applied on a strict-liability basis, and why some bad content will always exist in the system.

In particular, we noted the need for limiting principles to intermediary liability. As we put it in our Fleites amicus:

In theory, any sufficiently large firm with a role in the commerce at issue could be deemed liable if all that is required is that its services “allow[]” the alleged principal actors to continue to do business. FedEx, for example, would be liable for continuing to deliver packages to MindGeek’s address. The local waste management company would be liable for continuing to service the building in which MindGeek’s offices are located. And every online search provider and Internet service provider would be liable for continuing to provide service to anyone searching for or viewing legal content on MindGeek’s sites.

The Court struck very similar notes in its Taamneh opinion regarding the need to limit what it calls “secondary liability” under the aiding-and-abetting statute. It noted that a person may be responsible at common law for a crime or tort if he helps another complete its commission, but that such liability has never been “boundless.” If it were otherwise, Justice Clarence Thomas wrote for a unanimous Court, “aiding-and-abetting liability could sweep in innocent bystanders as well as those who gave only tangential assistance.” Offering the example of a robbery, Thomas argued that if “any assistance of any kind were sufficient to create liability… then anyone who passively watched a robbery could be said to commit aiding and abetting by failing to call the police.”

Here, the Court found important the common law’s distinction between acts of commission and omission:

[O]ur legal system generally does not impose liability for mere omissions, inactions, or nonfeasance; although inaction can be culpable in the face of some independent duty to act, the law does not impose a generalized duty to rescue… both criminal and tort law typically sanction only “wrongful conduct,” bad acts, and misfeasance… Some level of blameworthiness is therefore ordinarily required.

If liability could be imposed for omissions in the absence of an independent duty to act, there would be no limiting principle to prevent liability from extending far beyond what anyone (except for the cop in the final episode of Seinfeld) would believe reasonable:

[I]f aiding-and-abetting liability were taken too far, then ordinary merchants could become liable for any misuse of their goods and services, no matter how attenuated their relationship with the wrongdoer. And those who merely deliver mail or transmit emails could be liable for the tortious messages contained therein. For these reasons, courts have long recognized the need to cabin aiding-and-abetting liability to cases of truly culpable conduct.

Applying this to Twitter, the Court first outlined the theories of how Twitter “helped” ISIS:

First, ISIS was active on defendants’ social-media platforms, which are generally available to the internet-using public with little to no front-end screening by defendants. In other words, ISIS was able to upload content to the platforms and connect with third parties, just like everyone else. Second, defendants’ recommendation algorithms matched ISIS-related content to users most likely to be interested in that content—again, just like any other content. And, third, defendants allegedly knew that ISIS was uploading this content to such effect, but took insufficient steps to ensure that ISIS supporters and ISIS-related content were removed from their platforms. Notably, plaintiffs never allege that ISIS used defendants’ platforms to plan or coordinate the Reina attack; in fact, they do not allege that Masharipov himself ever used Facebook, YouTube, or Twitter.

The Court rejected each of these allegations as insufficient to establish Twitter’s liability in the absence of an independent duty to act, pointing back to the distinction between an act that affirmatively helped to cause harm and an omission:

[T]he only affirmative “conduct” defendants allegedly undertook was creating their platforms and setting up their algorithms to display content relevant to user inputs and user history. Plaintiffs never allege that, after defendants established their platforms, they gave ISIS any special treatment or words of encouragement. Nor is there reason to think that defendants selected or took any action at all with respect to ISIS’ content (except, perhaps, blocking some of it).

In our earlier post on Taamneh, we argued that the plaintiffs’ “theory of liability would contain no viable limiting principle” and asked “what in principle would separate a search engine from Twitter, if the search engine linked to an alleged terrorist’s account?” The Court made a similar argument, positing that, while “bad actors like ISIS are able to use platforms like defendants’ for illegal—and sometimes terrible—ends,” the same “could be said of cell phones, email, or the internet generally.” Despite this, “internet or cell service providers [can’t] incur culpability merely for providing their services to the public writ large. Nor do we think that such providers would normally be described as aiding and abetting, for example, illegal drug deals brokered over cell phones—even if the provider’s conference-call or video-call features made the sale easier.”

The Court concluded:

At bottom, then, the claim here rests less on affirmative misconduct and more on an alleged failure to stop ISIS from using these platforms. But, as noted above, both tort and criminal law have long been leery of imposing aiding-and-abetting liability for mere passive nonfeasance.

In sum, because no independent duty to act could be found in statute, Twitter could not be held liable under these allegations.

The First Amendment and Common Carriage

It’s notable that the opinion was written by Justice Thomas, who previously invited states to create common-carriage laws that he believed would be consistent with the First Amendment. In his concurrence to the Court’s dismissal (as moot) of the petition for certiorari in Biden v. Knight First Amendment Institute, Thomas wrote of the market power allegedly held by social-media companies like Twitter, Facebook, and YouTube that:

If part of the problem is private, concentrated control over online content and platforms available to the public, then part of the solution may be found in doctrines that limit the right of a private company to exclude. Historically, at least two legal doctrines limited a company’s right to exclude.

He proceeded to outline how common-carriage and public-accommodation laws can be used to limit companies from excluding users, suggesting that they would be subject to a lower standard of First Amendment scrutiny under Turner and its progeny.

Among the reasons for imposing common-carriage requirements on social-media companies, Justice Thomas found it important that they act like conduits that carry the speech of others:

Though digital instead of physical, they are at bottom communications networks, and they “carry” information from one user to another. A traditional telephone company laid physical wires to create a network connecting people. Digital platforms lay information infrastructure that can be controlled in much the same way. And unlike newspapers, digital platforms hold themselves out as organizations that focus on distributing the speech of the broader public. Federal law dictates that companies cannot “be treated as the publisher or speaker” of information that they merely distribute. 110 Stat. 137, 47 U. S. C. §230(c).

Thomas also noted that common carriers have sometimes received special benefits in exchange for their universal-service obligations:

In exchange for regulating transportation and communication industries, governments—both State and Federal— have sometimes given common carriers special government favors. For example, governments have tied restrictions on a carrier’s ability to reject clients to “immunity from certain types of suits” or to regulations that make it more difficult for other companies to compete with the carrier (such as franchise licenses). (internal citations omitted)

While Taamneh is not a First Amendment case, some of the language in Thomas’ opinion suggests that social-media companies are the types of businesses that could receive conduit-style immunity from liability for third-party conduct in exchange for common-carriage requirements.

As noted above, in holding that Twitter did not aid and abet the attack, the Court found it important that “there is not even reason to think that defendants carefully screened any content before allowing users to upload it onto their platforms. If anything, the opposite is true: By plaintiffs’ own allegations, these platforms appear to transmit most content without inspecting it.” The Court then compared social-media platforms to “cell phones, email, or the internet generally,” which are classic examples of conduits. Telephone service, in particular, was provided by common carriers that largely received immunity from liability for their users’ conduct.

Thus, while Taamneh wouldn’t be directly binding in the First Amendment context, this language will likely be cited in the briefs by those supporting the Texas and Florida common-carriage laws when the Supreme Court reviews them.

Section 230 and Neutral Tools

On the other hand—and despite the views Thomas expressed about Section 230 immunity in his Malwarebytes statement—there is much in the Court’s reasoning in Taamneh that would lead one to believe the justices see algorithmic recommendations as neutral tools that would not, in and of themselves, preclude a finding of immunity for online platforms.

While the Court in Gonzalez v. Google basically said it didn’t need to reach the Section 230 question because the allegations failed to state a claim under Taamneh’s reasoning, it appears highly likely that a majority would have found the platforms immune under Section 230 despite their use of algorithmic recommendations. For instance, in Taamneh, the Court disagreed with the assertion that recommendation algorithms amounted to substantial assistance, reasoning that:

By plaintiffs’ own telling, their claim is based on defendants’ “provision of the infrastructure which provides material support to ISIS.” Viewed properly, defendants’ “recommendation” algorithms are merely part of that infrastructure. All the content on their platforms is filtered through these algorithms, which allegedly sort the content by information and inputs provided by users and found in the content itself. As presented here, the algorithms appear agnostic as to the nature of the content, matching any content (including ISIS’ content) with any user who is more likely to view that content. The fact that these algorithms matched some ISIS content with some users thus does not convert defendants’ passive assistance into active abetting. Once the platform and sorting-tool algorithms were up and running, defendants at most allegedly stood back and watched; they are not alleged to have taken any further action with respect to ISIS.

At the same time, the Court thought it important to its finding that there were no allegations establishing a nexus between Twitter’s provision of a communications platform and the terrorist activity, whether through the unusual provision of services or the conscious and selective promotion of content:

To be sure, we cannot rule out the possibility that some set of allegations involving aid to a known terrorist group would justify holding a secondary defendant liable for all of the group’s actions or perhaps some definable subset of terrorist acts. There may be, for example, situations where the provider of routine services does so in an unusual way or provides such dangerous wares that selling those goods to a terrorist group could constitute aiding and abetting a foreseeable terror attack. Cf. Direct Sales Co. v. United States, 319 U. S. 703, 707, 711–712, 714–715 (1943) (registered morphine distributor could be liable as a coconspirator of an illicit operation to which it mailed morphine far in excess of normal amounts). Or, if a platform consciously and selectively chose to promote content provided by a particular terrorist group, perhaps it could be said to have culpably assisted the terrorist group. Cf. Passaic Daily News v. Blair, 63 N. J. 474, 487–488, 308 A. 2d 649, 656 (1973) (publishing employment advertisements that discriminate on the basis of sex could aid and abet the discrimination).

In other words, this language could suggest that, as long as the algorithms are essentially “neutral tools” (to use the language of Roommates.com and its progeny), social-media platforms are immune for third-party speech that they incidentally promote. But if they design their algorithmic recommendations in such a way that the platforms “consciously and selectively” promote illegal content, then they could lose immunity.

Unless other justices share Thomas’ appetite to limit Section 230 immunity substantially in a future case, this language from Taamneh would likely be used to expand the law’s protections to algorithmic recommendations under a Roommates.com/”neutral tools” analysis.

Conclusion

While the Court did not end up issuing the huge Section 230 decision that some expected, the Taamneh decision will be a big deal going forward for the interconnected issues of online intermediary liability, the First Amendment, and Section 230. Language from Justice Thomas’ opinion will likely be cited in the litigation over the Texas and Florida common-carriage laws, as well as in future Section 230 cases.