Between a TikTok and a Hard Place: Products Liability, Section 230, and the First Amendment

With the 3rd U.S. Circuit Court of Appeals’ recent decision in Anderson v. TikTok, it’s time to revisit the interplay between the First Amendment’s right to editorial discretion, Section 230 immunity, and children’s online safety in the context of algorithms.

As has been noted many times, the use of algorithmic recommendations is ubiquitous online. And the potential harms to children from bad recommendations are significant, as the underlying facts of the TikTok case show. But there are also countervailing speech and consumer-welfare concerns that arise if algorithmic recommendations are not protected under the law.

Ironically, insofar as the 3rd Circuit is right about algorithmic recommendations constituting first-party speech (and thus not receiving immunity under Section 230), this could mean that online platforms that use algorithmic recommendations have a strong First Amendment defense against products-liability claims.

Anderson v. TikTok

The facts alleged in the TikTok case are straightforward and horrifying. A 10-year-old girl died of self-asphyxiation after attempting the so-called “Blackout Challenge,” a viral “choking game” featured in a video served to her by TikTok’s For You Page (FYP). The question for the court was whether products-liability claims (both strict liability and negligence) against TikTok could go forward in light of Section 230 immunity. In particular, it was alleged that TikTok knew of the challenge, that it allowed users to post videos of themselves participating in it, and that its algorithm served videos of the challenge to minors’ FYPs.

The district court initially dismissed the complaint on grounds that TikTok enjoyed Section 230 immunity as an interactive computer service. But the appeals panel reversed in relevant part, remanding the case to proceed on the products-liability counts. The 3rd Circuit’s short opinion essentially argued that, after the U.S. Supreme Court’s ruling in the NetChoice cases, editorial decisions about what is promoted through a social-media platform’s algorithms are now to be considered that platform’s first-party speech:

The Court held that a platform’s algorithm that reflects “editorial judgments” about “compiling the third-party speech it wants in the way it wants” is the platform’s own “expressive product” and is therefore protected by the First Amendment…

Therefore, the reasoning goes, for the purposes of Section 230, such algorithmic recommendations must be social-media platforms’ expressive products, meaning they receive no immunity:

Given the Supreme Court’s observations that platforms engage in protected first-party speech under the First Amendment when they curate compilations of others’ content via their expressive algorithms […] it follows that doing so amounts to first-party speech under § 230, too.

Thus, the court concluded:

Here, as alleged, TikTok’s FYP algorithm “[d]ecid[es] on the third-party speech that will be included in or excluded from a compilation—and then organiz[es] and present[s] the included items” on users’ FYPs. NetChoice, 144 S. Ct. at 2402. Accordingly, TikTok’s algorithm, which recommended the Blackout Challenge to Nylah on her FYP, was TikTok’s own “expressive activity,” id., and thus its first-party speech. Such first-party speech is the basis for Anderson’s claims. See App. 39 (Compl. ¶ 107(k), (o)) (alleging, among other things, that TikTok’s FYP algorithm was defectively designed because it “recommended” and “promoted” the Blackout Challenge). Section 230 immunizes only information “provided by another[,]” 47 U.S.C. § 230(c)(1), and here, because the information that forms the basis of Anderson’s lawsuit—i.e., TikTok’s recommendations via its FYP algorithm—is TikTok’s own expressive activity, § 230 does not bar Anderson’s claims.

It is worth noting that the phrase “first-party speech” is used nowhere in the NetChoice majority opinion or the concurrences. This is important, because the majority opinion’s discussion of the right to editorial discretion is about the compilation and curation of third-party speech. It’s true that the majority describes the result as an “expressive product” when it comes to the First Amendment protection of social-media feeds, but nothing suggests that this makes it the social-media platform’s own speech for Section 230 purposes. 

In fact, the 3rd Circuit ignored that curating is itself part of the definition of an interactive computer service. Section 230(f)(2) defines an interactive computer service to mean “any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server.” (emphasis added). Section 230(f)(4) then defines “access software provider” to mean:

a provider of software . . . or enabling tools that do any one or more of the following: 

(A) filter, screen, allow, or disallow content; 

(B) pick, choose, analyze, or digest content; or 

(C) transmit, receive, display, forward, cache, search, subset, organize, reorganize, or translate content.

Under Section 230, social-media platforms act as access software providers, and therefore as “interactive computer services,” when they filter, screen, pick, choose, organize, or reorganize content. It would make no sense to hold social-media platforms (or other online platforms that use algorithmic recommendations) liable for offering recommendations, when that is precisely what makes them interactive computer services in the first place.

The whole history of Section 230 supports this point. As the story goes, the purpose of the provision was to protect early online chatrooms’ ability to take down some content without being held liable for whatever is left up. In other words, Section 230 was intended to further support the right to editorial discretion without making online platforms that host speech into publishers like newspapers for the purposes of civil lawsuits.

In sum, Section 230 expressly protects the same right to editorial discretion that the First Amendment does.

Products Liability for Algorithmic Recommendations and the First Amendment

However flawed the 3rd Circuit’s opinion may be from the perspective of the text and history of Section 230 (for more on that topic, see the brief joined by International Center for Law & Economics scholars Gus Hurwitz and Geoffrey Manne in Gonzalez v. Google), the opinion is more interesting for its endorsement of a strong version of the right to editorial discretion that would regard algorithmic recommendations as online platforms’ first-party speech. If that is the case, then a court would likely have to say the products-liability claims are about TikTok’s speech, not its conduct.

The result of this line of reasoning would likely imperil the underlying products-liability counts in this case. 

The Supreme Court held in Snyder v. Phelps that “[t]he Free Speech Clause of the First Amendment… can serve as a defense in state tort suits.” There, the Court circumscribed intentional-infliction-of-emotional-distress claims to matters of private concern. Similarly, in New York Times v. Sullivan and its progeny, the Court heightened the requirements for defamation claims brought by public officials and public figures. In each of these cases, the Court was motivated by First Amendment concerns to limit the reach of state tort law. It is likely that, even outside of Section 230 immunity, products-liability claims would be similarly limited when applied to algorithmic recommendations, if such recommendations constitute the platforms’ first-party speech.

As I’ve noted in a previous post, one of the seminal opinions on the interaction of products-liability claims and the First Amendment is the 9th U.S. Circuit Court of Appeals’ 1991 opinion in Winter v. G.P. Putnam’s Sons. There, an American publisher that “neither wrote nor edited the book” was sued by two plaintiffs who became critically ill after ingesting wild mushrooms in reliance on information in The Encyclopedia of Mushrooms. The 9th Circuit rejected both strict-liability and negligence-based claims against the publisher, in part for First Amendment reasons.

The court in Winter determined that “[t]he purposes served by products liability law also are focused on the tangible world and do not take into consideration the unique characteristics of ideas and expression.” While the costs of strict liability are tolerable when applied to tangible products, “[t]hey are much less disturbing than the prospect that we might be deprived of the latest ideas and theories.” The appeals court noted that the presentation of ideas in a book is quite different from products like aeronautical charts, compasses, or even computer software that fails to do what it is designed to do.

In a footnote, the court collected a number of cases declining to apply strict products liability to information in books or magazines. Interestingly, these included a case in which a reader of Hustler magazine died during an attempt at “autoerotic asphyxiation” after reading an article titled “Orgasm of Death” that described the practice.

Recent cases continue this trend. A federal court in Estate of B.H. v. Netflix rejected a strict-liability claim against Netflix arising from the suicide of a child who had watched the Netflix series 13 Reasons Why, in which teen suicide plays a prominent role. The court cited Winter for the proposition that “[t]here is no strict liability for books, movies, or other forms of media.”

The court rejected attempts by the plaintiffs to plead around this by alleging that Netflix should have placed safeguards preventing minors from viewing the series, stating that the “plaintiffs’ efforts to distance the claims from the content of the show do not persuade. Without the content, there would be no claim.” 

Similarly, in Rodgers v. Christie, the 3rd Circuit held that an “algorithm” for determining pretrial release was not a product subject to strict products liability, because “‘information, guidance, ideas, and recommendations’ are not ‘product[s]’ under the Third Restatement, both as a definitional matter and because extending strict liability to the distribution of ideas would raise serious First Amendment concerns.” 

Here, TikTok is in a position similar to that of the publisher in Winter or Netflix, in that its algorithm allegedly recommended and promoted the “Blackout Challenge” to users’ FYPs. The content is ugly, and it led to results just as tragic as those in the Hustler and Netflix cases. But as courts continue to find, there is no basis consistent with the First Amendment to subject the distribution of ideas, even by algorithmic recommendation, to strict liability.

Winter also explored the possibility of negligence liability, on the theory that the publisher owed its readers a duty of care to ensure the information it published was accurate. But the 9th Circuit rejected this theory as well, stating that “[g]uided by the First Amendment and the values embodied therein, we decline to extend liability under this theory to the ideas and expression contained in a book.” The court found the publisher had no inherent duty to investigate and assure the accuracy of its book’s contents, unless it had represented that it did so. Similarly, in Netflix, the court collected a number of cases for the proposition that content creators and publishers owe their consumers no duty regarding the safety of the ideas they express or disseminate.

Here, the same principles would likely apply to a negligence-based claim. TikTok probably has no independent duty to ensure that the videos recommended by its algorithm are safe to view. But if the plaintiff could show that TikTok represented that it would protect its users from harmful content, then it may have assumed such a duty. If so, a jury would have to determine whether TikTok negligently designed its algorithm and whether that negligence caused the harm in a reasonably foreseeable way.

Conclusion

There is much to criticize in the 3rd Circuit’s TikTok decision, from both a practical and a legal standpoint. But the case also opens relatively unexplored ground for considering how products-liability claims against online platforms can move forward. If the court is right that algorithmic recommendations of third-party speech are social-media platforms’ own “expressive products,” then it will be difficult for courts to allow products-liability suits premised on restricting that speech to go forward, absent a promise by TikTok to protect its users from harmful content.