
What Do Twitter’s Struggles with CSAM Mean for Section 230 Reform? 

Twitter has seen a lot of ups and downs since Elon Musk closed on his acquisition of the company in late October and almost immediately set about implementing his initiatives to “reform” the platform’s operations.

One of the stories that has gotten somewhat lost in the ensuing chaos is that, in the short time under Musk, Twitter has made significant inroads—on at least some margins—against the visibility of child sexual abuse material (CSAM) by removing major hashtags that were used to share it, creating a direct reporting option, and removing major purveyors. On the other hand, due to the large reductions in Twitter’s workforce—both voluntary and involuntary—there are now very few human reviewers left to deal with the issue.

Section 230 immunity currently protects online intermediaries from most civil suits for CSAM (a narrow carveout exists for sex-trafficking claims under Section 1595 of the Trafficking Victims Protection Act). While the federal government could bring criminal charges if it believes online intermediaries are violating federal CSAM laws, and certain narrow state criminal claims could be brought consistent with federal law, private litigants are largely left without the ability to seek redress on their own in the courts.

This, among other reasons, is why there has been a push to amend Section 230 immunity. Our proposal (co-authored with Geoffrey Manne) suggests that online intermediaries should be subject to a reasonable duty of care to remove illegal content. But this still requires thinking carefully about what a reasonable duty of care entails.

For instance, one of the big splash moves made by Twitter after Musk’s acquisition was to remove major CSAM distribution hashtags. While this did limit visibility of CSAM for a time, some experts say it doesn’t really solve the problem, as new hashtags will arise. So, would a reasonableness standard require the periodic removal of major hashtags? Perhaps it would. It appears to have been a relatively low-cost way to reduce access to such material, and could theoretically be incorporated into a larger program that uses automated discovery to find and remove future hashtags.
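To make the idea concrete, here is a minimal sketch of one way such automated discovery could work: start from a seed list of already-banned hashtags and surface, for human review, any new hashtags that frequently co-occur with them. This is purely illustrative, not a description of anything Twitter actually runs; the placeholder tag names, the min_cooccurrence threshold, and the absence of any real Twitter API are all assumptions of the sketch.

```python
from collections import Counter
from typing import Iterable

# Hypothetical seed list of hashtags already banned for CSAM distribution
# (placeholder values, not real tags).
BANNED_TAGS = {"#banned_tag_1", "#banned_tag_2"}


def candidate_tags(posts: Iterable[set[str]], min_cooccurrence: int = 25) -> list[str]:
    """Surface hashtags that frequently co-occur with already-banned tags.

    Each item in `posts` is the set of hashtags attached to one post.
    Tags that appear alongside a banned tag at least `min_cooccurrence`
    times are returned for human review; nothing is removed automatically.
    """
    counts: Counter[str] = Counter()
    for tags in posts:
        if tags & BANNED_TAGS:                 # the post uses a banned tag
            counts.update(tags - BANNED_TAGS)  # tally the other tags it uses
    return [tag for tag, n in counts.items() if n >= min_cooccurrence]
```

A reasonableness inquiry would then ask whether running something like this, paired with human review of the flagged tags, is a cost-justified measure for a platform of Twitter’s scale.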

Of course, no such program will be perfect, and it will be subject to something of a Whac-A-Mole dynamic. But the relevant question isn’t whether it’s a perfect solution; it’s whether it yields significant benefit relative to its cost, such that it should be regarded as a legally reasonable measure that platforms should broadly implement.

On the flip side, Twitter has lost so much of its workforce that it may no longer have enough staff to perform this critical review work. As long as Twitter allows adult nudity, and algorithms are unable to effectively distinguish lawful adult content from CSAM, human reviewers remain essential. A reasonableness standard might also require sufficient staff and funding dedicated to reviewing posts for CSAM.

But what does it mean for a platform to behave “reasonably”?

Platforms Should Behave ‘Reasonably’

Rethinking platforms’ safe harbor from liability in terms of a “reasonableness” standard offers a way to navigate the complexities of these tradeoffs more effectively, without resorting to the binary choice between blanket immunity and total liability that typically characterizes discussions of Section 230 reform.

It could be the case that, given the reality that machines can’t distinguish between “good” and “bad” nudity, it is patently unreasonable for an open platform to allow any nudity at all if it is run with the level of staffing that Musk seems to prefer for Twitter.

Consider the situation that MindGeek faced a couple of years ago. It was pressured by financial providers, including PayPal and Visa, to clean up the CSAM and nonconsensual pornography that appeared on its websites. In response, the company removed more than 80% of suspected illicit content and required greater authentication for posting.

Notwithstanding efforts to clean up the service, a lawsuit was filed against MindGeek and Visa by victims who asserted that the credit-card company was a knowing conspirator for processing payments to MindGeek’s sites when they were purveying child pornography. Notably, Section 230 issues were dismissed early on in the case, but the remaining claims—rooted in the Racketeer Influenced and Corrupt Organizations Act (RICO) and the Trafficking Victims Protection Act (TVPA)—contained elements that support evaluating the conduct of online intermediaries, including payment providers who support online services, through a reasonableness lens.

In our amicus brief, we stressed the broader policy implications of failing to appropriately demarcate the bounds of liability. In short, deterrence is best encouraged by placing responsibility for control on the party most closely able to monitor the situation—i.e., MindGeek, not Visa. Underlying this is our belief that an appropriately tuned reasonableness standard should be able to foreclose these sorts of inquiries at early stages of litigation when there is good evidence that an intermediary behaved reasonably under the circumstances.

In this case, we believed the court should have taken seriously the fact that a payment processor needs to balance a number of competing demands—legal, economic, and moral—in a way that enables it to serve its necessary prosocial role. Here, Visa had to balance its role as a neutral intermediary responsible for handling millions of daily transactions against its interest in ensuring that it did not facilitate illegal behavior. But it was also operating, essentially, under a veil of ignorance: all of the information it had was derived from news reports, as it was not directly involved in, nor did it have special insight into, the operation of MindGeek’s businesses.

As we stressed in our intermediary-liability paper, there is a valid concern that changes to intermediary-liability policy could invite a flood of ruinous litigation. To guard against this, there needs to be some ability to determine at the early stages of litigation whether a defendant behaved reasonably under the circumstances. In the MindGeek case, we believed that Visa did.

In essence, much of this approach to intermediary liability boils down to finding socially and economically efficient dividing lines that can broadly demarcate when liability should attach. For example, if Visa is liable as a co-conspirator in MindGeek’s allegedly illegal enterprise for providing a payment network that MindGeek uses by virtue of its relationship with yet other intermediaries (i.e., the banks that actually accept and process the credit-card payments), why isn’t the U.S. Post Office also liable for providing package-delivery services that allow MindGeek to operate? Or its maintenance contractor for cleaning and maintaining its offices?

Twitter implicitly engaged in this sort of analysis when it considered becoming an OnlyFans competitor. Despite having considerable resources—both algorithmic and human—Twitter’s internal team determined it could not “accurately detect child sexual exploitation and non-consensual nudity at scale.” As a result, the company abandoned the project. Similarly, Tumblr tried to make many changes, including taking down CSAM hashtags, before finally giving up and removing all pornographic material in order to remain in the App Store for iOS. At root, these firms demonstrated the ability to weigh costs and benefits in ways entirely consistent with a reasonableness analysis.

Thinking about the MindGeek situation again, it could also be the case that MindGeek did not behave reasonably. Some of MindGeek’s sites encouraged the upload of user-generated pornography. If MindGeek faced the same limitations in distinguishing “good” from “bad” pornography (which is likely), it could be that the company behaved recklessly for many years and only tightened its verification procedures once it was caught. If true, that is behavior the law should not protect with a liability shield, as it is patently unreasonable.

Apple is sometimes derided as an unfair gatekeeper of speech through its App Store. But, ironically, Apple itself has had to make complex tradeoffs between protecting user privacy and security through encryption, on the one hand, and scanning devices for CSAM, on the other. Prioritizing encryption over scanning devices (especially photos and messages) for CSAM is a choice that could allow more CSAM to proliferate. But the choice is, again, a difficult one: how much moderation is needed, and how do you balance its costs against other values important to users, such as privacy for the vast majority of nonoffending users?
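For context on what scanning for CSAM typically means in practice, detection of known material is generally done by comparing a hash of each file against a database of hashes of previously verified CSAM, using a perceptual hash such as Microsoft’s PhotoDNA so that resized or re-encoded copies still match. The sketch below is a minimal stand-in for that idea: it assumes a hypothetical hash list and uses an exact cryptographic hash from Python’s standard library in place of a proprietary perceptual hash, and it is not Apple’s or any other vendor’s actual pipeline.

```python
import hashlib
from pathlib import Path

# Hypothetical database of hashes of known, previously verified CSAM.
# Real systems use a perceptual hash (e.g., PhotoDNA) so that altered
# copies still match; SHA-256 here is only a stand-in.
KNOWN_HASHES: set[str] = set()  # would be loaded from a vetted hash list


def content_hash(path: Path) -> str:
    """Return a hex digest for a file (stand-in for a perceptual hash)."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def flag_for_review(path: Path) -> bool:
    """Flag a file whose digest matches a known hash.

    A match only flags the file; a human reviewer (or, in Apple's shelved
    proposal, a threshold of multiple matches) makes the final call.
    """
    return content_hash(path) in KNOWN_HASHES


# Example: scan a hypothetical uploads directory.
for upload in Path("uploads").glob("*"):
    if flag_for_review(upload):
        print(f"flagged for human review: {upload}")
```

End-to-end encryption makes this kind of server-side matching impossible, which is precisely the tradeoff described above: scanning must either move onto the device, raising privacy objections, or not happen at all.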

As always, these issues are complex and involve tradeoffs. But it is obvious that more can and needs to be done by online intermediaries to remove CSAM.

But What Is ‘Reasonable’? And How Do We Get There?

The million-dollar legal question is what counts as “reasonable”? We are aware that, particularly for online platforms that deal with millions of users a day, there is a great deal of surface area exposed to litigation over potentially illicit user-generated conduct. Thus, at least for the foreseeable future, we do not need to throw open the gates to a full-blown common-law process for determining questions of intermediary liability. What is needed, instead, is a phased-in approach that gets courts in the business of parsing these hard questions and building up a body of principles that, on the one hand, encourages platforms to do more to control illicit content on their services and, on the other, discourages unmeritorious lawsuits from the plaintiffs’ bar.

One of our proposals for Section 230 reform is for a multistakeholder body, overseen by an expert agency like the Federal Trade Commission or National Institute of Standards and Technology, to create certified moderation policies. This would involve online intermediaries working together with a convening federal expert agency to develop a set of best practices for removing CSAM, including thinking through the cost-benefit analysis of more moderation—human or algorithmic—or even wholesale removal of nudity and pornographic content.

Compliance with these standards should, in most cases, operate to foreclose litigation against online service providers at an early stage. If such best practices are followed, a defendant could point to its moderation policies as a “certified answer” to any complaint alleging a cause of action arising out of user-generated content. Compliant practices will merit dismissal of the case, effecting a safe harbor similar to the one currently in place in Section 230.

In litigation, after a defendant answers a complaint with its certified moderation policies, the burden would shift to the plaintiff to adduce sufficient evidence to show that the certified standards were not actually adhered to. Such evidence should be more than mere res ipsa loquitur; it must be sufficient to demonstrate that the online service provider should have been aware of a harm or potential harm, that it had the opportunity to cure or prevent it, and that it failed to do so. Such a claim would need to meet a heightened pleading requirement, as for fraud, requiring particularity. And, periodically, the body overseeing the development of this process would incorporate changes to the best practices standards based on the cases being brought in front of courts.

Online service providers don’t need to be perfect in their content-moderation decisions, but they should behave reasonably. A properly designed duty-of-care standard should be flexible and account for a platform’s scale, the nature and size of its user base, and the costs of compliance, among other considerations. What is appropriate for YouTube, Facebook, or Twitter may not be the same as what’s appropriate for a startup social-media site, a web-infrastructure provider, or an e-commerce platform.

Indeed, this sort of flexibility is a benefit of adopting a “reasonableness” standard, such as is found in common-law negligence. Allowing courts to apply the flexible common-law duty of reasonable care would also enable jurisprudence to evolve with the changing nature of online intermediaries, the problems they pose, and the moderating technologies that become available.

Conclusion

Twitter and other online intermediaries continue to struggle with the best approach to removing CSAM, nonconsensual pornography, and a whole host of other illicit content. There are no easy answers, but there are strong ethical reasons, as well as legal and market pressures, to do more. Section 230 reform is just one part of a complete regulatory framework, but it is an important part of getting intermediary liability incentives right. A reasonableness approach that would hold online platforms accountable in a cost-beneficial way is likely to be a key part of a positive reform agenda for Section 230.