Archives For First Amendment

The Senate Judiciary Committee’s Subcommittee on Privacy, Technology, and the Law will host a hearing this afternoon on Gonzalez v. Google, one of two terrorism-related cases currently before the U.S. Supreme Court that implicate Section 230 of the Communications Decency Act of 1996.

We’ve written before about how the Court might and should rule in Gonzalez (see here and here), but less attention has been devoted to the other Section 230 case on the docket: Twitter v. Taamneh. That’s unfortunate, as a thoughtful reading of the dispute at issue in Taamneh could highlight some of the law’s underlying principles. At first blush, alas, it does not appear that the Court is primed to apply that reading.

During the recent oral arguments, the Court considered whether Twitter (and other social-media companies) can be held liable under the Antiterrorism Act for providing a general communications platform that may be used by terrorists. The question under review by the Court is whether Twitter “‘knowingly’ provided substantial assistance [to terrorist groups] under [the statute] merely because it allegedly could have taken more ‘meaningful’ or ‘aggressive’ action to prevent such use.” Plaintiffs’ (respondents before the Court) theory is, essentially, that Twitter aided and abetted terrorism through its inaction.

The oral argument found the justices grappling with where to draw the line between aiding and abetting, on the one hand, and otherwise legal activity that happens to make it somewhat easier for bad actors to engage in illegal conduct, on the other. The nearly three-hour discussion between the justices and the attorneys yielded little in the way of a viable test. But a more concrete focus on the law & economics of collateral liability (which we also describe as “intermediary liability”) would have significantly aided the conversation.

Taamneh presents a complex question of intermediary liability generally, one that goes beyond the bounds of a (relatively) simpler Section 230 analysis. As we discussed in our amicus brief in Fleites v. MindGeek (and as briefly described in this blog post), intermediary liability generally cannot be predicated on the mere existence of harmful or illegal content on an online platform that could, conceivably, have been prevented by some action by the platform or other intermediary.

The specific statute may impose other limits (like the “knowing” requirement in the Antiterrorism Act), but intermediary liability makes sense only when a particular intermediary defendant is positioned to control (and thus remedy) the bad conduct in question, and when imposing liability would cause the intermediary to act in such a way that the benefits of its conduct in deterring harm outweigh the costs of impeding its normal functioning as an intermediary.

Had the Court adopted such an approach in its questioning, it could have better homed in on the proper dividing line between parties whose normal conduct might reasonably give rise to liability (without some heightened effort to mitigate harm) and those whose conduct should not entail this sort of elevated responsibility.

Here, the plaintiffs have framed their case in a way that would essentially collapse this analysis into a strict liability standard by simply asking “did something bad happen on a platform that could have been prevented?” As we discuss below, the plaintiffs’ theory goes too far and would overextend intermediary liability to the point that the social costs would outweigh the benefits of deterrence.

The Law & Economics of Intermediary Liability: Who’s Best Positioned to Monitor and Control?

In our amicus brief in Fleites v. MindGeek (as well as our law review article on Section 230 and intermediary liability), we argued that, in limited circumstances, the law should (and does) place responsibility on intermediaries to monitor and control conduct. It is not always sufficient to aim legal sanctions solely at the parties who commit harms directly—e.g., where harms are committed by many pseudonymous individuals dispersed across large online services. In such cases, social costs may be minimized when legal responsibility is placed upon the least-cost avoider: the party in the best position to limit harm, even if it is not the party directly committing the harm.

Thus, in some circumstances, intermediaries (like Twitter) may be the least-cost avoider, such as when information costs are sufficiently low that effective monitoring and control of end users is possible, and when pseudonymity makes remedies against end users ineffective.

But there are costs to imposing such liability—including, importantly, “collateral censorship” of user-generated content by online social-media platforms. This manifests in platforms acting more defensively—taking down more speech, and generally moving in a direction that would make the Internet less amenable to open, public discussion—in an effort to avoid liability. Indeed, a core reason that Section 230 exists in the first place is to reduce these costs. (Whether Section 230 gets the balance correct is another matter, which we take up at length in our law review article linked above).

From an economic perspective, liability should be imposed on the party or parties best positioned to deter the harms in question, so long as the social costs incurred by, and as a result of, enforcement do not exceed the social gains realized. In other words, there is a delicate balance that must be struck to determine when intermediary liability makes sense in a given case. On the one hand, we want illicit content to be deterred, and on the other, we want to preserve the open nature of the Internet. The costs generated by the over-deterrence of legal, beneficial speech are why intermediary liability for user-generated content can’t be applied on a strict-liability basis, and why some bad content will always exist in the system.
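To make that balance concrete, here is a stylized formalization of the condition described above, in the spirit of the familiar least-cost-avoider framework. The notation is our own shorthand, introduced purely for illustration, not a doctrinal test:

```latex
% Stylized condition for imposing collateral (intermediary) liability.
% H_i = expected harm deterred if intermediary i faces liability
% E_i = intermediary i's costs of monitoring and enforcement
% K_i = collateral costs (e.g., over-removal of legal speech) induced by liability on i
\[
\text{impose liability on intermediary } i
\quad\Longleftrightarrow\quad
H_i \;>\; E_i + K_i,
\]
\[
\text{and, among the parties capable of preventing the harm, prefer the least-cost avoider: }\;
i^{*} = \arg\min_i \,(E_i + K_i).
\]
```

On this view, the plaintiffs’ theory in Taamneh effectively reads the right-hand side of the first inequality out of the analysis.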

The Spectrum of Properly Construed Intermediary Liability: Lessons from Fleites v. Mindgeek

Fleites v. MindGeek well illustrates that the proper application of liability to intermediaries exists on a spectrum. MindGeek—the owner/operator of the website Pornhub—was sued under Racketeer Influenced and Corrupt Organizations Act (RICO) and Victims of Trafficking and Violence Protection Act (TVPA) theories for promoting and profiting from nonconsensual pornography and human trafficking. But the plaintiffs also joined Visa as a defendant, claiming that Visa knowingly provided payment processing for some of Pornhub’s services, making it an aider/abettor.

The “best” defendants, obviously, would be the individuals actually producing the illicit content, but limiting enforcement to direct actors may be insufficient. The statute therefore contemplates bringing enforcement actions against certain intermediaries for aiding and abetting. But there are a host of intermediaries you could theoretically bring into a liability scheme. First, obviously, is Mindgeek, as the platform operator. Plaintiffs felt that Visa was also sufficiently connected to the harm by processing payments for MindGeek users and content posters, and that it should therefore bear liability, as well.

The problem, however, is that there is no limiting principle in the plaintiffs’ theory of the case against Visa. Theoretically, the group of intermediaries “facilitating” the illicit conduct is practically limitless. As we pointed out in our Fleites amicus:

In theory, any sufficiently large firm with a role in the commerce at issue could be deemed liable if all that is required is that its services “allow[]” the alleged principal actors to continue to do business. FedEx, for example, would be liable for continuing to deliver packages to MindGeek’s address. The local waste management company would be liable for continuing to service the building in which MindGeek’s offices are located. And every online search provider and Internet service provider would be liable for continuing to provide service to anyone searching for or viewing legal content on MindGeek’s sites.

Twitter’s attorney in Taamneh, Seth Waxman, made much the same point in responding to Justice Sonia Sotomayor:

…the rule that the 9th Circuit has posited and that the plaintiffs embrace… means that as a matter of course, every time somebody is injured by an act of international terrorism committed, planned, or supported by a foreign terrorist organization, each one of these platforms will be liable in treble damages and so will the telephone companies that provided telephone service, the bus company or the taxi company that allowed the terrorists to move about freely. [emphasis added]

In our Fleites amicus, we argued that a more practical approach is needed, one that tries to draw a sensible line on this liability spectrum. Most importantly, we argued that Visa was not in a position to monitor and control what happened on MindGeek’s platform, and thus was a poor candidate for extending intermediary liability. In that case, because of the complexities of the payment-processing network, Visa had no visibility into what specific content was being purchased, what content was being uploaded to Pornhub, and which individuals may have been uploading illicit content. Worse, the only evidence—if it can be called that—that Visa was aware that anything illicit was happening consisted of news reports in the mainstream media, which may or may not have been accurate, and on which Visa was unable to do any meaningful follow-up investigation.

Our Fleites brief didn’t explicitly consider MindGeek’s potential liability. But MindGeek obviously is in a much better position to monitor and control illicit content. With that said, merely having the ability to monitor and control is not sufficient. Given that content moderation is necessarily an imperfect activity, there will always be some bad content that slips through. Thus, the relevant question is, under the circumstances, did the intermediary act reasonably—e.g., did it comply with best practices—in attempting to identify, remove, and deter illicit content?

In Visa’s case, the answer is not difficult. Given that it had no way to know about or single out transactions as likely to be illegal, its only recourse to reduce harm (and its liability risk) would be to cut off all payment services for Mindgeek. The constraints on perfectly legal conduct that this would entail certainly far outweigh the benefits of reducing illegal activity.

Moreover, such a theory could require Visa to stop processing payments for an enormous swath of legal activity outside of PornHub. For example, purveyors of illegal content on PornHub use ISP services to post their content. A theory of liability that held Visa responsible simply because it plays some small part in facilitating the illegal activity’s existence would presumably also require Visa to shut off payments to ISPs—certainly, that would also curtail the amount of illegal content.

With MindGeek, the answer is a bit more difficult. The anonymous or pseudonymous posting of pornographic content makes it extremely difficult to go after end users. But knowing that human trafficking and nonconsensual pornography are endemic problems, and knowing (as it arguably did) that such content was regularly posted on Pornhub, Mindgeek could be deemed to have acted unreasonably for not having exercised very strict control over its users (at minimum, say, by verifying users’ real identities to facilitate law enforcement against bad actors and deter the posting of illegal content). Indeed, it is worth noting that MindGeek/Pornhub did implement exactly this control, among others, following the public attention arising from news reports of nonconsensual and trafficking-related content on the site. 

But liability for MindGeek is only even plausible given that it might be able to act in such a way that imposes greater burdens on illegal content providers without deterring excessive amounts of legal content. If its only reasonable means of acting would be, say, to shut down PornHub entirely, then just as with Visa, the cost of imposing liability in terms of this “collateral censorship” would surely outweigh the benefits.

Applying the Law & Economics of Collateral Liability to Twitter in Taamneh

Contrast the situation of MindGeek in Fleites with Twitter in Taamneh. Twitter may seem to be a good candidate for intermediary liability. It also has the ability to monitor and control what is posted on its platform. And it faces a similar problem of pseudonymous posting that may make it difficult to go after end users for terrorist activity. But this is not the end of the analysis.

Given that Twitter operates a platform that hosts the general—and overwhelmingly legal—discussions of hundreds of millions of users, posting billions of pieces of content, it would be reasonable to impose a heightened responsibility on Twitter only if it could exercise it without excessively deterring the copious legal content on its platform.

At the same time, Twitter does have active policies to police and remove terrorist content. The relevant question, then, is not whether it should do anything to police such content, but whether a failure to do some unspecified amount more was unreasonable, such that its conduct should constitute aiding and abetting terrorism.

Under the Antiterrorism Act, because the basis of liability is “knowingly providing substantial assistance” to a person who committed an act of international terrorism, “unreasonableness” here would have to mean that the failure to do more transforms its conduct from insubstantial to substantial assistance and/or that the failure to do more constitutes a sort of willful blindness. 

The problem is that doing more—policing its site better and removing more illegal content—would do nothing to alter the extent of assistance it provides to the illegal content that remains. And by the same token, almost by definition, Twitter does not “know” about illegal content it fails to remove. In theory, there is always “more” it could do. But given the inherent imperfections of content moderation at scale, this will always be true, right up to the point that the platform is effectively forced to shut down its service entirely.  

This doesn’t mean that reasonable content moderation couldn’t entail something more than Twitter was doing. But it does mean that the mere existence of illegal content that, in theory, Twitter could have stopped can’t be the basis of liability. And yet the Taamneh plaintiffs make no allegation that acts of terrorism were actually planned on Twitter’s platform, and offer no reasonable basis on which Twitter could have practical knowledge of such activity or practical opportunity to control it.

Nor did plaintiffs point out any examples where Twitter had actual knowledge of such content or users and failed to remove them. Most importantly, the plaintiffs did not demonstrate that any particular content-moderation activities (short of shutting down Twitter entirely) would have resulted in Twitter’s knowledge of or ability to control terrorist activity. Had they done so, it could conceivably constitute a basis for liability. But if the only practical action Twitter can take to avoid liability and prevent harm entails shutting down massive amounts of legal speech, the failure to do so cannot be deemed unreasonable or provide the basis for liability.   

And, again, such a theory of liability would contain no viable limiting principle if it does not consider the practical ability to control harmful conduct without imposing excessively costly collateral damage. Indeed, what in principle would separate a search engine from Twitter, if the search engine linked to an alleged terrorist’s account? Both entities would have access to news reports, and could thus be assumed to have a generalized knowledge that terrorist content might exist on Twitter. The implication of this case, if the plaintiffs’ theory is accepted, is that Google would be forced to delist Twitter whenever a news article appears alleging terrorist activity on the service. Obviously, that is untenable for the same reason it’s not tenable to impose an effective obligation on Twitter to remove all terrorist content: the costs of lost legal speech and activity.

Justice Ketanji Brown Jackson seemingly had the same thought when she pointedly asked whether the plaintiffs’ theory would mean that Linda Hamilton in the Halberstam v. Welch case could have been held liable for aiding and abetting, merely for taking care of Bernard Welch’s kids at home while Welch went out committing burglaries and the murder of Michael Halberstam (instead of the real reason she was held liable, which was for doing Welch’s bookkeeping and helping sell stolen items). As Jackson put it:

…[I]n the Welch case… her taking care of his children [was] assisting him so that he doesn’t have to be at home at night? He’s actually out committing robberies. She would be assisting his… illegal activities, but I understood that what made her liable in this situation is that the assistance that she was providing was… assistance that was directly aimed at the criminal activity. It was not sort of this indirect supporting him so that he can actually engage in the criminal activity.

In sum, the theory propounded by the plaintiffs (and accepted by the 9th U.S. Circuit Court of Appeals) is just too far afield for holding Twitter liable. As Twitter put it in its reply brief, the plaintiffs’ theory (and the 9th Circuit’s holding) is that:

…providers of generally available, generic services can be held responsible for terrorist attacks anywhere in the world that had no specific connection to their offerings, so long as a plaintiff alleges (a) general awareness that terrorist supporters were among the billions who used the services, (b) such use aided the organization’s broader enterprise, though not the specific attack that injured the plaintiffs, and (c) the defendant’s attempts to preclude that use could have been more effective.

Conclusion

If Section 230 immunity isn’t found to apply in Gonzalez v. Google, and the complaint in Taamneh is allowed to go forward, the most likely response of social-media companies will be to reduce the potential for liability by further restricting access to their platforms. This could mean review by some moderator or algorithm of messages or videos before they are posted to ensure that there is no terrorist content. Or it could mean review of users’ profiles before they are able to join the platform to try to ascertain their political leanings or associations with known terrorist groups. Such restrictions would entail copious false positives, along with considerable costs to users and to open Internet speech.
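To see why pre-publication screening necessarily trades over-removal against under-removal, consider the toy sketch below. It is purely illustrative: the `Post` type, the `terrorist_content_score` classifier, and the thresholds are hypothetical stand-ins, not any platform’s actual system. Lowering the threshold blocks more bad content but also more legal speech; raising it preserves speech but lets more bad content through.

```python
# Toy illustration of pre-publication screening and its error tradeoff.
# Nothing here reflects any real platform's system; the classifier is a
# hypothetical stand-in for a trained model that scores content risk.
from dataclasses import dataclass


@dataclass
class Post:
    author: str
    text: str


def terrorist_content_score(post: Post) -> float:
    """Hypothetical classifier returning a risk score in [0, 1]."""
    # A crude keyword heuristic, purely for illustration.
    return 0.6 if "attack" in post.text.lower() else 0.05


def may_publish(post: Post, threshold: float) -> bool:
    """Hold back any post whose risk score meets or exceeds the threshold."""
    return terrorist_content_score(post) < threshold


posts = [
    Post("user1", "Our attack plan for Saturday's chess tournament"),  # legal speech
    Post("user2", "Photos from my vacation in Paris"),                 # legal speech
]

# A strict threshold (0.5) wrongly blocks the chess post (a false positive);
# a lenient threshold (0.95) publishes everything, including content that a
# determined bad actor words carefully to evade the classifier.
for threshold in (0.5, 0.95):
    decisions = {p.author: may_publish(p, threshold) for p in posts}
    print(f"threshold={threshold}: {decisions}")
```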

And in the end, some amount of terrorist content would still get through. If the plaintiffs’ theory leads to liability in Taamneh, it’s hard to see how the same principle wouldn’t entail liability even under these theoretical, heightened practices. Absent a focus on an intermediary defendant’s ability to control harmful content or conduct, without imposing excessive costs on legal content or conduct, the theory of liability has no viable limit.

In sum, to hold Twitter (or other social-media platforms) liable under the facts presented in Taamneh would stretch intermediary liability far beyond its sensible bounds. While Twitter may seem a good candidate for intermediary liability in theory, it should not be held liable for, in effect, simply providing its services.

Perhaps Section 230’s blanket immunity is excessive. Perhaps there is a proper standard that could impose liability on online intermediaries for user-generated content in limited circumstances properly tied to their ability to control harmful actors and the costs of doing so. But liability in the circumstances suggested by the Taamneh plaintiffs—effectively amounting to strict liability—would be an even bigger mistake in the opposite direction.

It seems that large language models (LLMs) are all the rage right now, from Bing’s announcement that it plans to integrate the ChatGPT technology into its search engine to Google’s announcement of its own LLM called “Bard” to Meta’s recent introduction of its Large Language Model Meta AI, or “LLaMA.” Each of these LLMs uses artificial intelligence (AI) to create text-based answers to questions.

But it certainly didn’t take long after these innovative new applications were introduced for reports to emerge of LLMs just plain getting facts wrong. Given this, it is worth asking: how will the law deal with AI-created misinformation?

Among the first questions courts will need to grapple with is whether Section 230 immunity applies to content produced by AI. Of course, the U.S. Supreme Court already has a major Section 230 case on its docket with Gonzalez v. Google. Indeed, during oral arguments for that case, Justice Neil Gorsuch appeared to suggest that AI-generated content would not receive Section 230 immunity. And existing case law would appear to support that conclusion, as LLM content is developed by the interactive computer service itself, not by its users.

Another question raised by the technology is what legal avenues would be available to those seeking to challenge the misinformation. Under the First Amendment, the government can only regulate false speech under very limited circumstances. One of those is defamation, which seems like the most logical cause of action to apply. But under defamation law, plaintiffs—especially public figures, who are the most likely litigants and who must prove “malice”—may have a difficult time proving the AI acted with the necessary state of mind to sustain a cause of action.

Section 230 Likely Does Not Apply to Information Developed by an LLM

Section 230(c)(1) states:

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

The law defines an interactive computer service as “any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server, including specifically a service or system that provides access to the Internet and such systems operated or services offered by libraries or educational institutions.”

The access software provider portion of that definition includes any tool that can “filter, screen, allow, or disallow content; pick, choose, analyze, or digest content; or transmit, receive, display, forward, cache, search, subset, organize, reorganize, or translate content.”

And finally, an information content provider is “any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service.”

Taken together, Section 230(c)(1) gives online platforms (“interactive computer services”) broad immunity for user-generated content (“information provided by another information content provider”). This even covers circumstances where the online platform (acting as an “access software provider”) engages in a great deal of curation of the user-generated content.

Section 230(c)(1) does not, however, protect information created by the interactive computer service itself.

There is case law to help determine whether content is created or developed by the interactive computer service. Online platforms applying “neutral tools” to help organize information have not lost immunity. As the 9th U.S. Circuit Court of Appeals put it in Fair Housing Council v. Roommates.com:

Providing neutral tools for navigating websites is fully protected by CDA immunity, absent substantial affirmative conduct on the part of the website creator promoting the use of such tools for unlawful purposes.

On the other hand, online platforms are liable for content they create or develop, which does not include “augmenting the content generally,” but does include “materially contributing to its alleged unlawfulness.” 

The question here is whether the text-based answers provided by LLM apps like Bing’s Sydney or Google’s Bard comprise content created or developed by those online platforms. One could argue that LLMs are neutral tools simply rearranging information from other sources for display. It seems clear, however, that the LLM is synthesizing information to create new content. The use of AI to answer a question, rather than a human agent of Google or Microsoft, doesn’t seem relevant to whether or not it was created or developed by those companies. (Though, as Matt Perault notes, how LLMs are integrated into a product matters. If an LLM just helps “determine which search results to prioritize or which text to highlight from underlying search results,” then it may receive Section 230 protection.)

The technology itself gives text-based answers based on inputs from the questioner. LLMs use AI engines, trained on troves of data from the internet, to guess the next word in a sequence. While the information may come from third parties, the creation of the content itself is due to the LLM. ChatGPT itself said as much in response to my query.
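For readers unfamiliar with the mechanics, the sketch below shows what that next-word guessing looks like in practice. It uses the open-source GPT-2 model via the Hugging Face transformers library purely as an illustrative stand-in; the commercial systems discussed above (Bard, Sydney, ChatGPT) are far larger, but the basic operation of generating a continuation token by token is the same.

```python
# Minimal illustration of next-token generation, the core operation behind LLMs.
# GPT-2 is used here only as a small, openly available stand-in for the much
# larger models that power commercial chatbots.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Section 230 of the Communications Decency Act"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

with torch.no_grad():
    # Greedily append 20 new tokens, each chosen from the model's probability
    # distribution over its vocabulary given everything generated so far.
    output_ids = model.generate(input_ids, max_new_tokens=20, do_sample=False)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

The continuation is synthesized from the model’s learned parameters rather than retrieved verbatim from any particular third party’s post, which is why treating the output as “information provided by another information content provider” is such an awkward fit.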

Proving Defamation by AI

In the absence of Section 230 immunity, there is still the question of how one could hold Google’s Bard or Microsoft’s Sydney accountable for purveying misinformation. There are no laws against false speech in general, nor can there be, since the Supreme Court declared such speech was protected in United States v. Alvarez. There are, however, categories of false speech, like defamation and fraud, which have been found to lie outside of First Amendment protection.

Defamation is the most logical cause of action that could be brought over false information provided by an LLM app. But people who have not received significant public recognition are highly unlikely to be known to these LLM apps (believe me, I tried to get ChatGPT to tell me something about myself—alas, I’m not famous enough). On top of that, those most likely to suffer significant damages from having their reputations harmed by falsehoods spread online are those who are in the public eye. This means that, for the purposes of a defamation suit, it is public figures who are most likely to sue.

As an example, if ChatGPT answers the question of whether Johnny Depp is a wife-beater by saying that he is, contrary to one court’s finding (but consistent with another’s), Depp could sue the creators of the service for defamation. He would have to prove that a false statement was publicized to a third party that resulted in damages to him. For the sake of argument, let’s say he can do both. The case still isn’t proven because, as a public figure, he would also have to prove “actual malice.”

Under New York Times v. Sullivan and its progeny, a public figure must prove the defendant acted with “actual malice” when publicizing false information about the plaintiff. Actual malice is defined as “knowledge that [the statement] was false or with reckless disregard of whether it was false or not.”

The question arises whether actual malice can be attributed to an LLM. It seems unlikely that it could be said that the AI’s creators trained it in a way that they “knew” the answers provided would be false. But it may be a more interesting question whether the LLM is giving answers with “reckless disregard” of their truth or falsity. One could argue that these early versions of the technology are exactly that, but the underlying AI is likely to improve over time with feedback. The best time for a plaintiff to sue may be now, while the LLMs are still in their infancy and giving false answers more often.

It is possible that, given enough context in the query, LLM-empowered apps may be able to recognize private figures and get things wrong. For instance, when I asked ChatGPT to give a biography of myself, I got no results.

When I added my workplace, I did get a biography, but none of the relevant information was about me. It was instead about my boss, Geoffrey Manne, the president of the International Center for Law & Economics.

While none of this biography is true, it doesn’t harm my reputation, nor does it give rise to damages. But it is at least theoretically possible that an LLM could make a defamatory statement against a private person. In such a case, a lower burden of proof would apply to the plaintiff, that of negligence, i.e., that the defendant published a false statement of fact that a reasonable person would have known was false. This burden would be much easier to meet if the AI had not been sufficiently trained before being released upon the public.

Conclusion

While it is unlikely that a service like ChatGPT would receive Section 230 immunity, it also seems unlikely that a plaintiff would be able to sustain a defamation suit against it for false statements. The most likely plaintiffs (public figures) would encounter difficulty proving the necessary element of “actual malice.” The best chance for a lawsuit to proceed may be against the early versions of this service—rolled out quickly and to much fanfare, while still in a beta stage in terms of accuracy—as a colorable argument can be made that they are giving false answers in “reckless disregard” of their truthfulness.

Late next month, the U.S. Supreme Court will hear oral arguments in Gonzalez v. Google LLC, a case that has drawn significant attention and many bad takes regarding how Section 230 of the Communications Decency Act should be interpreted. Enacted in the mid-1990s, when the Internet as we know it was still in its infancy, Section 230 has grown into a law that offers online platforms a fairly comprehensive shield against liability for the content that third parties post to their services. But the law has also come increasingly under fire, from both the political left and the right.

At issue in Gonzalez is whether Section 230(c)(1) immunizes Google from a set of claims brought under the Antiterrorism Act of 1990 (ATA). The petitioners are relatives of Nohemi Gonzalez, an American citizen murdered in a 2015 terrorist attack in Paris. They allege that Google, through YouTube, is liable under the ATA for providing assistance to ISIS, for four main reasons:

  1. Google allowed ISIS to use YouTube to disseminate videos and messages, thereby recruiting and radicalizing terrorists responsible for the murder.
  2. Google failed to take adequate steps to take down videos and accounts and keep them down.
  3. Google recommends videos of others, both through subscriptions and algorithms.
  4. Google monetizes this content through its AdSense service, with ISIS-affiliated users receiving revenue. 

The 9th U.S. Circuit Court of Appeals dismissed all of the non-revenue-sharing claims as barred by Section 230(c)(1), but allowed the revenue-sharing claim to go forward. 

Highlights of DOJ’s Brief

In an amicus brief, the U.S. Justice Department (DOJ) ultimately asks the Court to vacate the 9th Circuit’s judgment regarding those claims that are based on YouTube’s alleged targeted recommendations of ISIS content. But the DOJ also rejects much of the petitioners’ brief, arguing that Section 230 does rightfully apply to the rest of the claims.

The crux of the DOJ’s brief concerns when and how design choices can be outside of Section 230 immunity. The lodestar 9th Circuit case that the DOJ brief applies is 2008’s Fair Housing Council of San Fernando Valley v. Roommates.com.

As the DOJ notes, radical theories advanced by the plaintiffs and other amici would go too far in restricting Section 230 immunity based on a platform’s decisions on whether or not to block or remove user content (see, e.g., its discussion on pp. 17-21 of the merits and demerits of Justice Clarence Thomas’s Malwarebytes concurrence).  

At the same time, the DOJ’s brief notes that there is room for a reasonable interpretation of Section 230 that allows for liability to attach when online platforms behave unreasonably in their promotion of users’ content. Applying essentially the 9th Circuit’s Roommates.com standard, the DOJ argues that YouTube’s choice to amplify certain terrorist content through its recommendations algorithm is a design choice, rather than simply the hosting of third-party content, thereby removing it from the scope of  Section 230 immunity.  

While there is much to be said in favor of this approach, it’s important to point out that, although directionally correct, it’s not at all clear that a Roommates.com analysis should ultimately come down as the DOJ recommends in Gonzalez. More broadly, the way the DOJ structures its analysis has important implications for how we should think about the scope of Section 230 reform that attempts to balance accountability for intermediaries with avoiding undue collateral censorship.

Charting a Middle Course on Immunity

The important point on which the DOJ relies from Roommates.com is that intermediaries can be held accountable when their own conduct creates violations of the law, even if it involves third-party content. As the DOJ brief puts it:

Section 230(c)(1) protects an online platform from claims premised on its dissemination of third-party speech, but the statute does not immunize a platform’s other conduct, even if that conduct involves the solicitation or presentation of third-party content. The Ninth Circuit’s Roommates.com decision illustrates the point in the context of a website offering a roommate-matching service… As a condition of using the service, Roommates.com “require[d] each subscriber to disclose his sex, sexual orientation and whether he would bring children to a household,” and to “describe his preferences in roommates with respect to the same three criteria.” Ibid. The plaintiffs alleged that asking those questions violated housing-discrimination laws, and the court of appeals agreed that Section 230(c)(1) did not shield Roommates.com from liability for its “own acts” of “posting the questionnaire and requiring answers to it.” Id. at 1165.

Imposing liability in such circumstances does not treat online platforms as the publishers or speakers of content provided by others. Nor does it obligate them to monitor their platforms to detect objectionable postings, or compel them to choose between “suppressing controversial speech or sustaining prohibitive liability.”… Illustrating that distinction, the Roommates.com court held that although Section 230(c)(1) did not apply to the website’s discriminatory questions, it did shield the website from liability for any discriminatory third-party content that users unilaterally chose to post on the site’s “generic” “Additional Comments” section…

The DOJ proceeds from this basis to analyze what it would take for Google (via YouTube) to no longer benefit from Section 230 immunity by virtue of its own editorial actions, as opposed to its actions as a publisher (which 230 would still protect). For instance, are the algorithmic suggestions of videos simply neutral tools that allow for users to get more of the content they desire, akin to search results? Or are the algorithmic suggestions of new videos a design choice that makes it akin to Roommates?

The DOJ argues that taking steps to better display pre-existing content is not content development or creation, in and of itself. Similarly, it would be a mistake to make intermediaries liable for creating tools that can then be deployed by users:

Interactive websites invariably provide tools that enable users to create, and other users to find and engage with, information. A chatroom might supply topic headings to organize posts; a photo-sharing site might offer a feature for users to signal that they like or dislike a post; a classifieds website might enable users to add photos or maps to their listings. If such features rendered the website a co-developer of all users’ content, Section 230(c)(1) would be a dead letter.

At a high level, this is correct. Unfortunately, the DOJ argument then moves onto thinner ice. The DOJ believes that the 230 liability shield in Gonzalez depends on whether an automated “recommendation” rises to the level of development or creation, as the design of filtering criteria in Roommates.com did. Toward this end, the brief notes that:

The distinction between a recommendation and the recommended content is particularly clear when the recommendation is explicit. If YouTube had placed a selected ISIS video on a user’s homepage alongside a message stating, “You should watch this,” that message would fall outside Section 230(c)(1). Encouraging a user to watch a selected video is conduct distinct from the video’s publication (i.e., hosting). And while YouTube would be the “publisher” of the recommendation message itself, that message would not be “information provided by another information content provider.” 47 U.S.C. 230(c)(1).

An Absence of Immunity Does Not Mean a Presence of Liability

Importantly, the DOJ brief emphasizes throughout that remanding the ATA claims is not the end of the analysis—i.e., it does not mean that the plaintiffs can prove the elements. Moreover, other background law—notably, the First Amendment—can limit the application of liability to intermediaries, as well. As we put it in our paper on Section 230 reform:

It is important to again note that our reasonableness proposal doesn’t change the fact that the underlying elements in any cause of action still need to be proven. It is those underlying laws, whether civil or criminal, that would possibly hold intermediaries liable without Section 230 immunity. Thus, for example, those who complain that FOSTA/SESTA harmed sex workers by foreclosing a safe way for them to transact (illegal) business should really be focused on the underlying laws that make sex work illegal, not the exception to Section 230 immunity that FOSTA/SESTA represents. By the same token, those who assert that Section 230 improperly immunizes “conservative bias” or “misinformation” fail to recognize that, because neither of those is actually illegal (nor could they be under First Amendment law), Section 230 offers no additional immunity from liability for such conduct: There is no underlying liability from which to provide immunity in the first place.

There’s a strong likelihood that, on remand, the court will find there is no violation of the ATA at all. Section 230 immunity need not be stretched beyond all reasonable limits to protect intermediaries from hypothetical harms when underlying laws often don’t apply. 

Conclusion

To date, the contours of Section 230 reform largely have been determined by how courts interpret the statute. There is an emerging consensus that some courts have gone too far in extending Section 230 immunity to intermediaries. The DOJ’s brief is directionally correct, but the Court should not adopt it wholesale. More needs to be done to ensure that the particular facts of Gonzalez are not used to completely gut Section 230 more generally.  

In an expected decision (but with a somewhat unexpected coalition), the U.S. Supreme Court has moved 5 to 4 to vacate an order issued early last month by the 5th U.S. Circuit Court of Appeals, which stayed an earlier December 2021 order from the U.S. District Court for the Western District of Texas enjoining Texas’ attorney general from enforcing the state’s recently enacted social-media law, H.B. 20. The law would bar social-media platforms with more than 50 million active users from engaging in “censorship” based on political viewpoint. 

The shadow-docket order serves to reinstate the preliminary injunction that NetChoice and the Computer & Communications Industry Association had sought and won to block the law—which they argue is facially unconstitutional—from taking effect. The trade groups also are challenging a similar Florida law, which the 11th U.S. Circuit Court of Appeals last week ruled was “substantially likely” to violate the First Amendment. Both state laws will thus remain blocked while challenges on the merits proceed.

But the element of the Supreme Court’s order drawing the most initial interest is the “strange bedfellows” breakdown that produced it. Chief Justice John Roberts was joined by conservative Justices Brett Kavanaugh and Amy Coney Barrett and liberals Stephen Breyer and Sonia Sotomayor in moving to vacate the 5th Circuit’s stay. Meanwhile, Justice Samuel Alito wrote a dissent that was joined by fellow conservatives Clarence Thomas and Neil Gorsuch, and liberal Justice Elena Kagan also dissented without offering a written justification.

A glance at the recent history, however, reveals why it should not be all that surprising that the justices would not come down along predictable partisan lines. Indeed, when it comes to content moderation and the question of whether to designate platforms as “common carriers,” the one undeniably predictable outcome is that both liberals and conservatives have been remarkably inconsistent.

Both Sides Flip Flop on Common Carriage

Ever since Justice Thomas used his concurrence in 2021’s Biden v. Knight First Amendment Institute to lay out a blueprint for how states could regulate social-media companies as common carriers, states led by conservatives have been working to pass bills to restrict the ability of social media companies to “censor.” 

Forcing common carriage on the Internet was, not long ago, something conservatives opposed. It was progressives who called net neutrality the “21st Century First Amendment.” The actual First Amendment, however, protects the rights of both Internet service providers (ISPs) and social-media companies to decide the rules of the road on their own platforms.

Back in the heady days of 2014, when the Federal Communications Commission (FCC) was still planning its next moves on net neutrality after losing at the U.S. Court of Appeals for the D.C. Circuit the first time around, Geoffrey Manne and I at the International Center for Law & Economics teamed with Berin Szoka and Tom Struble of TechFreedom to write a piece for the First Amendment Law Review arguing that there was no exception that would render broadband ISPs “state actors” subject to the First Amendment. Further, we argued that the right to editorial discretion meant that net-neutrality regulations would be subject to (and likely fail) First Amendment scrutiny under Tornillo or Turner.

After the FCC moved to reclassify broadband as a Title II common carrier in 2015, then-Judge Kavanaugh of the D.C. Circuit dissented from the denial of en banc review, in part on First Amendment grounds. He argued that “the First Amendment bars the Government from restricting the editorial discretion of Internet service providers, absent a showing that an Internet service provider possesses market power in a relevant geographic market.” In fact, Kavanaugh went so far as to link the interests of ISPs and Big Tech (and even traditional media), stating:

If market power need not be shown, the Government could regulate the editorial decisions of Facebook and Google, of MSNBC and Fox, of NYTimes.com and WSJ.com, of YouTube and Twitter. Can the Government really force Facebook and Google and all of those other entities to operate as common carriers? Can the Government really impose forced-carriage or equal-access obligations on YouTube and Twitter? If the Government’s theory in this case were accepted, then the answers would be yes. After all, if the Government could force Internet service providers to carry unwanted content even absent a showing of market power, then it could do the same to all those other entities as well. There is no principled distinction between this case and those hypothetical cases.

This was not a controversial view among free-market, right-of-center types at the time.

An interesting shift started to occur during the presidency of Donald Trump, however, as tensions between social-media companies and many on the right came to a head. Instead of seeing these companies as private actors with strong First Amendment rights, some conservatives began looking either for ways to apply the First Amendment to them directly as “state actors” or to craft regulations that would essentially make social-media companies into common carriers with regard to speech.

But Kavanaugh’s opinion in USTelecom remains the best way forward to understand how the First Amendment applies online today, whether regarding net neutrality or social-media regulation. Given Justice Alito’s view, expressed in his dissent, that it “is not at all obvious how our existing precedents, which predate the age of the internet, should apply to large social media companies,” it is a fair bet that laws like those passed by Texas and Florida will get a hearing before the Court in the not-distant future. If Justice Kavanaugh’s opinion has sway among the conservative bloc of the Supreme Court, or is able to peel off justices from the liberal bloc, the Texas law and others like it (as well as net-neutrality regulations) will be struck down as First Amendment violations.

Kavanaugh’s USTelecom Dissent

In then-Judge Kavanaugh’s dissent, he highlighted two reasons he believed the FCC’s reclassification of broadband as Title II was unlawful. The first was that the reclassification decision was a “major question” that required clear authority delegated by Congress. The second, more important point was that the FCC’s reclassification decision was subject to the Turner standard. Under that standard, since the FCC did not engage—at the very least—in a market-power analysis, the rules could not stand, as they amounted to mandated speech.

The interesting part of this opinion is that it tracks very closely to the analysis of common-carriage requirements for social-media companies. Kavanaugh’s opinion offered important insights into:

  1. the applicability of the First Amendment right to editorial discretion to common carriers;
  2. the “use it or lose it” nature of this right;
  3. whether Turner’s protections depended on scarcity; and 
  4. what would be required to satisfy Turner scrutiny.

Common Carriage and First Amendment Protection

Kavanaugh found unequivocally that common carriers, such as ISPs classified under Title II, were subject to First Amendment protection under the Turner decisions:

The Court’s ultimate conclusion on that threshold First Amendment point was not obvious beforehand. One could have imagined the Court saying that cable operators merely operate the transmission pipes and are not traditional editors. One could have imagined the Court comparing cable operators to electricity providers, trucking companies, and railroads – all entities subject to traditional economic regulation. But that was not the analytical path charted by the Turner Broadcasting Court. Instead, the Court analogized the cable operators to the publishers, pamphleteers, and bookstore owners traditionally protected by the First Amendment. As Turner Broadcasting concluded, the First Amendment’s basic principles “do not vary when a new and different medium for communication appears” – although there of course can be some differences in how the ultimate First Amendment analysis plays out depending on the nature of (and competition in) a particular communications market. Brown v. Entertainment Merchants Association, 564 U.S. 786, 790 (2011) (internal quotation mark omitted).

Here, of course, we deal with Internet service providers, not cable television operators. But Internet service providers and cable operators perform the same kinds of functions in their respective networks. Just like cable operators, Internet service providers deliver content to consumers. Internet service providers may not necessarily generate much content of their own, but they may decide what content they will transmit, just as cable operators decide what content they will transmit. Deciding whether and how to transmit ESPN and deciding whether and how to transmit ESPN.com are not meaningfully different for First Amendment purposes.

Indeed, some of the same entities that provide cable television service – colloquially known as cable companies – provide Internet access over the very same wires. If those entities receive First Amendment protection when they transmit television stations and networks, they likewise receive First Amendment protection when they transmit Internet content. It would be entirely illogical to conclude otherwise. In short, Internet service providers enjoy First Amendment protection of their rights to speak and exercise editorial discretion, just as cable operators do.

‘Use It or Lose It’ Right to Editorial Discretion

Kavanaugh questioned whether the First Amendment right to editorial discretion depends, to some degree, on how much the entity used the right. Ultimately, he rejected the idea forwarded by the FCC that, since ISPs don’t restrict access to any sites, they were essentially holding themselves out to be common carriers:

I find that argument mystifying. The FCC’s “use it or lose it” theory of First Amendment rights finds no support in the Constitution or precedent. The FCC’s theory is circular, in essence saying: “They have no First Amendment rights because they have not been regularly exercising any First Amendment rights and therefore they have no First Amendment rights.” It may be true that some, many, or even most Internet service providers have chosen not to exercise much editorial discretion, and instead have decided to allow most or all Internet content to be transmitted on an equal basis. But that “carry all comers” decision itself is an exercise of editorial discretion. Moreover, the fact that the Internet service providers have not been aggressively exercising their editorial discretion does not mean that they have no right to exercise their editorial discretion. That would be akin to arguing that people lose the right to vote if they sit out a few elections. Or citizens lose the right to protest if they have not protested before. Or a bookstore loses the right to display its favored books if it has not done so recently. That is not how constitutional rights work. The FCC’s “use it or lose it” theory is wholly foreign to the First Amendment.

Employing a similar logic, Kavanaugh also rejected the notion that net-neutrality rules were essentially voluntary, given that ISPs held themselves out as carrying all content.

Relatedly, the FCC claims that, under the net neutrality rule, an Internet service provider supposedly may opt out of the rule by choosing to carry only some Internet content. But even under the FCC’s description of the rule, an Internet service provider that chooses to carry most or all content still is not allowed to favor some content over other content when it comes to price, speed, and availability. That half-baked regulatory approach is just as foreign to the First Amendment. If a bookstore (or Amazon) decides to carry all books, may the Government then force the bookstore (or Amazon) to feature and promote all books in the same manner? If a newsstand carries all newspapers, may the Government force the newsstand to display all newspapers in the same way? May the Government force the newsstand to price them all equally? Of course not. There is no such theory of the First Amendment. Here, either Internet service providers have a right to exercise editorial discretion, or they do not. If they have a right to exercise editorial discretion, the choice of whether and how to exercise that editorial discretion is up to them, not up to the Government.

Think about what the FCC is saying: Under the rule, you supposedly can exercise your editorial discretion to refuse to carry some Internet content. But if you choose to carry most or all Internet content, you cannot exercise your editorial discretion to favor some content over other content. What First Amendment case or principle supports that theory? Crickets.

In a footnote, Kavanaugh continued to lambast the theory of “voluntary regulation” forwarded by the concurrence, stating:

The concurrence in the denial of rehearing en banc seems to suggest that the net neutrality rule is voluntary. According to the concurrence, Internet service providers may comply with the net neutrality rule if they want to comply, but can choose not to comply if they do not want to comply. To the concurring judges, net neutrality merely means “if you say it, do it.”…. If that description were really true, the net neutrality rule would be a simple prohibition against false advertising. But that does not appear to be an accurate description of the rule… It would be strange indeed if all of the controversy were over a “rule” that is in fact entirely voluntary and merely proscribes false advertising. In any event, I tend to doubt that Internet service providers can now simply say that they will choose not to comply with any aspects of the net neutrality rule and be done with it. But if that is what the concurrence means to say, that would of course avoid any First Amendment problem: To state the obvious, a supposed “rule” that actually imposes no mandates or prohibitions and need not be followed would not raise a First Amendment issue.

Scarcity and Capacity to Carry Content

The FCC had also argued that there was a difference between ISPs and the cable companies in Turner in that ISPs did not face decisions about scarcity in content carriage. But Kavanaugh rejected this theory as inconsistent with the First Amendment’s right not to be compelled to carry a message or speech.

That argument, too, makes little sense as a matter of basic First Amendment law. First Amendment protection does not go away simply because you have a large communications platform. A large bookstore has the same right to exercise editorial discretion as a small bookstore. Suppose Amazon has capacity to sell every book currently in publication and therefore does not face the scarcity of space that a bookstore does. Could the Government therefore force Amazon to sell, feature, and promote every book on an equal basis, and prohibit Amazon from promoting or recommending particular books or authors? Of course not. And there is no reason for a different result here. Put simply, the Internet’s technological architecture may mean that Internet service providers can provide unlimited content; it does not mean that they must.

Keep in mind, moreover, why that is so. The First Amendment affords editors and speakers the right not to speak and not to carry or favor unwanted speech of others, at least absent sufficient governmental justification for infringing on that right… That foundational principle packs at least as much punch when you have room on your platform to carry a lot of speakers as it does when you have room on your platform to carry only a few speakers.

Turner Scrutiny and Bottleneck Market Power

Finally, Kavanaugh applied Turner scrutiny and found that, at the very least, it requires a finding of “bottleneck market power” that would allow ISPs to harm consumers. 

At the time of the Turner Broadcasting decisions, cable operators exercised monopoly power in the local cable television markets. That monopoly power afforded cable operators the ability to unfairly disadvantage certain broadcast stations and networks. In the absence of a competitive market, a broadcast station had few places to turn when a cable operator declined to carry it. Without Government intervention, cable operators could have disfavored certain broadcasters and indeed forced some broadcasters out of the market altogether. That would diminish the content available to consumers. The Supreme Court concluded that the cable operators’ market-distorting monopoly power justified Government intervention. Because of the cable operators’ monopoly power, the Court ultimately upheld the must-carry statute…

The problem for the FCC in this case is that here, unlike in Turner Broadcasting, the FCC has not shown that Internet service providers possess market power in a relevant geographic market… 

Rather than addressing any problem of market power, the net neutrality rule instead compels private Internet service providers to supply an open platform for all would-be Internet speakers, and thereby diversify and increase the number of voices available on the Internet. The rule forcibly reduces the relative voices of some Internet service and content providers and enhances the relative voices of other Internet content providers.

But except in rare circumstances, the First Amendment does not allow the Government to regulate the content choices of private editors just so that the Government may enhance certain voices and alter the content available to the citizenry… Turner Broadcasting did not allow the Government to satisfy intermediate scrutiny merely by asserting an interest in diversifying or increasing the number of speakers available on cable systems. After all, if that interest sufficed to uphold must-carry regulation without a showing of market power, the Turner Broadcasting litigation would have unfolded much differently. The Supreme Court would have had little or no need to determine whether the cable operators had market power. But the Supreme Court emphasized and relied on the Government’s market power showing when the Court upheld the must-carry requirements… To be sure, the interests in diversifying and increasing content are important governmental interests in the abstract, according to the Supreme Court. But absent some market dysfunction, Government regulation of the content carriage decisions of communications service providers is not essential to furthering those interests, as is required to satisfy intermediate scrutiny.

In other words, without a finding of bottleneck market power, there would be no basis for satisfying the government interest prong of Turner.

Applying Kavanaugh’s Dissent to NetChoice v. Paxton

Interestingly, each of these main points arises in the debate over regulating social-media companies as common carriers. Texas’ H.B. 20 attempts to do exactly that, which is at the heart of the litigation in NetChoice v. Paxton.

Common Carriage and First Amendment Protection

To the first point, Texas attempts to claim in its briefs that social-media companies are common carriers subject to lesser First Amendment protection: “Assuming the platforms’ refusals to serve certain customers implicated First Amendment rights, Texas has properly denominated the platforms common carriers. Imposing common-carriage requirements on a business does not offend the First Amendment.”

But much like the cable operators before them in Turner, social-media companies are not simply carriers of persons or things like the classic examples of railroads, telegraphs, and telephones. As TechFreedom put it in its brief: “As its name suggests… ‘common carriage’ is about offering, to the public at large  and on indiscriminate terms, to carry generic stuff from point A to point B. Social media websites fulfill none of these elements.”

In a sense, it’s even clearer that social-media companies are not common carriers than it was in the case of ISPs, because social-media platforms have always had terms of service that limit what can be said and that even allow the platforms to remove users for violations. All social-media platforms curate content for users in ways that ISPs normally do not.

‘Use It or Lose It’ Right to Editorial Discretion

Just as the FCC did in the Title II context, Texas also presses the idea that social-media companies gave up their right to editorial discretion by disclaiming the choice to exercise it, stating: “While the platforms compare their business policies to classic examples of First Amendment speech, such as a newspaper’s decision to include an article in its pages, the platforms have disclaimed any such status over many years and in countless cases. This Court should not accept the platforms’ good-for-this-case-only characterization of their businesses.” Pointing primarily to cases where social-media companies have invoked Section 230 immunity as a defense, Texas argues they have essentially lost the right to editorial discretion.

This, again, flies in the face of First Amendment jurisprudence, as Kavanaugh earlier explained. Moreover, the idea that social-media companies have disclaimed editorial discretion due to Section 230 is inconsistent with what that law actually does. Section 230 allows social-media companies to engage in as much or as little content moderation as they so choose by holding the third-party speakers accountable rather than the platform. Social-media companies do not relinquish their First Amendment rights to editorial discretion because they assert an applicable defense under the law. Moreover, social-media companies have long had rules delineating permissible speech, and they enforce those rules actively.

Interestingly, there has also been an analogue to the idea forwarded in USTelecom that the law’s First Amendment burdens are relatively limited. As noted above, then-Judge Kavanaugh rejected the idea forwarded by the concurrence that net-neutrality rules were essentially voluntary. In the case of H.B. 20, the bill’s original sponsor recently argued on Twitter that the Texas law essentially incorporates Section 230 by reference. If this is true, then the rules would be as pointless as the net-neutrality rules would have been, because social-media companies would be free under Section 230(c)(2) to remove “otherwise objectionable” material under the Texas law.

Scarcity and Capacity to Carry Content

In an earlier brief to the 5th Circuit, Texas attempted to differentiate social-media companies from the cable operators in Turner by arguing there was no necessary conflict between speakers, stating that “[HB 20] does not, for example, pit one group of speakers against another.” But this is just a different way of saying that, since social-media companies don’t face scarcity in their technical capacity to carry speech, they can be required to carry all speech. This is inconsistent with the right Kavanaugh identified not to carry a message or speech, which is not subject to an exception that depends on the platform’s capacity to carry more speech.

Turner Scrutiny and Bottleneck Market Power

Finally, Judge Kavanaugh’s application of Turner to ISPs makes clear that a showing of bottleneck market power is necessary before common-carriage regulation may be applied to social-media companies. In fact, Kavanaugh used a comparison to social-media sites and broadcasters as a reductio ad absurdum for the idea that one could regulate ISPs without a showing of market power. As he put it there:

Consider the implications if the law were otherwise. If market power need not be shown, the Government could regulate the editorial decisions of Facebook and Google, of MSNBC and Fox, of NYTimes.com and WSJ.com, of YouTube and Twitter. Can the Government really force Facebook and Google and all of those other entities to operate as common carriers? Can the Government really impose forced-carriage or equal-access obligations on YouTube and Twitter? If the Government’s theory in this case were accepted, then the answers would be yes. After all, if the Government could force Internet service providers to carry unwanted content even absent a showing of market power, then it could do the same to all those other entities as well. There is no principled distinction between this case and those hypothetical cases.

Much like the FCC with its Open Internet Order, Texas did not make a finding of bottleneck market power in H.B. 20. Instead, Texas basically asked for the opportunity to get to discovery to develop the case that social-media platforms have market power, stating that “[b]ecause the District Court sharply limited discovery before issuing its preliminary injunction, the parties have not yet had the opportunity to develop many factual questions, including whether the platforms possess market power.” This won’t fly under Turner, which required a legislative finding of bottleneck market power, and no such finding exists in H.B. 20.

Moreover, bottleneck market power means more than simply “market power” in an antitrust sense. As Judge Kavanaugh put it: “Turner Broadcasting seems to require even more from the Government. The Government apparently must also show that the market power would actually be used to disadvantage certain content providers, thereby diminishing the diversity and amount of content available.” Here, that would mean showing not only that social-media companies have market power, but also that they would use it to disadvantage users in a way that reduces both the diversity and the total amount of content available.

The economics of multi-sided markets is probably the best explanation for why platforms have moderation rules. They are used to maximize a platform’s value by keeping as many users engaged and on those platforms as possible. In other words, the effect of moderation rules is to increase the amount of user speech by limiting harassing content that could repel users. This is a much better explanation for these rules than “anti-conservative bias” or a desire to censor for censorship’s sake (though there may be room for debate on the margin when it comes to the moderation of misinformation and hate speech).

In fact, social-media companies, unlike the cable operators in Turner, do not have the type of “physical connection between the television set and the cable network” that would grant them “bottleneck, or gatekeeper, control over” speech in ways that would allow platforms to “silence the voice of competing speakers with a mere flick of the switch.” Cf. Turner, 512 U.S. at 656. Even if they tried, social-media companies simply couldn’t prevent Internet users from accessing content they wish to see online; users inevitably will find such content by going to a different site or app.

Conclusion: The Future of the First Amendment Online

While many on both sides of the partisan aisle appear to see a stark divide between the interests of—and First Amendment protections afforded to—ISPs and social-media companies, Kavanaugh’s opinion in USTelecom shows clearly they are in the same boat. The two rise or fall together. If the government can impose common-carriage requirements on social-media companies in the name of free speech, then they most assuredly can when it comes to ISPs. If the First Amendment protects the editorial discretion of one, then it does for both.

The question then moves to relative market power, and whether the dominant firms in either sector can truly be said to have “bottleneck” market power, which implies the physical control of infrastructure that social-media companies certainly lack.

While it will be interesting to see what the 5th Circuit (and likely, the Supreme Court) ultimately do when reviewing H.B. 20 and similar laws, if now-Justice Kavanaugh’s dissent is any hint, there will be a strong contingent on the Court for finding the First Amendment applies online by protecting the right of private actors (ISPs and social-media companies) to set the rules of the road on their property. As Kavanaugh put it in Manhattan Community Access Corp. v. Halleck: “[t]he Free Speech Clause of the First Amendment constrains governmental actors and protects private actors.” Competition is the best way to protect consumers’ interests, not prophylactic government regulation.

With the 11th Circuit upholding the stay against Florida’s social-media law and the Supreme Court granting the emergency application to vacate the stay of the injunction in NetChoice v. Paxton, the future of the First Amendment appears to be on solid ground. There is no basis to conclude that simply calling private actors “common carriers” reduces their right to editorial discretion under the First Amendment.

The tentatively pending sale of Twitter to Elon Musk has been greeted with celebration by many on the right, along with lamentation by some on the left, regarding what it portends for the platform’s moderation policies. Musk, for his part, has announced that he believes Twitter should be a free-speech haven and that it needs to dial back the (allegedly politically biased) moderation in which it has engaged.

The good news for everyone is that a differentiated product at Twitter could be exactly what the market―and the debate over Big Tech―needs.

The Market for Speech Governance

As I’ve written previously, the First Amendment (bolstered by Section 230 of the Communications Decency Act) protects not only speech itself, but also the private ordering of speech. “Congress shall make no law… abridging the freedom of speech” means that state actors can’t infringe speech, but it also (in most cases) protects private actors’ ability to make such rules free from government regulation. As the Supreme Court has repeatedly held, private actors can make their own rules about speech on their own property.

As Justice Brett Kavanaugh put it on behalf of the Court in Manhattan Community Access Corp. v. Halleck:

[W]hen a private entity provides a forum for speech, the private entity is not ordinarily constrained by the First Amendment because the private entity is not a state actor. The private entity may thus exercise editorial discretion over the speech and speakers in the forum…

In short, merely hosting speech by others is not a traditional, exclusive public function and does not alone transform private entities into state actors subject to First Amendment constraints.

If the rule were otherwise, all private property owners and private lessees who open their property for speech would be subject to First Amendment constraints and would lose the ability to exercise what they deem to be appropriate editorial discretion within that open forum. Private property owners and private lessees would face the unappetizing choice of allowing all comers or closing the platform altogether.

In other words, as much as it protects “the marketplace of ideas,” the First Amendment also protects “the market for speech governance.” Musk’s idea that Twitter should be subject to the First Amendment is simply incoherent, but his vision for Twitter to have less politically biased content moderation could work.

Musk’s Plan for Twitter

There has been much commentary on what Musk intends to do, and whether it is a realistic way to maximize the platform’s value. As a multi-sided platform, Twitter’s revenue is driven by advertisers, who want to reach a mass audience. This means Twitter, much like other social-media platforms, must consider the costs and benefits of speech to its users, and strike a balance that maximizes the value of the platform. The history of social-media content moderation suggests that these platforms have found that rules against harassment, abuse, spam, bots, pornography, and certain hate speech and misinformation are necessary.

For rules pertaining to harassment and abuse, in particular, it is easy to understand how they are necessary to prevent losing users. There seems to be a wide societal consensus that such speech is intolerable. Similarly, spam, bots, and pornographic content, even if legal speech, are largely not what social media users want to see.

But for hate speech and misinformation, however much one agrees in the abstract about their undesirableness, there is significant debate on the margins about what is acceptable or unacceptable discourse, just as there is over what is true or false when it comes to touchpoint social and political issues. It is one thing to ban Nazis due to hate speech; it is arguably quite another to remove a prominent feminist author due to “misgendering” people. It is also one thing to say crazy conspiracy theories like QAnon should be moderated, but quite another to fact-check good-faith questioning of the efficacy of masks or vaccines. It is likely in these areas that Musk will offer an alternative to what is largely seen as biased content moderation from Big Tech companies.

Musk appears to be making a bet that the market for speech governance is currently not well-served by the major competitors in the social-media space. If Twitter could thread the needle by offering a more politically neutral moderation policy that still manages to keep off the site enough of the types of content that repel users, then it could conceivably succeed and even influence the moderation policies of other social-media companies.

Let the Market Decide

The crux of the issue is this: Conservatives who have backed antitrust and regulatory action against Big Tech because of political bias concerns should be willing to back off and allow the market to work. And liberals who have defended the right of private companies to make rules for their platforms should continue to defend that principle. Let the market decide.

All too frequently, vocal advocates for “Internet Freedom” imagine it exists along just a single dimension: the extent to which it permits individuals and firms to interact in new and unusual ways.

But that is not the sum of the Internet’s social value. The technologies that underlie our digital media remain a relatively new means to distribute content. It is not just the distributive technology that matters, but also the content that is distributed. Thus, the norms and laws that facilitate this interaction of content production and distribution are critical.

Sens. Patrick Leahy (D-Vt.) and Thom Tillis (R-N.C.)—the chair and ranking member, respectively, of the Senate Judiciary Committee’s Subcommittee on Intellectual Property—recently introduced legislation that would require online service providers (OSPs) to comply with a slightly heightened set of obligations to deter copyright piracy on their platforms. This couldn’t come at a better time.

S. 3880, the SMART Copyright Act, would amend Section 512 of the Copyright Act, originally enacted as part of the Digital Millennium Copyright Act of 1998. Section 512, among other things, provides safe harbor for OSPs for copyright infringements by their users. The expectation at the time was that OSPs would work voluntarily with rights holders to develop industry best practices to deal with pirated content, while also allowing the continued growth of the commercial Internet.

Alas, it has become increasingly apparent in the nearly quarter-century since the DMCA was passed that the law has not adequately kept pace with the technological capabilities of digital piracy. In April 2020 alone, U.S. consumers logged 725 million visits to pirate sites for movies and television programming. Close to 90% of those visits were attributable to illegal streaming services that use internet protocol television to distribute pirated content. Such services now serve more than 9 million U.S. subscribers and generate more than $1 billion in annual revenue.

Globally, there are more than 26.6 billion annual illicit views of U.S.-produced movies and 126.7 billion views of U.S.-produced television episodes. A report produced for the U.S. Chamber of Commerce by NERA Economic Consulting estimates the annual impact on the United States to be $30 billion to $70 billion in lost revenue, 230,000 to 560,000 lost jobs, and between $45 billion and $115 billion in lower GDP.

Thus far, the most effective preventative measures produced have been filtering solutions adopted by YouTube, Facebook, and Audible Magic, but neither filtering nor other solutions have been adopted industrywide. As the U.S. Copyright Office has observed:

Throughout the Study, the Office heard from participants that Congress’ intent to have multi-stakeholder consensus drive improvements to the system has not been borne out in practice. By way of example, more than twenty years after passage of the DMCA, although some individual OSPs have deployed DMCA+ systems that are primarily open to larger content owners, not a single technology has been designated a “standard technical measure” under section 512(i). While numerous potential reasons were cited for this failure— from a lack of incentives for ISPs to participate in standards to the inappropriateness of one-size-fits-all technologies—the end result is that few widely-available tools have been created and consistently implemented across the internet ecosystem. Similarly, while various voluntary initiatives have been undertaken by different market participants to address the volume of true piracy within the system, these initiatives, although initially promising, likewise have suffered from various shortcomings, from limited participation to ultimate ineffectiveness.

Given the lack of standard technical measures (STMs), the Leahy-Tillis bill would grant the Office of the Librarian of Congress (LOC) broad latitude to recommend STMs for everything from off-the-shelf software to open-source software to general technical strategies that can be applied to a wide variety of systems. This would include the power to initiate public rulemakings in which it could either propose new STMs or revise or rescind existing STMs. The STMs could be as broad or as narrow as the LOC deems appropriate, including being tailored to specific types of content and specific types of providers. Following rulemaking, subject firms would have at least one year to adopt a given STM.

Critically, the SMART Copyright Act would not hold OSPs liable for the infringing content itself, but for failure to make reasonable efforts to accommodate the STM (or for interference with the STM). Courts finding an OSP to have violated their obligation for good-faith compliance could award an injunction, damages, and costs.

The SMART Copyright Act is a directionally correct piece of legislation with two important caveats: it all depends on the kinds of STMs that the LOC recommends and on how a “violation” is determined for the purposes of awarding damages.

The law would magnify the incentive for private firms to work together with rights holders to develop STMs that more reasonably recruit OSPs into the fight against online piracy. In this sense, the LOC would be best situated as a convener, encouraging STMs to emerge from the broad group of OSPs and rights holders. The fact that the LOC would be able to adopt STMs with or without stakeholders’ participation should provide more incentive for collaboration among the relevant parties.

Short of a voluntary set of STMs, the LOC could nonetheless rely on the technical suggestions and concerns of the multistakeholder community to discern a minimum viable set of practices that constitute best efforts to control piracy. The least desirable outcome—and, I suspect, the one most susceptible to failure—would be for the LOC to examine and select specific technologies. If implemented sensibly, the SMART Copyright Act would create a mechanism to enforce the original goals of Section 512.

The damages provisions are likewise directionally correct but need more clarity. Repeat “violations” allow courts to multiply damages awards. But there is no definition of what counts as a “violation,” nor is there adequate clarity about how a “violation” interacts with damages. For example, is a single infringement on a platform a “violation” such that if three occur, the platform faces treble damages for all the infringements in a single case? That seems unlikely.

More reasonable would be to interpret the provision as saying that a final adjudication that the platform behaved unreasonably is what counts for the purposes of calculating whether damages are multiplied. Then, within each adjudication, damages are calculated for all infringements, up to the statutory damages cap. This interpretation would put teeth in the law, but it’s just one possible interpretation. Congress would need to ensure the final language is clear.
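To see what is at stake in that choice, consider a purely hypothetical numerical illustration (the dollar figure and the treble multiplier used below are assumptions of this sketch, not terms drawn from the bill’s text). Suppose a platform is found, in a single case, to have acted unreasonably with respect to three infringements, each carrying $10,000 in statutory damages:

\[
\begin{aligned}
\text{``violation'' = each infringement:} &\quad 3 \times \$10{,}000 \times 3 = \$90{,}000 \ \text{(everything trebled in the very first case)}\\
\text{``violation'' = each adjudication:} &\quad 3 \times \$10{,}000 = \$30{,}000 \ \text{(multiplication reserved for repeat adjudications)}
\end{aligned}
\]

The second reading ties escalating damages to a demonstrated pattern of unreasonable conduct rather than to the raw count of infringements, which is why it appears the more sensible interpretation.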

An even better approach would be to make Section 512’s safe harbor contingent on an OSP’s reasonable compliance. Unreasonable behavior, in that case, provides a much more straightforward way to assess damages, without needing to leave it up to court interpretations about what counts as a “violation.” Particularly since courts have historically tended to interpret the DMCA in ways that are unfavorable to rights holders (e.g., “red flag” knowledge), it would be much better to create a simple standard here.

This is not to say there are no potential problems. Among the concerns that surround promulgating new STMs are potentially creating cybersecurity vulnerabilities, sources for privacy leaks, or accidentally chilling speech. Of course, it’s possible that there will be costs to implementing an STM, just as there are costs when private firms operate their own content-protection mechanisms. But just because harms can happen doesn’t mean they will happen, or that they are insurmountable when they do. The criticisms that have emerged have so far taken on the breathless quality of the empirically unfounded claims that 2012’s SOPA/PIPA legislation would spell doom for the Internet. If Section 512 reforms are well-calibrated and sufficiently flexible to adapt to the market realities, I think we can reasonably expect them to be, on net, beneficial.

Toward this end, the SMART Copyright Act contemplates, for each proposed STM, a public comment period and at least one meeting with relevant stakeholders, to allow time to understand its likely costs and benefits. This process would provide ample opportunities to alert the LOC to potential shortcomings.

But the criticisms do suggest a potentially valuable change to the bill’s structure. If a firm does indeed discover that a particular STM, in practice, leads to unacceptable security or privacy risks, or is systematically biased against lawful content, there should be a legal mechanism that would allow for good-faith compliance while also mitigating STMs’ unforeseen flaws. Ideally, this would involve working with the LOC in an iterative process to refine relevant compliance obligations.

Congress will soon be wrapped up in the volatile midterm elections, which could make it difficult for relatively low-salience issues like copyright to gain traction. Nonetheless, the Leahy-Tillis bill marks an important step toward addressing online piracy, and Congress should move deliberatively toward that goal.

[The following post was adapted from the International Center for Law & Economics White Paper “Polluting Words: Is There a Coasean Case to Regulate Offensive Speech?”]

Words can wound. They can humiliate, anger, insult.

University students—or, at least, a vociferous minority of them—are keen to prevent this injury by suppressing offensive speech. To ensure campuses are safe places, they militate for the cancellation of talks by speakers with opinions they find offensive, often successfully. And they campaign to get offensive professors fired from their jobs.

Off campus, some want this safety to be extended to the online world and, especially, to the users of social media platforms such as Twitter and Facebook. In the United States, this would mean weakening the legal protections of offensive speech provided by Section 230 of the Communications Decency Act (as President Joe Biden has recommended) or by the First Amendment. In the United Kingdom, the Online Safety Bill is now before Parliament. If passed, it will give a U.K. government agency the power to dictate the content-moderation policies of social media platforms.

You don’t need to be a woke university student or grandstanding politician to suspect that society suffers from an overproduction of offensive speech. Basic economics provides a reason to suspect it—the reason being that offense is an external cost of speech. The cost is borne not by the speaker but by his audience. And when people do not bear all the costs of an action, they do it too much.

Jack tweets “women don’t have penises.” This offends Jill, who is someone with a penis who considers herself (or himself, if Jack is right) to be a woman. And it offends many others, who agree with Jill that Jack is indulging in ugly transphobic biological essentialism. Lacking Bill Clinton’s facility for feeling the pain of others, Jack does not bear this cost. So, even if it exceeds whatever benefit Jack gets from saying that women don’t have penises, he will still say it. In other words, he will say it even when doing so makes society altogether worse off.

It shouldn’t be allowed!

That’s what we normally say when actions harm others more than they benefit the agent. The law normally conforms to John Stuart Mill’s “Harm Principle” by restricting activities—such as shooting people or treating your neighbors to death metal at 130 decibels at 2 a.m.—with material external costs. Those who seek legal reform to restrict offensive speech are surely doing no more than following an accepted general principle.

But it’s not so simple. As Ronald Coase pointed out in his famous 1960 article “The Problem of Social Cost,” externalities are a reciprocal problem. If Wayne had no neighbors, his playing death metal at 130 decibels at 2 a.m. would have no external costs. The neighbors’ choice of address is thus as much a source of the problem as Wayne’s choice of music. Similarly, if Jill weren’t a Twitter user, she wouldn’t have been offended by Jack’s tweet about who has a penis, since she wouldn’t have encountered it. Externalities are like tangos: they always have at least two perpetrators.

So, the legal question, “who should have a right to what they want?”—Wayne to his loud music or his neighbors to their sleep; Jack to expressing his opinion about women or Jill to not hearing such opinions—cannot be answered by identifying the party who is responsible for the external cost. Both parties are responsible.

How, then, should the question be answered? In the same paper, Coase showed that, in certain circumstances, who the courts favor will make no difference to what ends up happening, and that what ends up happening will be efficient. Suppose the court says that Wayne cannot bother his neighbors with death metal at 2 a.m. If Wayne would be willing to pay $100,000 to keep doing it and his neighbors, combined, would put up with it for anything more than $95,000, then they should be able to arrive at a mutually beneficial deal whereby Wayne pays them something between $95,000 and $100,000 to forgo their right to stop him making his dreadful noise.

That’s not exactly right. If negotiating a deal would cost more than $5,000, then no mutually beneficial deal is possible and the rights-trading won’t happen. Transaction costs being less than the difference between the two parties’ valuations is the circumstance in which the allocation of legal rights makes no difference to how resources get used, and where efficiency will be achieved, in any event.

But it is an unusual circumstance, especially when the external cost is suffered by many people. When the transaction cost is too high, efficiency does depend on the allocation of rights by courts or legislatures. As Coase argued, when this is so, efficiency will be served if a right to the disputed resource is granted to the party with the higher cost of avoiding the externality.

Given the (implausible) valuations Wayne and his neighbors place on the amount of noise in their environment at 2 a.m., efficiency is served by giving Wayne the right to play his death metal, unless he could soundproof his house, play his music at a much lower volume, or take some other avoidance measure that costs him less than the $95,000 cost to his neighbors.
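The logic of both cases can be stated compactly. In the sketch below, the symbols are labels introduced purely for illustration, using the article’s own figures: $V_W$ is Wayne’s valuation of playing, $V_N$ the neighbors’ combined valuation of quiet, $T$ the cost of negotiating a deal, and $C_A$ Wayne’s cheapest avoidance cost (soundproofing, lower volume, and so on).

\[
\begin{aligned}
\text{If } T < V_W - V_N :\ & \text{bargaining reaches the efficient outcome whoever holds the right}\\
& \quad (\text{here } V_W - V_N = \$100{,}000 - \$95{,}000 = \$5{,}000).\\
\text{If } T > V_W - V_N :\ & \text{assign the right to the higher-cost avoider; let Wayne play}\\
& \quad \text{unless his avoidance cost } C_A < V_N = \$95{,}000.
\end{aligned}
\]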

And given that Jack’s tweet about penises offends a large open-ended group of people, with whom Jack therefore cannot negotiate, it looks like they should be given the right not to be offended by Jack’s comment and he should be denied the right to make it. Coasean logic supports the woke censors!          

But, again, it’s not that simple—for two reasons.

The first is that, although those who are offended may be harmed by the offending speech, they needn’t necessarily be. Physical pain is usually harmful, but not when experienced by a sexual masochist (in the right circumstances, of course). Similarly, many people take masochistic pleasure in being offended. You can tell they do, because they actively seek out the sources of their suffering. They are genuinely offended, but the offense isn’t harming them, just as the sexual masochist really is in physical pain but isn’t harmed by it. Indeed, real pain and real offense are required, respectively, for the satisfaction of the sexual masochist and the offense masochist.

How many of the offended are offense masochists? Where the offensive speech can be avoided at minimal cost, the answer must be most. Why follow Jordan Peterson on Twitter when you find his opinions offensive unless you enjoy being offended by him? Maybe some are keeping tabs on the dreadful man so that they can better resist him, and they take the pain for that reason rather than for masochistic glee. But how could a legislator or judge know? For all they know, most of those offended by Jordan Peterson are offense masochists and the offense he causes is a positive externality.

The second reason Coasean logic doesn’t support the would-be censors is that social media platforms—the venues of offensive speech that they seek to regulate—are privately owned. To see why this is significant, consider not offensive speech, but an offensive action, such as openly masturbating on a bus.

This is prohibited by law. But it is not the mere act that is illegal. You are allowed to masturbate in the privacy of your bedroom. You may not masturbate on a bus because those who are offended by the sight of it cannot easily avoid it. That’s why it is illegal to express obscenities about Jesus on a billboard erected across the road from a church but not at a meeting of the Angry Atheists Society. The laws that prohibit offensive speech in such circumstances—laws against public nuisance, harassment, public indecency, etc.—are generally efficient. The cost they impose on the offenders is less than the benefits to the offended.

But they are unnecessary when the giving and taking of offense occur within a privately owned place. Suppose no law prohibited masturbating on a bus. It still wouldn’t be allowed on buses owned by a profit-seeker. Few people want to masturbate on buses and most people who ride on buses seek trips that are masturbation-free. A prohibition on masturbation will gain the owner more customers than it loses him. The prohibition is simply another feature of the product offered by the bus company. Nice leather seats, punctual departures, and no wankers (literally). There is no more reason to believe that the bus company’s passenger-conduct rules will be inefficient than that its other product features will be and, therefore, no more reason to legally stipulate them.

The same goes for the content-moderation policies of social media platforms. They are just another product feature offered by a profit-seeking firm. If they repel more customers than they attract (or, more accurately, if they repel more advertising revenue than they attract), they would be inefficient. But then, of course, the company would not adopt them.

Of course, the owner of a social media platform might not be a pure profit-maximiser. For example, he might forgo $10 million in advertising revenue for the sake of banning speakers he personally finds offensive. But the outcome is still efficient. Allowing the speech would have cost more by way of the owner’s unhappiness than the lost advertising would have been worth.  And such powerful feelings in the owner of a platform create an opportunity for competitors who do not share his feelings. They can offer a platform that does not ban the offensive speakers and, if enough people want to hear what they have to say, attract users and the advertising revenue that comes with them. 

If efficiency is your concern, there is no problem for the authorities to solve. Indeed, the idea that the authorities would do a better job of deciding content-moderation rules is not merely absurd, but alarming. Politicians and the bureaucrats who answer to them or are appointed by them would use the power not to promote efficiency, but to promote agendas congenial to them. Jurisprudence in liberal democracies—and, especially, in America—has been suspicious of governmental control of what may be said. Nothing about social media provides good reason to become any less suspicious.

In his recent concurrence in Biden v. Knight, Justice Clarence Thomas sketched a roadmap for how to regulate social-media platforms. The animating factor for Thomas, much like for other conservatives, appears to be a sense that Big Tech has exhibited anti-conservative bias in its moderation decisions, most prominently by excluding former President Donald Trump from Twitter and Facebook. The opinion has predictably been greeted warmly by conservative champions of social-media regulation, who believe it shows how states and the federal government can proceed on this front.

While much of the commentary to date has been on whether Thomas got the legal analysis right, or on the uncomfortable fit of common-carriage law to social media, the deeper question of the First Amendment’s protection of private ordering has received relatively short shrift.

Conservatives’ main argument has been that Big Tech needs to be reined in because it is restricting the speech of private individuals. While conservatives traditionally have defended the state-action doctrine and the right to editorial discretion, they now readily find exceptions to both in order to justify regulating social-media companies. But those two First Amendment doctrines have long enshrined an important general principle: private actors can set the rules for speech on their own property. I intend to analyze this principle from a law & economics perspective and show how it benefits society.

Who Balances the Benefits and Costs of Speech?

Like virtually any other human activity, speech has both benefits and costs, and it is ultimately subjective individual preference that determines the value of any given speech. The First Amendment protects speech from governmental regulation, with only limited exceptions, but that does not mean all speech is acceptable or must be tolerated. Under the state-action doctrine, the First Amendment only prevents the government from restricting speech.

Some purported defenders of the principle of free speech no longer appear to see a distinction between restraints on speech imposed by the government and those imposed by private actors. But this is surely mistaken, as no one truly believes all speech protected by the First Amendment should be without consequence. In truth, most regulation of speech has always come by informal means—social mores enforced by dirty looks or responsive speech from others.

Moreover, property rights have long played a crucial role in determining speech rules within any given space. If a man were to come into my house and start calling my wife racial epithets, I would not only ask that person to leave but would exercise my right as a property owner to eject the trespasser—if necessary, calling the police to assist me. I similarly could not expect to go to a restaurant and yell at the top of my lungs about political issues and expect them—even as “common carriers” or places of public accommodation—to allow me to continue.

As Thomas Sowell wrote in Knowledge and Decisions:

The fact that different costs and benefits must be balanced does not in itself imply who must balance them―or even that there must be a single balance for all, or a unitary viewpoint (one “we”) from which the issue is categorically resolved.

Knowledge and Decisions, p. 240

When it comes to speech, the balance that must be struck is between one individual’s desire for an audience and that prospective audience’s willingness to play the role. Asking government to use regulation to make categorical decisions for all of society is substituting centralized evaluation of the costs and benefits of access to communications for the individual decisions of many actors. Rather than incremental decisions regarding how and under what terms individuals may relate to one another—which can evolve over time in response to changes in what individuals find acceptable—government by its nature can only hand down categorical guidelines: “you must allow x, y, and z speech.”

This is particularly relevant in the sphere of social media. Social-media companies are multi-sided platforms. They are profit-seeking, to be sure, but the way they generate profits is by acting as intermediaries between users and advertisers. If they fail to serve their users well, those users could abandon the platform. Without users, advertisers would have no interest in buying ads. And without advertisers, there is no profit to be made. Social-media companies thus need to maximize the value of their platform by setting rules that keep users engaged.
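A stylized way to capture that reasoning (the functional form and symbols below are illustrative assumptions of this sketch, not a model any platform has published): let $m$ denote how stringent a platform’s moderation rules are, $U(m)$ the number of engaged users given those rules, and $p$ the advertising revenue earned per engaged user. The platform’s profit is then

\[
\pi(m) = p \cdot U(m),
\]

where $U(m)$ rises with $m$ at first (removing spam, harassment, and other repellent content attracts and retains users) but eventually falls (over-moderation drives speakers and their audiences away). A profit-maximizing platform therefore chooses an interior level of moderation $m^*$ where $U'(m^*) = 0$, rather than either extreme of carrying everything or carrying almost nothing.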

In the cases of Facebook, Twitter, and YouTube, the platforms have set content-moderation standards that restrict many kinds of speech that are generally viewed negatively by users, even if the First Amendment would foreclose the government from regulating those same types of content. This is a good thing. Social-media companies balance the speech interests of different kinds of users to maximize the value of the platform and, in turn, to maximize benefits to all.

Herein lies the fundamental difference between private action and state action: one is voluntary, and the other based on coercion. If Facebook or Twitter suspends a user for violating community rules, it represents termination of a previously voluntary association. If the government kicks someone out of a public forum for expressing legal speech, that is coercion. The state-action doctrine recognizes this fundamental difference and creates a bright-line rule that courts may police when it comes to speech claims. As Sowell put it:

The courts’ role as watchdogs patrolling the boundaries of governmental power is essential in order that others may be secure and free on the other side of those boundaries. But what makes watchdogs valuable is precisely their ability to distinguish those people who are to be kept at bay and those who are to be left alone. A watchdog who could not make that distinction would not be a watchdog at all, but simply a general menace.

Knowledge and Decisions, p. 244

Markets Produce the Best Moderation Policies

The First Amendment also protects the right of editorial discretion, which means publishers, platforms, and other speakers are free from carrying or transmitting government-compelled speech. Even a newspaper with near-monopoly power cannot be compelled by a right-of-reply statute to carry responses by political candidates to editorials it has published. In other words, not only is private regulation of speech not state action, but in many cases, private regulation is protected by the First Amendment.

There is no reason to think that social-media companies today are in a different position than was the newspaper in Miami Herald v. Tornillo. These companies must determine what, how, and where content is presented within their platform. While this right of editorial discretion protects the moderation decisions of social-media companies, its benefits accrue to society at-large.

Social-media companies’ abilities to differentiate themselves based on functionality and moderation policies are important aspects of competition among them. How each platform is used may differ depending on those factors. In fact, many consumers use multiple social-media platforms throughout the day for different purposes. Market competition, not government power, has enabled internet users (including conservatives!) to have more avenues than ever to get their message out.

Many conservatives remain unpersuaded by the power of markets in this case. They see multiple platforms all engaging in very similar content-moderation policies when it comes to certain touchpoint issues, and thus allege widespread anti-conservative bias and collusion. Neither of those claims has much factual support, but more importantly, the similarity of content-moderation standards may simply be common responses to similar demand structures—not some nefarious and conspiratorial plot.

In other words, if social-media users demand less of the kinds of content commonly considered to be hate speech, or less misinformation on certain important issues, platforms will do their best to weed those things out. Platforms won’t always get these determinations right, but it is by no means clear that forcing them to carry all “legal” speech—which would include not just misinformation and hate speech, but pornographic material, as well—would better serve social-media users. There are always alternative means to debate contestable issues of the day, even if it may be more costly to access them.

Indeed, that content-moderation policies make it more difficult to communicate some messages is precisely the point of having them. There is a subset of protected speech to which many users do not wish to be subject. Moreover, there is no inherent right to have an audience on a social-media platform.

Conclusion

Much of the First Amendment’s economic value lies in how it defines roles in the market for speech. As a general matter, it is not the government’s place to determine what speech should be allowed in private spaces. Instead, the private ordering of speech emerges through the application of social mores and property rights. This benefits society, as it allows individuals to create voluntary relationships built on marginal decisions about what speech is acceptable when and where, rather than centralized decisions made by a governing few and that are difficult to change over time.

In the battle of ideas, it is quite useful to be able to brandish clear and concise debating points in support of a proposition, backed by solid analysis. Toward that end, in a recent primer about antitrust law published by the Mercatus Center, I advance four reasons to reject neo-Brandeisian critiques of the consensus (at least, until very recently) consumer welfare-centric approach to antitrust enforcement. My four points, drawn from the primer (with citations deleted and hyperlinks added) are as follows:

First, the underlying assumptions of rising concentration and declining competition on which the neo-Brandeisian critique is largely based (and which are reflected in the introductory legislative findings of the Competition and Antitrust Law Enforcement Reform Act [of 2021, introduced by Senator Klobuchar on February 4]) lack merit. Chapter 6 of the 2020 Economic Report of the President, dealing with competition policy, summarizes research debunking those assumptions. To begin with, it shows that studies complaining that competition is in decline are fatally flawed. Studies such as one in 2016 by the Council of Economic Advisers rely on overbroad market definitions that say nothing about competition in specific markets, let alone across the entire economy. Indeed, in 2018, professor Carl Shapiro, chief DOJ antitrust economist in the Obama administration, admitted that a key summary chart in the 2016 study “is not informative regarding overall trends in concentration in well-defined relevant markets that are used by antitrust economists to assess market power, much less trends in concentration in the U.S. economy.” Furthermore, as the 2020 report points out, other literature claiming that competition is in decline rests on a problematic assumption that increases in concentration (even assuming such increases exist) beget softer competition. Problems with this assumption have been understood since at least the 1970s. The most fundamental problem is that there are alternative explanations (such as exploitation of scale economies) for why a market might demonstrate both high concentration and high markups—explanations that are still consistent with procompetitive behavior by firms. (In a related vein, research by other prominent economists has exposed flaws in studies that purport to show a weakening of merger enforcement standards in recent years.) Finally, the 2020 report notes that the real solution to perceived economic problems may be less government, not more: “As historic regulatory reform across American industries has shown, cutting government-imposed barriers to innovation leads to increased competition, strong economic growth, and a revitalized private sector.”

Second, quite apart from the flawed premises that inform the neo-Brandeisian critique, specific neo-Brandeisian reforms appear highly problematic on economic grounds. Breakups of dominant firms or near prohibitions on dominant firm acquisitions would sacrifice major economies of scale and potential efficiencies of integration, harming consumers without offering any proof that the new market structures in reshaped industries would yield consumer or producer benefits. Furthermore, a requirement that merging parties prove a negative (that the merger will not harm competition) would limit the ability of entrepreneurs and market makers to act on information about misused or underutilized assets through the merger process. This limitation would reduce economic efficiency. After-the-fact studies indicating that a large percentage of mergers do not add wealth and do not otherwise succeed as much as projected miss this point entirely. They ignore what the world would be like if mergers were much more difficult to enter into: a world where there would be lower efficiency and dynamic economic growth because there would be less incentive to seek out market-improving opportunities.

Third, one aspect of the neo-Brandeisian approach to antitrust policy is at odds with fundamental notions of fair notice of wrongdoing and equal treatment under neutral principles, notions that are central to the rule of law. In particular, the neo-Brandeisian call for considering a multiplicity of new factors such as fairness, labor, and the environment when enforcing policy is troublesome. There is no neutral principle for assigning weights to such divergent interests, and (even if weights could be assigned) there are no economic tools for accurately measuring how a transaction under review would affect those interests. It follows that abandoning antitrust law’s consumer-welfare standard in favor of an ill-defined multifactor approach would spawn confusion in the private sector and promote arbitrariness in enforcement decisions, undermining the transparency that is a key aspect of the rule of law. Whereas concerns other than consumer welfare may of course be validly considered in setting public policy, they are best dealt with under other statutory schemes, not under antitrust law.

Fourth, and finally, neo-Brandeisian antitrust proposals are not a solution to widely expressed concerns that big companies in general, and large digital platforms in particular, are undermining free speech by censoring content of which they disapprove. Antitrust law is designed to prevent businesses from creating impediments to market competition that reduce economic welfare; it is not well-suited to policing companies’ determinations regarding speech. To the extent that policymakers wish to address speech censorship on large platforms, they should consider other regulatory institutions that would be better suited to the task (such as communications law), while keeping in mind First Amendment limitations on the ability of government to control private speech.

In light of these four points, the primer concludes that the neo-Brandeisian-inspired antitrust “reform” proposals being considered by Congress should be rejected:

[E]fforts to totally reshape antitrust policy into a quasi-regulatory system that arbitrarily blocks and disincentivizes (1) welfare-enhancing mergers and (2) an array of actions by dominant firms are highly troubling. Such interventionist proposals ignore the lack of evidence of serious competitive problems in the American economy and appear arbitrary compared to the existing consumer-welfare-centric antitrust enforcement regime. To use a metaphor, Congress and public officials should avoid a drastic new antitrust cure for an anticompetitive disease that can be handled effectively with existing antitrust medications.

Let us hope that the serious harm associated with neo-Brandeisian legislative “deformation” (a more apt term than reformation) of the antitrust laws is given a full legislative airing before Congress acts.

In what has become regularly scheduled programming on Capitol Hill, Facebook CEO Mark Zuckerberg, Twitter CEO Jack Dorsey, and Google CEO Sundar Pichai will be subject to yet another round of congressional grilling—this time, about the platforms’ content-moderation policies—during a March 25 joint hearing of two subcommittees of the House Energy and Commerce Committee.

The stated purpose of this latest bit of political theatre is to explore, as made explicit in the hearing’s title, “social media’s role in promoting extremism and misinformation.” Specific topics are expected to include proposed changes to Section 230 of the Communications Decency Act, heightened scrutiny by the Federal Trade Commission, and misinformation about COVID-19—the subject of new legislation introduced by Rep. Jennifer Wexton (D-Va.) and Sen. Mazie Hirono (D-Hawaii).

But while many in the Democratic majority argue that social media companies have not done enough to moderate misinformation or hate speech, it is a problem with no realistic legal fix. Any attempt to mandate removal of speech on grounds that it is misinformation or hate speech, either directly or indirectly, would run afoul of the First Amendment.

Much of the recent focus has been on misinformation spread on social media about the 2020 election and the COVID-19 pandemic. The memorandum for the March 25 hearing sums it up:

Facebook, Google, and Twitter have long come under fire for their role in the dissemination and amplification of misinformation and extremist content. For instance, since the beginning of the coronavirus disease of 2019 (COVID-19) pandemic, all three platforms have spread substantial amounts of misinformation about COVID-19. At the outset of the COVID-19 pandemic, disinformation regarding the severity of the virus and the effectiveness of alleged cures for COVID-19 was widespread. More recently, COVID-19 disinformation has misrepresented the safety and efficacy of COVID-19 vaccines.

Facebook, Google, and Twitter have also been distributors for years of election disinformation that appeared to be intended either to improperly influence or undermine the outcomes of free and fair elections. During the November 2016 election, social media platforms were used by foreign governments to disseminate information to manipulate public opinion. This trend continued during and after the November 2020 election, often fomented by domestic actors, with rampant disinformation about voter fraud, defective voting machines, and premature declarations of victory.

It is true that, despite social media companies’ efforts to label and remove false content and bar some of the biggest purveyors, there remains a considerable volume of false information on social media. But U.S. Supreme Court precedent consistently has limited government regulation of false speech to distinct categories like defamation, perjury, and fraud.

The Case of Stolen Valor

The court’s 2012 decision in United States v. Alvarez struck down as unconstitutional the Stolen Valor Act of 2005, which made it a federal crime to falsely claim to have earned a military medal. A four-justice plurality opinion written by Justice Anthony Kennedy, along with a two-justice concurrence, both agreed that a statement being false did not, by itself, exclude it from First Amendment protection.

Kennedy’s opinion noted that while the government may impose penalties for false speech connected with the legal process (perjury or impersonating a government official); with receiving a benefit (fraud); or with harming someone’s reputation (defamation); the First Amendment does not sanction penalties for false speech, in and of itself. The plurality exhibited particular skepticism toward the notion that government actors could be entrusted as a “Ministry of Truth,” empowered to determine what categories of false speech should be made illegal:

Permitting the government to decree this speech to be a criminal offense, whether shouted from the rooftops or made in a barely audible whisper, would endorse government authority to compile a list of subjects about which false statements are punishable. That governmental power has no clear limiting principle. Our constitutional tradition stands against the idea that we need Oceania’s Ministry of Truth… Were this law to be sustained, there could be an endless list of subjects the National Government or the States could single out… Were the Court to hold that the interest in truthful discourse alone is sufficient to sustain a ban on speech, absent any evidence that the speech was used to gain a material advantage, it would give government a broad censorial power unprecedented in this Court’s cases or in our constitutional tradition. The mere potential for the exercise of that power casts a chill, a chill the First Amendment cannot permit if free speech, thought, and discourse are to remain a foundation of our freedom. [EMPHASIS ADDED]

As noted in the opinion, declaring false speech illegal constitutes a content-based restriction subject to “exacting scrutiny.” Applying that standard, the court found “the link between the Government’s interest in protecting the integrity of the military honors system and the Act’s restriction on the false claims of liars like respondent has not been shown.” 

While finding that the government “has not shown, and cannot show, why counterspeech would not suffice to achieve its interest,” the plurality suggested a more narrowly tailored solution could be simply to publish Medal of Honor recipients in an online database. In other words, the government could overcome the problem of false speech by promoting true speech. 

In 2013, President Barack Obama signed an updated version of the Stolen Valor Act that limited its penalties to situations where a misrepresentation is shown to result in receipt of some kind of benefit. That places the false speech in the category of fraud, consistent with the Alvarez opinion.

A Social Media Ministry of Truth

Applying the Alvarez standard to social media, the government could (and already does) promote its interest in public health or election integrity by publishing true speech through official channels. But there is little reason to believe the government at any level could constitutionally restrict access to misinformation. Anything approaching an outright ban on speech deemed false by the government would not only fail to be the most narrowly tailored way to deal with such speech, but would also be bound to chill even true speech.

The analysis doesn’t change if the government instead places Big Tech itself in the position of Ministry of Truth. Some propose making changes to Section 230, which currently immunizes social media companies from liability for user speech (with limited exceptions), regardless of what moderation policies the platform adopts. A hypothetical change might condition Section 230’s liability shield on platforms agreeing to moderate certain categories of misinformation. But that would still place the government in the position of coercing platforms to take down speech.

Even the “fix” of making social media companies liable for user speech they amplify through promotions on the platform, as proposed by Sen. Mark Warner’s (D-Va.) SAFE TECH Act, runs into First Amendment concerns. The bill would treat sponsored content as speech made by the platform itself, thus opening the platform to liability for the underlying misinformation. But any such liability also would be limited to categories of speech that fall outside First Amendment protection, like fraud or defamation. This would not appear to include most of the types of misinformation on COVID-19 or election security that animate the current legislative push.

There is no way for the government to regulate misinformation, in and of itself, consistent with the First Amendment. Big Tech companies are free to develop their own policies against misinformation, but the government may not force them to do so. 

Extremely Limited Room to Regulate Extremism

The Big Tech CEOs are also almost certain to be grilled about the use of social media to spread “hate speech” or “extremist content.” The memorandum for the March 25 hearing sums it up like this:

Facebook executives were repeatedly warned that extremist content was thriving on their platform, and that Facebook’s own algorithms and recommendation tools were responsible for the appeal of extremist groups and divisive content. Similarly, since 2015, videos from extremists have proliferated on YouTube; and YouTube’s algorithm often guides users from more innocuous or alternative content to more fringe channels and videos. Twitter has been criticized for being slow to stop white nationalists from organizing, fundraising, recruiting and spreading propaganda on Twitter.

Social media has often played host to racist, sexist, and other types of vile speech. While social media companies have community standards and other policies that restrict “hate speech” in some circumstances, there is demand from some public officials that they do more. But under a First Amendment analysis, regulating hate speech on social media would fare no better than the regulation of misinformation.

The First Amendment doesn’t allow for the regulation of “hate speech” as its own distinct category. Hate speech is, in fact, as protected as any other type of speech. There are some limited exceptions, as the First Amendment does not protect incitement, true threats of violence, or “fighting words.” Some of these flatly do not apply in the online context. “Fighting words,” for instance, applies only in face-to-face situations to “those personally abusive epithets which, when addressed to the ordinary citizen, are, as a matter of common knowledge, inherently likely to provoke violent reaction.”

One relevant precedent is the court’s 1992 decision in R.A.V. v. St. Paul, which considered a local ordinance in St. Paul, Minnesota, prohibiting public expressions that served to cause “outrage, alarm, or anger with respect to racial, gender or religious intolerance.” A juvenile was charged with violating the ordinance when he created a makeshift cross and lit it on fire in front of a black family’s home. The court unanimously struck down the ordinance as a violation of the First Amendment, finding it an impermissible content-based restraint that was not limited to incitement or true threats.

By contrast, in 2003’s Virginia v. Black, the Supreme Court held that a state may outlaw cross burnings carried out with the intent to intimidate. The court’s opinion distinguished R.A.V. on grounds that the Virginia statute didn’t single out speech regarding disfavored topics. Instead, it was aimed at speech that had the intent to intimidate regardless of the victim’s race, gender, religion, or other characteristic. But the court was careful to limit government regulation of hate speech to instances that involve true threats or incitement.

When it comes to incitement, the legal standard was set by the court’s landmark Brandenburg v. Ohio decision in 1969, which laid out that:

the constitutional guarantees of free speech and free press do not permit a State to forbid or proscribe advocacy of the use of force or of law violation except where such advocacy is directed to inciting or producing imminent lawless action and is likely to incite or produce such action. [EMPHASIS ADDED]

In other words, while “hate speech” is protected by the First Amendment, specific types of speech that convey true threats or fit under the related doctrine of incitement are not. The government may regulate those types of speech. And it does. In fact, social media users can be, and often are, charged with crimes for threats made online. But the government can’t issue a per se ban on hate speech or “extremist content.”

Just as with misinformation, the government also can’t condition Section 230 immunity on platforms removing hate speech. Insofar as speech is protected under the First Amendment, the government can’t specifically condition a government benefit on its removal. Even the SAFE TECH Act’s model for holding platforms accountable for amplifying hate speech or extremist content would have to be limited to speech that amounts to true threats or incitement. This is a far narrower category of hateful speech than the examples that concern legislators. 

Social media companies do remain free under the law to moderate hateful content as they see fit under their terms of service. Section 230 immunity is not dependent on whether companies do or don’t moderate such content, or on how they define hate speech. But government efforts to step in and define hate speech would likely run into First Amendment problems unless they stay focused on unprotected threats and incitement.

What Can the Government Do?

One may fairly ask what it is that governments can do to combat misinformation and hate speech online. The answer may be a law that requires takedowns of speech by court order after it has been declared illegal, as proposed by the PACT Act, sponsored in the last session by Sens. Brian Schatz (D-Hawaii) and John Thune (R-S.D.). Such speech may, in some circumstances, include misinformation or hate speech.

But as outlined above, the misinformation that the government can regulate is limited to situations like fraud or defamation, while the hate speech it can regulate is limited to true threats and incitement. A narrowly tailored law that looked to address those specific categories may or may not be a good idea, but it would likely survive First Amendment scrutiny, and may even prove a productive line of discussion with the tech CEOs.

High-profile cases like those of Michael Brown in Ferguson, Missouri, and Breonna Taylor in Louisville, Kentucky, have garnered attention from the media and the academy alike about decisions by grand juries not to charge police officers with homicide. 

While much of this focus centers on alleged racial bias on the part of police officers and the criminal justice system writ large, it’s also important to examine the perverse incentives faced by local district attorneys tasked with prosecuting police.

District attorneys rely on close professional relationships with police officers and law enforcement departments to prosecute criminal cases. Professional incentives require district attorneys to win cases. They can’t do that without cooperation from the police who investigate and bring criminal complaints. Moreover, police unions have disproportionate influence on district attorney elections.

Applying a law & economics lens to criminal justice offers a way forward that could better align incentives to prosecute police officers who break the law.

The legal profession is regulated largely by the rules of professional conduct developed by bar associations in each jurisdiction. The stated goal of these rules is to promote legal ethics among attorneys admitted to the bar. But these rules can also be understood economically. The organized bar can use legal ethics rules to increase its members’ profits in two main ways: by restricting entry to the practice of law and by adopting efficient rules that reduce the costs of contracting between lawyers and clients.

The bar’s rules can restrict competition in the market by requiring prospective lawyers to have graduated from an accredited law school and passed a bar exam, or to have substantial experience in another jurisdiction before they are allowed to waive in. Absent bar membership, the ability to practice law in a given jurisdiction is generally limited to pro hac vice appearances, which require working with a member of the local bar. The result is that lawyers can charge higher prices than they could without these restrictions on competition.

But the rules also can promote economically efficient outcomes. For instance, conflict-of-interest rules prevent lawyers from representing clients who have interests directly adverse to other clients, or where there would be significant risk that representation would be materially limited by responsibilities to other clients or former clients. (See, for example, Rule 1.7 of the American Bar Association’s Model Rules of Professional Conduct.) Many of these conflicts are waivable, but some are not.

It is worth considering why these rules make sense economically. In a world devoid of transaction costs and strategic behavior, lawyers and clients could negotiate complete contracts for each representation, which would include compensation for those who would possibly be hurt by conflicts. But that’s not the real world. Conflict-of-interest rules are designed to overcome the principal-agent problems that arise from representing clients with adverse interests, including the potential use of information from representations to the detriment of those clients. Thus, conflict-of-interest rules supply efficient defaults that generally limit potentially harmful representation. 

Incentives in prosecuting police

Imagine the following scenario: a local district attorney works with a municipal police officer on a number of cases over the years, relying upon that officer’s evidence and testimony to prosecute criminal defendants. A video later posted to YouTube shows the officer beating a non-resisting, handcuffed citizen with his baton. The district attorney must now decide whether to charge the officer with a crime.

The bar’s usual conflict-of-interest rules, as described above, do not apply the same way to prosecutors. The prosecutor’s client is presumed to be the public, rather than the police officers with whom they work on a daily basis. Thus, the district attorney is not deemed to face an ethical problem in prosecuting the officer, despite their long-standing professional and institutional relationship. The rules of professional conduct do not require a district attorney to recuse herself from the case.

Following the incentives, it is no surprise that prosecutors often give police officers the benefit of the doubt when allegations of criminal conduct arise. One of a prosecutor’s primary jobs is to ensure judges and juries believe the testimony of police officers. Future relationships with officers may be impaired by police prosecutions that law enforcement perceives to be unfair.

Elections are ineffective checks on prosecutorial power

While in theory (and sometimes in fact) public elections could serve as a check on district attorneys who fail to live up to their duty to prosecute unlawful behavior by police officers, there are reasons to be skeptical that they do so consistently. Public choice economics helps explain why.

The public as a whole is dispersed and unorganized, especially when it comes to its interest as potential victims of the criminal justice system. Police unions and associations, on the other hand, are organized to advance the interests of law enforcement officers. Indeed, among the benefits police unions commonly provide to members are lawyers to defend against civil rights lawsuits and criminal prosecutions. Police unions and associations also can exert significant influence on who is chosen to be district attorney in the first place. Such organized interests often are among the leaders in spending and campaigning for or against district attorney candidates. By contrast, the voting public tends to have far less information about and interest in those elections.

Getting the incentives right

In pursuing institutional reform, it is important both to get the incentives right and to remain cognizant of trade-offs. The goal should be to align incentives so that there is no disincentive to prosecuting police officers criminally when the facts call for it. Some popular proposed reforms, however, may be legally deficient, suffer from similar incentive problems, or both.

For instance, a number of California district attorneys and candidates have called for an amendment to the state’s rules of professional conduct that would define it as a conflict of interest for a district attorney candidate to accept campaign contributions from a police union. While this proposal identifies the same problem described here, it would be subject to challenge on First Amendment grounds for targeting political speech, and on equal protection grounds for treating police unions less favorably than other groups.

Other possibilities, such as escalating police prosecutions to the state attorney general’s office, face the same public choice and conflict-of-interest problems identified for local district attorneys. 

One way to avoid the conflict of interest inherent in police prosecutions might be to appoint special prosecutors when there are police defendants. Bar associations could create a panel of lawyers for appointment in such cases, much like some jurisdictions have for indigent defendants. The special prosecutor would need investigatory power and the ability to carry out the case on behalf of the public. 

Conclusion

The incentives faced by district attorneys contribute to the problem of insufficient prosecution of police officers who engage in criminal behavior. Prosecutors who generally rely upon close professional relationships with police officers have a conflict of interest when it comes to cases where police officers are the defendants. A new path is needed to get the incentives right.

President Donald Trump has repeatedly called for repeal of Section 230. But while Trump and fellow conservatives decry Big Tech companies for their alleged anti-conservative bias, including at yet more recent hearings, their issue is not actually with Section 230. It’s with the First Amendment. 

Conservatives can’t actually do anything directly about how social media platforms moderate content because it is the First Amendment that grants those platforms a right to editorial discretion. Even FCC Commissioner Brendan Carr, who strongly opposes “Big Tech censorship,” recognizes this.

By the same token, even if one were to grant that conservatives are right about the bias of moderators at these large social media platforms, it does not follow that removal of Section 230 immunity would alter that bias. In fact, in a world without Section 230 immunity, there still would be no legal cause of action for political bias. 

The truth is that conservatives use Section 230 immunity as leverage over social media platforms. The hope is that, because social media platforms desire the protections of civil immunity for third-party content, they will follow whatever conditions the government puts on their editorial discretion. But this attempt to do an end-run around the First Amendment’s protections is itself unconstitutional.

There is no cause of action for political bias by online platforms if we repeal Section 230

Consider the counterfactual: if there were no Section 230 to immunize them from liability, under what law would platforms face a viable cause of action for political bias? Conservative critics never answer this question. Instead, they focus on the irrelevant distinction between publishers and platforms. Or they talk about how Section 230 is a giveaway to Big Tech. But none consider the actual relationship between Section 230 immunity and alleged political bias.

But let’s imagine we’ve done what President Trump has called for and repealed Section 230. Where does that leave conservatives?

Unfortunately, it leaves them without any cause of action. There is no law passed by Congress or any state legislature, no regulation promulgated by the Federal Communications Commission or the Federal Trade Commission, no common law tort action that can be asserted against online platforms to force them to carry speech they don’t wish to carry. 

The difficulties of pursuing a contract claim for political bias

The best argument for conservatives is that, without Section 230 immunity, online platforms could be more easily held to any contractual restraints in their terms of service. If a platform promises, for instance, that it will moderate speech in a politically neutral way, a user could make the case that the platform violated its terms of service if it acted with political bias in her particular case.

For the vast majority of users, it is unclear whether there are damages from having a post fact-checked or removed. But for users who share in advertising revenue, the concrete injury from a moderation decision is more obvious. PragerU, for example, has (unsuccessfully) sued Google for being put in Restricted Mode on YouTube, which reduces its reach and advertising revenue. 

Even where there is a concrete injury that gets a case into court, that doesn’t necessarily mean there is a valid contract claim. In PragerU’s case against Google, a California court dismissed contract claims because the YouTube terms of service contract was written to allow the platform to retain discretion over what is published. Specifically, the court found that there can be no implied covenant of good faith and fair dealing where “YouTube reserves the right to remove Content without prior notice” and to “discontinue any aspect of the Service at any time.”

Breach-of-contract claims for moderation practices are highly dependent on what is actually promised in the terms of service. For instance, under Facebook’s TOS the company retains the right “to remove or restrict access to content that is in violation” of its community standards. Facebook does provide a process for users to request further review, but retains the right to remove content. The community standards also give Facebook broad discretion to determine, among other things, what counts as hate speech or false news. It is exceedingly unlikely that a court would ever have a basis to find a contract violation by Facebook if the company can reasonably point to a user’s violation of its terms of service. 

For example, in Ebeid v. Facebook, the U.S. District Court for the Northern District of California dismissed fraud and breach-of-contract claims, finding that the plaintiff failed to allege which contractual provision Facebook breached, that Facebook retained discretion over which ads would be posted, and that the plaintiff suffered no damages because no money was actually spent on the ads. The court also dismissed an implied covenant of good faith and fair dealing claim because Facebook retained the right to “remove or disapprove any post or ad at Facebook’s sole discretion.”

While the conservative critique has been that social media platforms do too much moderation—in the form of politically biased removals, fact-checking, and demonetization—others believe platforms do far too little to restrain bad conduct by users. But as long as social media platforms retain editorial discretion in their terms of service and make no other promises that can be relied upon by their users, there is little basis for a contract claim. 

The First Amendment protects the moderation policies of social media platforms, and there is no way around this

With no viable cause of action for political bias under the law, conservatives instead dangle the threat of costly changes to Section 230 immunity in order to extract concessions from the platforms to alter their practices.

This is why there are no serious efforts to actually repeal Section 230, as President Trump has asked for repeatedly. Instead, several bills propose to amend Section 230, while a rulemaking by the FCC seeks to clarify its meaning. 

But none of these proposed bills would directly affect platforms’ ability to make “biased” moderation decisions. Put simply: the First Amendment protects social media platforms’ editorial discretion. They may set rules for the use of their platforms, just as any private person may set rules for their own property. If I kick someone off my property for saying racist things, the First Amendment (as well as ordinary property law) protects my right to do so. Only under extremely limited circumstances can the government change this baseline rule and survive constitutional scrutiny.

Social media platforms’ right to editorial discretion is the same as that enjoyed by newspapers. In Miami Herald Publishing Co. v. Tornillo, the Supreme Court found:

The choice of material to go into a newspaper, and the decisions made as to limitations on the size and content of the paper, and treatment of public issues and public officials—whether fair or unfair—constitute the exercise of editorial control and judgment. It has yet to be demonstrated how governmental regulation of this crucial process can be exercised consistent with First Amendment guarantees of a free press as they have evolved to this time. 

Social media platforms, just like any other property owner, have the right to determine what they want displayed on their property. In other words, Facebook, Google, and Twitter have the right to moderate content on news feeds, search results, and timelines. The attempted constitutional end-run—threatening to remove immunity for third-party content unrelated to political bias, like defamation and other tortious acts, unless social media platforms give up their right to editorial discretion over political speech—is just as unconstitutional as directly imposing “fairness” requirements on social media platforms.

The Supreme Court has held that Congress may not leverage a government benefit to regulate a speech interest outside of the benefit’s scope. This is called the unconstitutional conditions doctrine, and it limits how far the government may go in regulating speech through the subsidization of behavior. The government can’t condition a government benefit on giving up editorial discretion over political speech.

The point of Section 230 immunity is to remedy the moderator’s dilemma set up by Stratton Oakmont v. Prodigy, which held that if a platform chose to moderate third-party speech at all, it would be liable for what it failed to remove. Section 230 is not about compelling political neutrality on platforms; such a mandate could not be squared with the First Amendment. Civil immunity for third-party speech online is an important benefit for social media platforms because it ensures they are not liable for the acts of third parties, with limited exceptions. Without it, platforms would restrict opportunities for third parties to post out of fear of liability.

In sum, the government may not condition enjoyment of a government benefit upon giving up a constitutionally protected right. Section 230 immunity is a clear government benefit. The right to editorial discretion is clearly protected by the First Amendment. Because the entire point of conservative Section 230 reform efforts is to compel social media platforms to carry speech they otherwise desire to remove, it fails this basic test.

Conclusion

Fundamentally, the conservative push to reform Section 230 in response to the alleged anti-conservative bias of major social media platforms is not about policy. Really, it’s about waging a culture war against the perceived “liberal elites” from Silicon Valley, just as there is an ongoing culture war against perceived “liberal elites” in the mainstream media, Hollywood, and academia. But fighting this culture war is not worth giving up conservative principles of free speech, limited government, and free markets.