Right to Anonymous Speech, Part 2: A Law & Economics Approach

We at the International Center for Law & Economics (ICLE) have written extensively on the intersection of the First Amendment, the regulation of online platforms, and the immunity from liability for user-generated content granted to platforms under Section 230 of the Communications Decency Act of 1996.

One of our proposals is that Section 230 immunity should be conditioned on platforms making reasonable efforts to help potential plaintiffs track down users who engage in illegal conduct. This post is the second in a series on the right to anonymity. In this edition, I explore the degree to which the First Amendment protects the right to anonymous speech, and whether it forecloses such a statutory duty of care.

Below, I outline the general arguments we have made in favor of creating an obligation to identify harmful users, and show how they flow from the economics of intermediary liability, which places the burden of avoiding harm on the least-cost avoider. I then consider the First Amendment’s implications for anonymous speech, including when it comes to compelling access to information that could unmask bad actors. I argue that conditioning Section 230 immunity on a duty to unmask (given a proper process) is consistent with both the First Amendment and the underlying economics of intermediary liability, where the goal should be to provide accountability without imposing excessive collateral censorship.

ICLE Proposals and the Economics of Intermediary Liability

We at ICLE have argued that Section 230’s grant of immunity could be reformed to better hold bad actors accountable. Unlike many critics of Section 230, however, we understand that the law has also generated tremendous benefits in promoting free expression online. Thus, any reform must take potential costs (such as collateral censorship) into consideration, aided by a robust understanding of the law & economics of intermediary liability.

For example, in considering the 3rd U.S. Circuit Court of Appeals’ Oberdorf v. Amazon decision, Gus Hurwitz argued here at Truth on the Market that:

Section 230’s immunity could be attenuated by an obligation to facilitate the identification of users on that platform, subject to legal process, in proportion to the size and resources available to the platform, the technological feasibility of such identification, the foreseeability of the platform being used to facilitate harmful speech or conduct, and the expected importance (as defined from a First Amendment perspective) of speech on that platform. 

In other words, if there are readily available ways to establish some form of identity for users . . . and there is reason to expect that users of the platform could be subject to suit . . . then the platform needs to be reasonably able to provide reasonable information about speakers subject to legal action in order to avail itself of any Section 230 defense. Stated otherwise, platforms need to be able to reasonably comply with so-called unmasking subpoenas issued in the civil context to the extent such compliance is feasible for the platform’s size, sophistication, resources, etc.

Similarly, Geoffrey Manne, Kristian Stout, and I proposed in our paper, “Who Moderates the Moderators?”:

Thus, when a platform operates in a fashion that impedes the application of direct liability for user-generated content, it should be offered a choice: risk vicarious liability if a court finds that the law is broken (or a tort is caused) by content it hosts, or else mitigate whatever dynamic it has created that impedes direct law enforcement.

This follows from a proper understanding of intermediary liability:

In limited circumstances, the law should (and does) place responsibility on intermediaries to monitor and control conduct. It is not always sufficient to aim legal sanctions solely at the parties who commit harms directly—e.g., where harms are committed by many pseudonymous individuals dispersed across large online services. In such cases, social costs may be minimized when legal responsibility is placed upon the least-cost avoider: the party in the best position to limit harm, even if it is not the party directly committing the harm.
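
To make the least-cost-avoider logic concrete, here is a stylized sketch (my own illustrative notation, not drawn from the paper). Let $H$ be the expected harm, $c_d$ the social cost of deterring it through direct liability against the wrongdoer (including the cost of identifying dispersed, pseudonymous actors), and $c_p$ the cost of deterring it through platform responsibility. Minimizing social cost suggests:

$$\text{place responsibility with } \arg\min\{c_d,\; c_p\}, \qquad \text{provided } \min\{c_d,\; c_p\} < H.$$

Pseudonymity inflates $c_d$, which is precisely what can make the platform the least-cost avoider in the online setting.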

The application of intermediary liability to speech platforms must, of course, be limited by the threat of collateral censorship, against which the First Amendment (and Section 230) explicitly protects.

This is why we have advocated a balanced and proportional approach that would take into consideration the types of online platforms, how they are used, and the potential harms to society that result. As Gus put it in his version:

The proposal offered here is not that platforms be able to identify their speaker – it’s better described as that they not deliberately act as a liability shield. Its requirement is that platforms implement reasonable identity technology in proportion to their size, sophistication, and the likelihood of harmful speech on their platforms. A small platform for exchanging bread recipes would be fine to maintain a log of usernames and IP addresses. A large, well-resourced, platform hosting commercial activity (such as Amazon Marketplace) may be expected to establish a verified identity for the merchants it hosts. A forum known for hosting hate speech would be expected to have better identification records – it is entirely foreseeable that its users would be subject to legal action. A forum of support groups for marginalized and disadvantaged communities would face a lower obligation than a forum of similar size and sophistication known for hosting legally-actionable speech.

This proportionality approach also addresses the anonymous speech concern. Anonymous speech is often of great social and political value. But anonymity can also be used for, and as made amply clear in contemporary online discussion can bring out the worst of, speech that is socially and politically destructive. Tying Section 230’s immunity to the nature of speech on a platform gives platforms an incentive to moderate speech – to make sure that anonymous speech is used for its good purposes while working to prevent its use for its lesser purposes. This is in line with one of the defining goals of Section 230. 

In sum, the law & economics of intermediary liability suggests that the law should do more to encourage online platforms to hold bad actors to account. Here, this means helping to identify anonymous and pseudonymous actors who would otherwise evade legal accountability for conduct that would normally be considered tortious or criminal, rather than protected speech.

The Right to Anonymous Association, Speech, and the Ability to Unmask by Subpoena

The First Amendment does protect some degree of anonymity in the context of freedom of speech and freedom of association. In his recent (and excellent) book “The United States of Anonymous: How the First Amendment Shaped Online Speech,” Jeff Kosseff of the U.S. Naval Academy explores the history of First Amendment jurisprudence on anonymous speech, and how it applies online. Of particular importance for our analysis here is that he details the legal history of how government subpoena powers to unmask anonymous and pseudonymous online actors are limited by the First Amendment.

The original First Amendment cases on anonymity dealt with the rights of alleged communists and civil rights groups to associate free from government investigations and “transparency” rules that would have opened them to public abuse and harassment. In a series of cases (1957’s Watkins v. United States, 1958’s NAACP v. Alabama, 1960’s Bates v. Little Rock, and 1963’s Gibson v. Florida Legislative Investigation Committee), the Supreme Court protected the right to anonymous association. 

Over time, the Supreme Court extended this right to cover anonymously passing out handbills (1960’s Talley v. California); anonymously distributing political leaflets (1995’s McIntyre v. Ohio Elections Comm’n); and circulating ballot-initiative petitions without name-identification badges (1999’s Buckley v. American Constitutional Law Foundation Inc.).

This right was also considered by federal courts in cases dealing with expressive conduct, namely the right of Ku Klux Klan members to wear masks in public. First in Goshen, Indiana, and then in New York City, local jurisdictions enacted laws aimed at unmasking Klan members when they gathered in public. Federal courts, however, differed in their understanding of the ordinances.

In American Knights of the Ku Klux Klan v. City of Goshen (1999), the U.S. District Court for the Northern District of Indiana detailed the Supreme Court’s precedents on anonymous speech:

In Talley v. California, the Court held void on its face a Los Angeles ordinance requiring any handbill to contain the names of the persons who wrote it and distributed it. 362 U.S. at 64-65, 80 S.Ct. 536. The ordinance, the Court found, might deter perfectly peaceful discussions of matters of importance; “[t]here can be no doubt that such an identification requirement would tend to restrict freedom to distribute information and thereby freedom of expression.” Id. Noting that “one who is rightfully on a street … carries with him there as elsewhere the constitutional right to express his views in an orderly fashion … by handbills and literature as well as by the spoken word,” the Court rejected the city’s attempt to justify the ordinance as aiding in “identify[ing] those responsible for fraud, false advertising and libel.” Id. at 63-64, 80 S.Ct. 536 (quoting Jamison v. State of Texas, 318 U.S. 413, 416, 63 S.Ct. 669, 87 L.Ed. 869 (1943)). Likewise, in McIntyre v. Ohio Elections Comm’n, 514 U.S. 334, 115 S.Ct. 1511, 131 L.Ed.2d 426 (1995), the Court struck down an ordinance prohibiting the anonymous distribution of political leaflets. “[A]n author’s decision to remain anonymous, like other decisions concerning omissions or additions to the content of a publication, is an aspect of the freedom of speech protected by the First Amendment.” Id. at 342, 115 S.Ct. 1511. Finding the category of speech — “advocacy of a politically controversial viewpoint” — to be “the essence of First Amendment expression,” the court applied “exacting scrutiny” and struck the ordinance as not being narrowly tailored to serve an overriding state interest. Id. at 347, 115 S.Ct. 1511.

Most recently, in Buckley v. American Constitutional Law Foundation, Inc., 525 U.S. 182, 119 S.Ct. 636, 142 L.Ed.2d 599 (1999), the Court struck down a Colorado law requiring initiative petition circulators to wear identification badges bearing their names. Three petition circulators testified that the badge law kept potential circulators from circulating petitions in public; one also testified about harassment he experienced as a circulator of a hemp initiative petition. Id. at 645. The Court applied “exacting scrutiny” in weighing the injury to speech against the State’s proffered interest of “enabl[ing] the public to identify, and the State to apprehend, petition circulators who engage in misconduct.” Id. at 645-646. The court concluded that the badge requirement “discourage[d] participation in the petition circulation process by forcing name identification without sufficient cause.” Id. at 646.

The district court then struck down the ordinance on summary judgment, finding that:

Even if the court could find that the anti-mask ordinance would help to prevent violence, the court could not find that this ordinance is narrowly-tailored to reach that end… The ordinance prohibits anonymity and/or has the effect of directly chilling speech, which amount to serious and far-reaching limitations on free speech and association. The ordinance simply burdens more speech than is necessary to serve Goshen’s purpose of preventing violence.

On the other hand, the 2nd U.S. Circuit Court of Appeals found in 2004’s Church of the American Knights of the Ku Klux Klan v. Kerik that, while wearing robes and masks was certainly expressive conduct, the mask was “optional” and “redundant” as a means of communicating the organization’s message. In its review of Supreme Court precedent, the panel stated:

These Supreme Court decisions establish that the First Amendment is implicated by government efforts to compel disclosure of names in numerous speech-related settings, whether the names of an organization’s members, the names of campaign contributors, the names of producers of political leaflets, or the names of persons who circulate petitions. In contrast, the Supreme Court has never held that freedom of association or the right to engage in anonymous speech entails a right to conceal one’s appearance in a public demonstration. Nor has any Circuit found such a right. We decline the American Knights’ request to extend the holdings of NAACP v. Alabama and its progeny and to hold that the concealment of one’s face while demonstrating is constitutionally protected.

The 2nd Circuit thus refused to strike down the anti-mask law, finding that “[b]ecause the anti-mask law regulates the conduct of mask wearing, and does so in a constitutionally legitimate manner… We hold that New York’s anti-mask law is not facially unconstitutional.” This left some ambiguity in the law regarding when, exactly, the First Amendment protects anonymous public expression.

In his terrific storytelling, Kosseff details the history of how online message boards rife with pseudonymous “cyber-smearing” of publicly traded companies led to the first applications of these precedents to the online world. In two major state-court cases (2001’s Dendrite International Inc. v. Doe in New Jersey and 2005’s Cahill v. John Doe-Number One in Delaware), courts laid out a balanced approach to deciding when to quash subpoenas seeking to unmask anonymous online posters. In Cahill, in particular, the court made five main points in establishing its balancing test:

  1. There is a First Amendment right to anonymous speech, citing the caselaw reviewed above; 
  2. The right to anonymous speech applies to the internet;
  3. The First Amendment does not protect defamation;
  4. The Dendrite standard goes too far in protecting anonymous speech (“The concern that animates the Dendrite standard, at first glance, makes perfect sense: if subpoenas can be obtained merely by filing suit, people will be reluctant to speak their mind knowing that their anonymity is tenuous and that retribution for whatever they might say is all the more likely. This is hardly a frivolous consideration. Nevertheless, the Dendrite standard goes further than is necessary to protect the anonymous speaker and, by doing so, unfairly limits access to the civil justice system as a means to redress the harm to reputation caused by defamatory speech. Specifically, under Dendrite, the plaintiff is put to the nearly impossible task of demonstrating as a matter of law that a publication is defamatory before he serves his complaint or even knows the identity of the defendant(s)”); and 
  5. The court proposes an alternative standard, drawn from the America Online case, which would “require[] the plaintiff to satisfy the court that he has a ‘good faith basis to contend that [he] may be the victim of [actionable] conduct,’ and that the identity information being sought is ‘centrally needed to advance that claim.’”

As Kosseff explains, federal and state courts have since applied this test to unmasking subpoenas, striking a fair balance that protects anonymous online speech while still allowing those harmed by illegal conduct or speech to unmask bad actors.

But it nonetheless leaves thorny questions about online platforms that continue to host defamatory (or otherwise illegal) speech even after it has been adjudicated as such. In cases such as 2018’s Hassell v. Bird, courts have found that Section 230 immunity continues to protect such platforms.

Striking the Balance: Conditioning Section 230 Immunity on Unmasking Bad Actors

In order to properly hold bad online actors accountable, some kind of reform of Section 230 immunity seems necessary. The proposal here is that Section 230 be amended to include a duty for interactive computer services to take reasonable steps to identify users who engage in reasonably foreseeable illegal conduct or speech. This duty would be tempered in two ways: a hard duty to remove content would arise only after the conduct or speech has been adjudicated illegal, and the actual unmasking of users would be subject to the Cahill standard.

If an interactive computer service fails to take reasonable steps either to remove conduct or speech that has been adjudicated illegal, or to help ascertain the identity of bad actors against whom claims have been sufficiently pled, then it steps into the shoes of the defendants under intermediary-liability principles, subject to whatever laws the plaintiffs actually plead.

This approach attempts to balance the need for accountability with the importance of anonymous online speech. Consistent with Supreme Court precedent, this would allow a wide range of speech and expressive conduct—even behind an online mask, such as a pseudonym. But it would also prevent bad actors from hiding behind such masks when they cause cognizable legal harm. Only when online platforms purposefully set up barriers to accountability should they be held liable themselves.

In such cases, intermediary liability makes the most sense, because online services have the ability to monitor and control their platforms. When they are the “least-cost avoider” and the threat of collateral censorship is low, the duty should fall on them.

But the standard should sound in negligence rather than strict liability, because the overdeterrence of beneficial speech (including anonymous speech) would be inconsistent with First Amendment values and with the economic rationale that undergirds them (namely, to promote the production and spread of information, which is a public good). As we put it in our blog post on Twitter v. Taamneh:

In some circumstances, intermediaries (like Twitter) may be the least-cost avoider, such as when information costs are sufficiently low that effective monitoring and control of end users is possible, and when pseudonymity makes remedies against end users ineffective…. But there are costs to imposing such liability—including, importantly, “collateral censorship” of user-generated content by online social-media platforms. This manifests in platforms acting more defensively—taking down more speech, and generally moving in a direction that would make the Internet less amenable to open, public discussion—in an effort to avoid liability. Indeed, a core reason that Section 230 exists in the first place is to reduce these costs.

From an economic perspective, liability should be imposed on the party or parties best positioned to deter the harms in question, so long as the social costs incurred by, and as a result of, enforcement do not exceed the social gains realized. In other words, a delicate balance must be struck to determine when intermediary liability makes sense in a given case. On the one hand, we want illicit content to be deterred; on the other, we want to preserve the open nature of the Internet. The costs generated by the over-deterrence of legal, beneficial speech are why intermediary liability for user-generated content can’t be applied on a strict-liability basis, and why some bad content will always exist in the system.
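
This balance can be restated as a simple condition (again, an illustrative sketch with assumed notation, not a formula from the sources quoted above): imposing intermediary liability is socially worthwhile only if

$$\Delta H \;>\; C_{\text{enforcement}} + C_{\text{collateral}},$$

where $\Delta H$ is the expected harm deterred by shifting liability to the platform, $C_{\text{enforcement}}$ is the cost of administering that liability, and $C_{\text{collateral}}$ is the value of lawful, beneficial speech the platform defensively removes. A negligence standard, as argued above, keeps $C_{\text{collateral}}$ lower than strict liability would.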

Conclusion

The best answer to online speech harms will usually be to go after the bad actors themselves. But in an online world of anonymous and pseudonymous speech, that can be quite difficult to do. A reform of Section 230 that conditions immunity on a duty of care to take reasonable steps to identify bad actors whose illegal conduct is reasonably foreseeable, combined with a subpoena standard that balances the interests of anonymous speakers against the need for legal accountability, would go a long way toward properly adjusting the balance of interests.