Archives for First Amendment

[The following post was adapted from the International Center for Law & Economics White Paper “Polluting Words: Is There a Coasean Case to Regulate Offensive Speech?”]

Words can wound. They can humiliate, anger, insult.

University students—or, at least, a vociferous minority of them—are keen to prevent this injury by suppressing offensive speech. To ensure campuses are safe places, they militate for the cancellation of talks by speakers with opinions they find offensive, often successfully. And they campaign to get offensive professors fired from their jobs.

Off campus, some want this safety to be extended to the online world and, especially, to the users of social media platforms such as Twitter and Facebook. In the United States, this would mean weakening the legal protections of offensive speech provided by Section 230 of the Communications Decency Act (as President Joe Biden has recommended) or by the First Amendment. In the United Kingdom, the Online Safety Bill is now before Parliament. If passed, it will give a U.K. government agency the power to dictate the content-moderation policies of social media platforms.

You don’t need to be a woke university student or grandstanding politician to suspect that society suffers from an overproduction of offensive speech. Basic economics provides a reason to suspect it—the reason being that offense is an external cost of speech. The cost is borne not by the speaker but by his audience. And when people do not bear all the costs of an action, they do it too much.

Jack tweets “women don’t have penises.” This offends Jill, who is someone with a penis who considers herself (or himself, if Jack is right) to be a woman. And it offends many others, who agree with Jill that Jack is indulging in ugly transphobic biological essentialism. Lacking Bill Clinton’s facility for feeling the pain of others, Jack does not bear this cost. So, even if it exceeds whatever benefit Jack gets from saying that women don’t have penises, he will still say it. In other words, he will say it even when doing so makes society altogether worse off.
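For the numerically inclined, here is the same point as a toy calculation in Python. It is only a sketch of the reasoning above; the dollar figures, and the very idea of pricing Jack’s benefit and Jill’s offense, are assumptions made purely for illustration, not anything found in the white paper.

```python
# Toy illustration of an external cost of speech (all figures invented).
# Jack weighs only his own benefit and cost; the social calculus also
# counts the offense borne by Jill and other readers.

jack_benefit = 10             # value Jack gets from sending the tweet
jack_private_cost = 0         # Jack feels none of the offense he causes
offense_borne_by_others = 50  # cost to Jill and similarly offended readers

jack_tweets = jack_benefit > jack_private_cost
social_net_benefit = jack_benefit - (jack_private_cost + offense_borne_by_others)

print(jack_tweets)            # True: the tweet gets sent
print(social_net_benefit)     # -40: society as a whole is worse off
```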

It shouldn’t be allowed!

That’s what we normally say when actions harm others more than they benefit the agent. The law normally conforms to John Stuart Mill’s “Harm Principle” by restricting activities—such as shooting people or treating your neighbors to death metal at 130 decibels at 2 a.m.—with material external costs. Those who seek legal reform to restrict offensive speech are surely doing no more than following an accepted general principle.

But it’s not so simple. As Ronald Coase pointed out in his famous 1960 article “The Problem of Social Cost,” externalities are a reciprocal problem. If Wayne had no neighbors, his playing death metal at 130 decibels at 2 a.m. would have no external costs. His neighbors’ choice of address is equally a source of the problem. Similarly, if Jill weren’t a Twitter user, she wouldn’t have been offended by Jack’s tweet about who has a penis, since she wouldn’t have encountered it. Externalities are like tangos: they always have at least two perpetrators.

So, the legal question, “who should have a right to what they want?”—Wayne to his loud music or his neighbors to their sleep; Jack to expressing his opinion about women or Jill to not hearing such opinions—cannot be answered by identifying the party who is responsible for the external cost. Both parties are responsible.

How, then, should the question be answered? In the same paper, Coase showed that, in certain circumstances, who the courts favor will make no difference to what ends up happening, and that what ends up happening will be efficient. Suppose the court says that Wayne cannot bother his neighbors with death metal at 2 a.m. If Wayne would be willing to pay $100,000 to keep doing it and his neighbors, combined, would put up with it for anything more than $95,000, then they should be able to arrive at a mutually beneficial deal whereby Wayne pays them something between $95,000 and $100,000 to forgo their right to stop him making his dreadful noise.

That’s not exactly right. If negotiating a deal would cost more than $5,000, then no mutually beneficial deal is possible and the rights-trading won’t happen. Only when transaction costs are less than the difference between the two parties’ valuations does the allocation of legal rights make no difference to how resources get used, with efficiency achieved in any event.
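The bargaining arithmetic is easy to sketch. The snippet below is a minimal illustration using the figures from the Wayne example; the function name and the transaction-cost numbers are mine, not Coase’s or the white paper’s.

```python
# A Coasean bargain is feasible only if the gap between what the rights-buyer
# would pay and what the rights-holder would accept exceeds the cost of
# striking the deal. Figures are the illustrative ones from the text.

def bargain_is_feasible(willingness_to_pay: float,
                        willingness_to_accept: float,
                        transaction_cost: float) -> bool:
    surplus = willingness_to_pay - willingness_to_accept
    return surplus > transaction_cost

# Wayne values his 2 a.m. death metal at $100,000; the neighbors would put
# up with it for anything more than $95,000.
print(bargain_is_feasible(100_000, 95_000, 4_000))  # True: rights get traded
print(bargain_is_feasible(100_000, 95_000, 6_000))  # False: no deal is possible
```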

But it is an unusual circumstance, especially when the external cost is suffered by many people. When the transaction cost is too high, efficiency does depend on the allocation of rights by courts or legislatures. As Coase argued, when this is so, efficiency will be served if a right to the disputed resource is granted to the party with the higher cost of avoiding the externality.

Given the (implausible) valuations Wayne and his neighbors place on the amount of noise in their environment at 2 a.m., efficiency is served by giving Wayne the right to play his death metal, unless he could soundproof his house, play his music at a much lower volume, or take some other avoidance measure that costs him less than the $95,000 cost to his neighbors.
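When bargaining is blocked, the least-cost-avoider rule described above can also be sketched in a few lines. Again, this is only an illustration; the $40,000 soundproofing figure is an assumption invented for the example.

```python
# When transaction costs prevent a deal, Coasean efficiency says the right
# should go to the party whose cost of avoiding the externality is higher,
# so that the cheaper avoidance measure is the one that gets taken.

def efficient_rights_holder(cost_to_offended: float,
                            avoidance_cost_of_speaker: float) -> str:
    if avoidance_cost_of_speaker < cost_to_offended:
        return "offended party"  # the speaker (or noise-maker) should avoid
    return "speaker"             # the offended party should avoid (move, mute, log off)

# Soundproofing Wayne's house for an assumed $40,000 is cheaper than the
# $95,000 his noise costs the neighbors, so the neighbors get the right.
print(efficient_rights_holder(cost_to_offended=95_000,
                              avoidance_cost_of_speaker=40_000))  # "offended party"
```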

And given that Jack’s tweet about penises offends a large open-ended group of people, with whom Jack therefore cannot negotiate, it looks like they should be given the right not to be offended by Jack’s comment and he should be denied the right to make it. Coasean logic supports the woke censors!          

But, again, it’s not that simple—for two reasons.

The first is that, although those who are offended may be harmed by the offending speech, they need not be. Physical pain is usually harmful, but not when experienced by a sexual masochist (in the right circumstances, of course). Similarly, many people take masochistic pleasure in being offended. You can tell they do, because they actively seek out the sources of their suffering. They are genuinely offended, but the offense isn’t harming them, just as the sexual masochist really is in physical pain but isn’t harmed by it. Indeed, real pain and real offense are required, respectively, for the satisfaction of the sexual masochist and the offense masochist.

How many of the offended are offense masochists? Where the offensive speech can be avoided at minimal cost, the answer must be most. Why follow Jordan Peterson on Twitter when you find his opinions offensive unless you enjoy being offended by him? Maybe some are keeping tabs on the dreadful man so that they can better resist him, and they take the pain for that reason rather than for masochistic glee. But how could a legislator or judge know? For all they know, most of those offended by Jordan Peterson are offense masochists and the offense he causes is a positive externality.

The second reason Coasean logic doesn’t support the would-be censors is that social media platforms—the venues of offensive speech that they seek to regulate—are privately owned. To see why this is significant, consider not offensive speech, but an offensive action, such as openly masturbating on a bus.

This is prohibited by law. But it is not the mere act that is illegal. You are allowed to masturbate in the privacy of your bedroom. You may not masturbate on a bus because those who are offended by the sight of it cannot easily avoid it. That’s why it is illegal to express obscenities about Jesus on a billboard erected across the road from a church but not at a meeting of the Angry Atheists Society. The laws that prohibit offensive speech in such circumstances—laws against public nuisance, harassment, public indecency, etc.—are generally efficient. The cost they impose on the offenders is less than the benefits to the offended.

But they are unnecessary when the giving and taking of offense occur within a privately owned place. Suppose no law prohibited masturbating on a bus. It still wouldn’t be allowed on buses owned by a profit-seeker. Few people want to masturbate on buses and most people who ride on buses seek trips that are masturbation-free. A prohibition on masturbation will gain the owner more customers than it loses him. The prohibition is simply another feature of the product offered by the bus company. Nice leather seats, punctual departures, and no wankers (literally). There is no more reason to believe that the bus company’s passenger-conduct rules will be inefficient than that its other product features will be and, therefore, no more reason to legally stipulate them.

The same goes for the content-moderation policies of social media platforms. They are just another product feature offered by a profit-seeking firm. If they repel more customers than they attract (or, more accurately, if they repel more advertising revenue than they attract), they would be inefficient. But then, of course, the company would not adopt them.

Of course, the owner of a social media platform might not be a pure profit-maximiser. For example, he might forgo $10 million in advertising revenue for the sake of banning speakers he personally finds offensive. But the outcome is still efficient. Allowing the speech would have cost more by way of the owner’s unhappiness than the lost advertising would have been worth.  And such powerful feelings in the owner of a platform create an opportunity for competitors who do not share his feelings. They can offer a platform that does not ban the offensive speakers and, if enough people want to hear what they have to say, attract users and the advertising revenue that comes with them. 

If efficiency is your concern, there is no problem for the authorities to solve. Indeed, the idea that the authorities would do a better job of deciding content-moderation rules is not merely absurd, but alarming. Politicians and the bureaucrats who answer to them or are appointed by them would use the power not to promote efficiency, but to promote agendas congenial to them. Jurisprudence in liberal democracies—and, especially, in America—has been suspicious of governmental control of what may be said. Nothing about social media provides good reason to become any less suspicious.

In his recent concurrence in Biden v. Knight, Justice Clarence Thomas sketched a roadmap for how to regulate social-media platforms. The animating factor for Thomas, much like for other conservatives, appears to be a sense that Big Tech has exhibited anti-conservative bias in its moderation decisions, most prominently by excluding former President Donald Trump from Twitter and Facebook. The opinion has predictably been greeted warmly by conservative champions of social-media regulation, who believe it shows how states and the federal government can proceed on this front.

While much of the commentary to date has been on whether Thomas got the legal analysis right, or on the uncomfortable fit of common-carriage law to social media, the deeper question of the First Amendment’s protection of private ordering has received relatively short shrift.

Conservatives’ main argument has been that Big Tech needs to be reined in because it is restricting the speech of private individuals. While conservatives traditionally have defended the state-action doctrine and the right to editorial discretion, they now readily find exceptions to both in order to justify regulating social-media companies. But those two First Amendment doctrines have long enshrined an important general principle: private actors can set the rules for speech on their own property. I intend to analyze this principle from a law & economics perspective and show how it benefits society.

Who Balances the Benefits and Costs of Speech?

Like virtually any other human activity, speech has benefits and costs, and it is ultimately subjective individual preference that determines the value that speech has. The First Amendment protects speech from governmental regulation, with only limited exceptions, but that does not mean all speech is acceptable or must be tolerated. Under the state-action doctrine, the First Amendment only prevents the government from restricting speech.

Some purported defenders of the principle of free speech no longer appear to see a distinction between restraints on speech imposed by the government and those imposed by private actors. But this is surely mistaken, as no one truly believes all speech protected by the First Amendment should be without consequence. In truth, most regulation of speech has always come by informal means—social mores enforced by dirty looks or responsive speech from others.

Moreover, property rights have long played a crucial role in determining speech rules within any given space. If a man were to come into my house and start calling my wife racial epithets, I would not only ask that person to leave but would exercise my right as a property owner to eject the trespasser—if necessary, calling the police to assist me. I similarly could not expect to go to a restaurant and yell at the top of my lungs about political issues and expect them—even as “common carriers” or places of public accommodation—to allow me to continue.

As Thomas Sowell wrote in Knowledge and Decisions:

The fact that different costs and benefits must be balanced does not in itself imply who must balance them―or even that there must be a single balance for all, or a unitary viewpoint (one “we”) from which the issue is categorically resolved.

Knowledge and Decisions, p. 240

When it comes to speech, the balance that must be struck is between one individual’s desire for an audience and that prospective audience’s willingness to play the role. Asking government to use regulation to make categorical decisions for all of society is substituting centralized evaluation of the costs and benefits of access to communications for the individual decisions of many actors. Rather than incremental decisions regarding how and under what terms individuals may relate to one another—which can evolve over time in response to changes in what individuals find acceptable—government by its nature can only hand down categorical guidelines: “you must allow x, y, and z speech.”

This is particularly relevant in the sphere of social media. Social-media companies are multi-sided platforms. They are profit-seeking, to be sure, but the way they generate profits is by acting as intermediaries between users and advertisers. If they fail to serve their users well, those users could abandon the platform. Without users, advertisers would have no interest in buying ads. And without advertisers, there is no profit to be made. Social-media companies thus need to maximize the value of their platform by setting rules that keep users engaged.

In the cases of Facebook, Twitter, and YouTube, the platforms have set content-moderation standards that restrict many kinds of speech that are generally viewed negatively by users, even if the First Amendment would foreclose the government from regulating those same types of content. This is a good thing. Social-media companies balance the speech interests of different kinds of users to maximize the value of the platform and, in turn, to maximize benefits to all.

Herein lies the fundamental difference between private action and state action: one is voluntary, and the other based on coercion. If Facebook or Twitter suspends a user for violating community rules, it represents termination of a previously voluntary association. If the government kicks someone out of a public forum for expressing legal speech, that is coercion. The state-action doctrine recognizes this fundamental difference and creates a bright-line rule that courts may police when it comes to speech claims. As Sowell put it:

The courts’ role as watchdogs patrolling the boundaries of governmental power is essential in order that others may be secure and free on the other side of those boundaries. But what makes watchdogs valuable is precisely their ability to distinguish those people who are to be kept at bay and those who are to be left alone. A watchdog who could not make that distinction would not be a watchdog at all, but simply a general menace.

Knowledge and Decisions, p. 244

Markets Produce the Best Moderation Policies

The First Amendment also protects the right of editorial discretion, which means publishers, platforms, and other speakers are free from carrying or transmitting government-compelled speech. Even a newspaper with near-monopoly power cannot be compelled by a right-of-reply statute to carry responses by political candidates to editorials it has published. In other words, not only is private regulation of speech not state action, but in many cases, private regulation is protected by the First Amendment.

There is no reason to think that social-media companies today are in a different position than was the newspaper in Miami Herald v. Tornillo. These companies must determine what, how, and where content is presented within their platform. While this right of editorial discretion protects the moderation decisions of social-media companies, its benefits accrue to society at-large.

Social-media companies’ abilities to differentiate themselves based on functionality and moderation policies are important aspects of competition among them. How each platform is used may differ depending on those factors. In fact, many consumers use multiple social-media platforms throughout the day for different purposes. Market competition, not government power, has enabled internet users (including conservatives!) to have more avenues than ever to get their message out.

Many conservatives remain unpersuaded by the power of markets in this case. They see multiple platforms all engaging in very similar content-moderation policies when it comes to certain touchpoint issues, and thus allege widespread anti-conservative bias and collusion. Neither of those claims has much factual support, but more importantly, the similarity of content-moderation standards may simply reflect common responses to similar demand structures—not some nefarious and conspiratorial plot.

In other words, if social-media users demand less of the kinds of content commonly considered to be hate speech, or less misinformation on certain important issues, platforms will do their best to weed those things out. Platforms won’t always get these determinations right, but it is by no means clear that forcing them to carry all “legal” speech—which would include not just misinformation and hate speech, but pornographic material, as well—would better serve social-media users. There are always alternative means to debate contestable issues of the day, even if it may be more costly to access them.

Indeed, that content-moderation policies make it more difficult to communicate some messages is precisely the point of having them. There is a subset of protected speech to which many users do not wish to be subject. Moreover, there is no inherent right to have an audience on a social-media platform.

Conclusion

Much of the First Amendment’s economic value lies in how it defines roles in the market for speech. As a general matter, it is not the government’s place to determine what speech should be allowed in private spaces. Instead, the private ordering of speech emerges through the application of social mores and property rights. This benefits society, as it allows individuals to create voluntary relationships built on marginal decisions about what speech is acceptable when and where, rather than on centralized decisions, made by a governing few, that are difficult to change over time.

In what has become regularly scheduled programming on Capitol Hill, Facebook CEO Mark Zuckerberg, Twitter CEO Jack Dorsey, and Google CEO Sundar Pichai will be subject to yet another round of congressional grilling—this time, about the platforms’ content-moderation policies—during a March 25 joint hearing of two subcommittees of the House Energy and Commerce Committee.

The stated purpose of this latest bit of political theatre is to explore, as made explicit in the hearing’s title, “social media’s role in promoting extremism and misinformation.” Specific topics are expected to include proposed changes to Section 230 of the Communications Decency Act, heightened scrutiny by the Federal Trade Commission, and misinformation about COVID-19—the subject of new legislation introduced by Rep. Jennifer Wexton (D-Va.) and Sen. Mazie Hirono (D-Hawaii).

But while many in the Democratic majority argue that social media companies have not done enough to moderate misinformation or hate speech, it is a problem with no realistic legal fix. Any attempt to mandate removal of speech on grounds that it is misinformation or hate speech, either directly or indirectly, would run afoul of the First Amendment.

Much of the recent focus has been on misinformation spread on social media about the 2020 election and the COVID-19 pandemic. The memorandum for the March 25 hearing sums it up:

Facebook, Google, and Twitter have long come under fire for their role in the dissemination and amplification of misinformation and extremist content. For instance, since the beginning of the coronavirus disease of 2019 (COVID-19) pandemic, all three platforms have spread substantial amounts of misinformation about COVID-19. At the outset of the COVID-19 pandemic, disinformation regarding the severity of the virus and the effectiveness of alleged cures for COVID-19 was widespread. More recently, COVID-19 disinformation has misrepresented the safety and efficacy of COVID-19 vaccines.

Facebook, Google, and Twitter have also been distributors for years of election disinformation that appeared to be intended either to improperly influence or undermine the outcomes of free and fair elections. During the November 2016 election, social media platforms were used by foreign governments to disseminate information to manipulate public opinion. This trend continued during and after the November 2020 election, often fomented by domestic actors, with rampant disinformation about voter fraud, defective voting machines, and premature declarations of victory.

It is true that, despite social media companies’ efforts to label and remove false content and bar some of the biggest purveyors, there remains a considerable volume of false information on social media. But U.S. Supreme Court precedent consistently has limited government regulation of false speech to distinct categories like defamation, perjury, and fraud.

The Case of Stolen Valor

The court’s 2012 decision in United States v. Alvarez struck down as unconstitutional the Stolen Valor Act of 2005, which made it a federal crime to falsely claim to have earned a military medal. A four-justice plurality opinion written by Justice Anthony Kennedy and a two-justice concurrence both agreed that a statement being false did not, by itself, exclude it from First Amendment protection.

Kennedy’s opinion noted that while the government may impose penalties for false speech connected with the legal process (perjury or impersonating a government official); with receiving a benefit (fraud); or with harming someone’s reputation (defamation); the First Amendment does not sanction penalties for false speech, in and of itself. The plurality exhibited particular skepticism toward the notion that government actors could be entrusted as a “Ministry of Truth,” empowered to determine what categories of false speech should be made illegal:

Permitting the government to decree this speech to be a criminal offense, whether shouted from the rooftops or made in a barely audible whisper, would endorse government authority to compile a list of subjects about which false statements are punishable. That governmental power has no clear limiting principle. Our constitutional tradition stands against the idea that we need Oceania’s Ministry of Truth… Were this law to be sustained, there could be an endless list of subjects the National Government or the States could single out… Were the Court to hold that the interest in truthful discourse alone is sufficient to sustain a ban on speech, absent any evidence that the speech was used to gain a material advantage, it would give government a broad censorial power unprecedented in this Court’s cases or in our constitutional tradition. The mere potential for the exercise of that power casts a chill, a chill the First Amendment cannot permit if free speech, thought, and discourse are to remain a foundation of our freedom. [EMPHASIS ADDED]

As noted in the opinion, declaring false speech illegal constitutes a content-based restriction subject to “exacting scrutiny.” Applying that standard, the court found “the link between the Government’s interest in protecting the integrity of the military honors system and the Act’s restriction on the false claims of liars like respondent has not been shown.” 

While finding that the government “has not shown, and cannot show, why counterspeech would not suffice to achieve its interest,” the plurality suggested a more narrowly tailored solution could be simply to publish Medal of Honor recipients in an online database. In other words, the government could overcome the problem of false speech by promoting true speech. 

In 2013, President Barack Obama signed an updated version of the Stolen Valor Act that limited its penalties to situations where a misrepresentation is shown to result in receipt of some kind of benefit. That places the false speech in the category of fraud, consistent with the Alvarez opinion.

A Social Media Ministry of Truth

Applying the Alvarez standard to social media, the government could (and already does) promote its interest in public health or election integrity by publishing true speech through official channels. But there is little reason to believe the government at any level could permissibly regulate access to misinformation. Anything approaching an outright ban on accessing speech deemed false by the government not only would not be the most narrowly tailored way to deal with such speech, but would also be bound to have chilling effects even on true speech.

The analysis doesn’t change if the government instead places Big Tech itself in the position of Ministry of Truth. Some propose making changes to Section 230, which currently immunizes social media companies from liability for user speech (with limited exceptions), regardless of what moderation policies the platform adopts. A hypothetical change might condition Section 230’s liability shield on platforms agreeing to moderate certain categories of misinformation. But that would still place the government in the position of coercing platforms to take down speech.

Even the “fix” of making social media companies liable for user speech they amplify through promotions on the platform, as proposed by Sen. Mark Warner’s (D-Va.) SAFE TECH Act, runs into First Amendment concerns. The aim of the bill is to treat sponsored content as speech made by the platform itself, thus opening the platform to liability for the underlying misinformation. But any such liability also would be limited to categories of speech that fall outside First Amendment protection, like fraud or defamation. This would not appear to include most of the types of misinformation on COVID-19 or election security that animate the current legislative push.

There is no way for the government to regulate misinformation, in and of itself, consistent with the First Amendment. Big Tech companies are free to develop their own policies against misinformation, but the government may not force them to do so. 

Extremely Limited Room to Regulate Extremism

The Big Tech CEOs are also almost certain to be grilled about the use of social media to spread “hate speech” or “extremist content.” The memorandum for the March 25 hearing sums it up like this:

Facebook executives were repeatedly warned that extremist content was thriving on their platform, and that Facebook’s own algorithms and recommendation tools were responsible for the appeal of extremist groups and divisive content. Similarly, since 2015, videos from extremists have proliferated on YouTube; and YouTube’s algorithm often guides users from more innocuous or alternative content to more fringe channels and videos. Twitter has been criticized for being slow to stop white nationalists from organizing, fundraising, recruiting and spreading propaganda on Twitter.

Social media has often played host to racist, sexist, and other types of vile speech. While social media companies have community standards and other policies that restrict “hate speech” in some circumstances, there is demand from some public officials that they do more. But under a First Amendment analysis, regulating hate speech on social media would fare no better than the regulation of misinformation.

The First Amendment doesn’t allow for the regulation of “hate speech” as its own distinct category. Hate speech is, in fact, as protected as any other type of speech. There are some limited exceptions, as the First Amendment does not protect incitement, true threats of violence, or “fighting words.” Some of these flatly do not apply in the online context. “Fighting words,” for instance, applies only in face-to-face situations to “those personally abusive epithets which, when addressed to the ordinary citizen, are, as a matter of common knowledge, inherently likely to provoke violent reaction.”

One relevant precedent is the court’s 1992 decision in R.A.V. v. St. Paul, which considered a local ordinance in St. Paul, Minnesota, prohibiting public expressions that served to cause “outrage, alarm, or anger with respect to racial, gender or religious intolerance.” A juvenile was charged with violating the ordinance when he created a makeshift cross and lit it on fire in front of a black family’s home. The court unanimously struck down the ordinance as a violation of the First Amendment, finding it an impermissible content-based restraint that was not limited to incitement or true threats.

By contrast, in 2003’s Virginia v. Black, the Supreme Court upheld a Virginia law outlawing cross burnings done with the intent to intimidate. The court’s opinion distinguished R.A.V. on grounds that the Virginia statute didn’t single out speech regarding disfavored topics. Instead, it was aimed at speech that had the intent to intimidate regardless of the victim’s race, gender, religion, or other characteristic. But the court was careful to limit government regulation of hate speech to instances that involve true threats or incitement.

When it comes to incitement, the legal standard was set by the court’s landmark Brandenburg v. Ohio decision in 1969, which laid out that:

the constitutional guarantees of free speech and free press do not permit a State to forbid or proscribe advocacy of the use of force or of law violation except where such advocacy is directed to inciting or producing imminent lawless action and is likely to incite or produce such action. [EMPHASIS ADDED]

In other words, while “hate speech” is protected by the First Amendment, specific types of speech that convey true threats or fit under the related doctrine of incitement are not. The government may regulate those types of speech. And it does. In fact, social media users can be, and often are, charged with crimes for threats made online. But the government can’t issue a per se ban on hate speech or “extremist content.”

Just as with misinformation, the government also can’t condition Section 230 immunity on platforms removing hate speech. Insofar as speech is protected under the First Amendment, the government can’t specifically condition a government benefit on its removal. Even the SAFE TECH Act’s model for holding platforms accountable for amplifying hate speech or extremist content would have to be limited to speech that amounts to true threats or incitement. This is a far narrower category of hateful speech than the examples that concern legislators. 

Social media companies do remain free under the law to moderate hateful content as they see fit under their terms of service. Section 230 immunity is not dependent on whether companies do or don’t moderate such content, or on how they define hate speech. But government efforts to step in and define hate speech would likely run into First Amendment problems unless they stay focused on unprotected threats and incitement.

What Can the Government Do?

One may fairly ask what it is that governments can do to combat misinformation and hate speech online. The answer may be a law that requires takedowns by court order of speech after it is declared illegal, as proposed by the PACT Act, sponsored in the last session by Sens. Brian Schatz (D-Hawaii) and John Thune (R-S.D.). Such speech may, in some circumstances, include misinformation or hate speech.

But as outlined above, the misinformation that the government can regulate is limited to situations like fraud or defamation, while the hate speech it can regulate is limited to true threats and incitement. A narrowly tailored law that looked to address those specific categories may or may not be a good idea, but it would likely survive First Amendment scrutiny, and may even prove a productive line of discussion with the tech CEOs.

President Donald Trump has repeatedly called for repeal of Section 230. But while Trump and fellow conservatives decry Big Tech companies for their alleged anti-conservative bias, including at yet more recent hearings, their issue is not actually with Section 230. It’s with the First Amendment. 

Conservatives can’t actually do anything directly about how social media platforms moderate content because it is the First Amendment that grants those platforms a right to editorial discretion. Even FCC Commissioner Brendan Carr, who strongly opposes “Big Tech censorship,” recognizes this.

By the same token, even if one were to grant that conservatives are right about the bias of moderators at these large social media platforms, it does not follow that removal of Section 230 immunity would alter that bias. In fact, in a world without Section 230 immunity, there still would be no legal cause of action for political bias. 

The truth is that conservatives use Section 230 immunity for leverage over social media platforms. The hope is that, because social media platforms desire the protections of civil immunity for third-party content, they will follow whatever conditions the government puts on their editorial discretion. But the attempt to end-run the First Amendment’s protections is also unconstitutional.

There is no cause of action for political bias by online platforms if we repeal Section 230

Consider the counterfactual: if there were no Section 230 to immunize them from liability, under what law would platforms face a viable cause of action for political bias? Conservative critics never answer this question. Instead, they focus on the irrelevant distinction between publishers and platforms. Or they talk about how Section 230 is a giveaway to Big Tech. But none consider the actual relationship between Section 230 immunity and alleged political bias.

But let’s imagine we’ve done what President Trump has called for and repealed Section 230. Where does that leave conservatives?

Unfortunately, it leaves them without any cause of action. There is no law passed by Congress or any state legislature, no regulation promulgated by the Federal Communications Commission or the Federal Trade Commission, no common law tort action that can be asserted against online platforms to force them to carry speech they don’t wish to carry. 

The difficulties of pursuing a contract claim for political bias

The best argument for conservatives is that, without Section 230 immunity, online platforms could be more easily held to any contractual restraints in their terms of service. If a platform promises, for instance, that it will moderate speech in a politically neutral way, a user could make the case that the platform violated its terms of service if it acted with political bias in her particular case.

For the vast majority of users, it is unclear whether there are damages from having a post fact-checked or removed. But for users who share in advertising revenue, the concrete injury from a moderation decision is more obvious. PragerU, for example, has (unsuccessfully) sued Google for being put in Restricted Mode on YouTube, which reduces its reach and advertising revenue. 

Even where there is a concrete injury that gets a case into court, that doesn’t necessarily mean there is a valid contract claim. In PragerU’s case against Google, a California court dismissed contract claims because the YouTube terms of service contract was written to allow the platform to retain discretion over what is published. Specifically, the court found that there can be no implied covenant of good faith and fair dealing where “YouTube reserves the right to remove Content without prior notice” and to “discontinue any aspect of the Service at any time.”

Breach-of-contract claims for moderation practices are highly dependent on what is actually promised in the terms of service. For instance, under Facebook’s TOS the company retains the right “to remove or restrict access to content that is in violation” of its community standards. Facebook does provide a process for users to request further review, but retains the right to remove content. The community standards also give Facebook broad discretion to determine, among other things, what counts as hate speech or false news. It is exceedingly unlikely that a court would ever have a basis to find a contract violation by Facebook if the company can reasonably point to a user’s violation of its terms of service. 

For example, in Ebeid v. Facebook, the U.S. Northern District of California dismissed fraud and breach of contract claims, finding the plaintiff failed to allege what contractual provision Facebook breached, that Facebook retained discretion over what ads would be posted, and that the plaintiff suffered no damages because no money was taken to be spent on the ads. The court also dismissed an implied covenant of good faith and fair dealing claim because Facebook retained the right to “remove or disapprove any post or ad at Facebook’s sole discretion.”

While the conservative critique has been that social media platforms do too much moderation—in the form of politically biased removals, fact-checking, and demonetization—others believe platforms do far too little to restrain bad conduct by users. But as long as social media platforms retain editorial discretion in their terms of service and make no other promises that can be relied upon by their users, there is little basis for a contract claim. 

The First Amendment protects the moderation policies of social media platforms, and there is no way around this

With no reasonable cause of action for political bias under the law, conservatives dangle the threat of making changes to Section 230 immunity that could prove costly to the social media platforms in order to extract concessions from the platforms to alter their practices.

This is why there are no serious efforts to actually repeal Section 230, as President Trump has asked for repeatedly. Instead, several bills propose to amend Section 230, while a rulemaking by the FCC seeks to clarify its meaning. 

But none of these proposed bills would directly affect platforms’ ability to make “biased” moderation decisions. Put simply: the First Amendment protects social media platforms’ editorial discretion. They may set rules to use their platforms, just as any private person may set rules for their own property. If I kick someone off my property for saying racist things, the First Amendment (as well as regular property law) protects my right to do so. Only under extremely limited circumstances can the government change this baseline rule and survive constitutional scrutiny.

Social media platforms’ right to editorial discretion is the same as that enjoyed by newspapers. In Miami Herald Publishing Co. v. Tornillo, the Supreme Court found:

The choice of material to go into a newspaper, and the decisions made as to limitations on the size and content of the paper, and treatment of public issues and public officials—whether fair or unfair—constitute the exercise of editorial control and judgment. It has yet to be demonstrated how governmental regulation of this crucial process can be exercised consistent with First Amendment guarantees of a free press as they have evolved to this time. 

Social media platforms, just like any other property owner, have the right to determine what they want displayed on their property. In other words, Facebook, Google, and Twitter have the right to moderate content on news feeds, search results, and timelines. The attempted constitutional end-run—threatening to remove immunity for third-party content unrelated to political bias, like defamation and other tortious acts, unless social media platforms give up their right to editorial discretion over political speech—is just as unconstitutional as directly imposing “fairness” requirements on social media platforms.

The Supreme Court has held that Congress may not leverage a government benefit to regulate a speech interest outside of the benefit’s scope. This is called the unconstitutional conditions doctrine. It basically delineates the level of regulation the government can undertake through subsidizing behavior. The government can’t condition a government benefit on giving up editorial discretion over political speech.

The point of Section 230 immunity is to remedy the moderator’s dilemma set up by Stratton Oakmont v. Prodigy, which held that if a platform chose to moderate third-party speech at all, it would be liable for what was not removed. Section 230 is not about compelling political neutrality on platforms, because such a mandate could not be consistent with the First Amendment. Civil immunity for third-party speech online is an important benefit for social media platforms because it holds they are not liable for the acts of third parties, with limited exceptions. Without it, platforms would restrict opportunities for third parties to post out of fear of liability.

In sum, the government may not condition enjoyment of a government benefit upon giving up a constitutionally protected right. Section 230 immunity is a clear government benefit. The right to editorial discretion is clearly protected by the First Amendment. Because the entire point of conservative Section 230 reform efforts is to compel social media platforms to carry speech they otherwise desire to remove, it fails this basic test.

Conclusion

Fundamentally, the conservative push to reform Section 230 in response to the alleged anti-conservative bias of major social media platforms is not about policy. Really, it’s about waging a culture war against the perceived “liberal elites” from Silicon Valley, just as there is an ongoing culture war against perceived “liberal elites” in the mainstream media, Hollywood, and academia. But fighting this culture war is not worth giving up conservative principles of free speech, limited government, and free markets.

Over at the Federalist Society’s blog, there has been an ongoing debate about what to do about Section 230. While there has long been variety in what we call conservatism in the United States, the most prominent strains have agreed on at least the following: constitutionally limited government, free markets, and prudence in policy-making. You would think all of these values would be important in the Section 230 debate. It seems, however, that some are willing to throw these principles away in pursuit of a temporary political victory over perceived “Big Tech censorship.”

Constitutionally Limited Government: Congress Shall Make No Law

The First Amendment of the United States Constitution states: “Congress shall make no law… abridging the freedom of speech.” Originalists on the Supreme Court have noted that this makes clear that the Constitution protects against state action, not private action. In other words, the Constitution protects a negative conception of free speech, not a positive conception.

Despite this, some conservatives believe that Section 230 should be about promoting First Amendment values by mandating that private entities be held to the same standards as the government.

For instance, in his “Big Tech and the Whole First Amendment,” Craig Parshall of the American Center for Law and Justice (ACLJ) stated:

What better example of objective free speech standards could we have than those First Amendment principles decided by justices appointed by an elected president and confirmed by elected members of the Senate, applying the ideals laid down by our Founders? I will take those over the preferences of brilliant computer engineers any day.

In other words, he thinks Section 230 should be amended to only give Big Tech the “subsidy” of immunity if it commits to a First Amendment-like editorial regime. To defend the constitutionality of such “restrictions on Big Tech”, he points to the Turner intermediate scrutiny standard, in which the Supreme Court upheld must-carry provisions against cable networks. In particular, Parshall latches on to the “bottleneck monopoly” language from the case to argue that Big Tech is similarly situated to cable providers at the time of the case.

Turner, however, turned more on the “special characteristics of the cable medium” that gave it the bottleneck power than on market power itself. As stated by the Supreme Court:

When an individual subscribes to cable, the physical connection between the television set and the cable network gives the cable operator bottleneck, or gatekeeper, control over most (if not all) of the television programming that is channeled into the subscriber’s home. Hence, simply by virtue of its ownership of the essential pathway for cable speech, a cable operator can prevent its subscribers from obtaining access to programming it chooses to exclude. A cable operator, unlike speakers in other media, can thus silence the voice of competing speakers with a mere flick of the switch.

Turner v. FCC, 512 U.S. 622, 656 (1994).

None of the Big Tech companies has the comparable ability to silence competing speakers with a flick of the switch. In fact, the relationship goes the other way on the Internet. Users can (and do) use multiple Big Tech companies’ services, as well as those of competitors which are not quite as big. Users are the ones who can switch with a click or a swipe. There is no basis for treating Big Tech companies any differently than other First Amendment speakers.

Like newspapers, Big Tech companies must use their editorial discretion to determine what is displayed and where. Just like those newspapers, Big Tech has the First Amendment right to editorial discretion. This, not Section 230, is the bedrock law that gives Big Tech companies the right to remove content.

Thus, when Rachel Bovard of the Internet Accountability Project argues that the FCC should remove the ability of tech platforms to engage in viewpoint discrimination, she makes a serious error in arguing it is Section 230 that gives them the right to remove content.

Immediately upon noting that the NTIA petition seeks clarification on the relationship between (c)(1) and (c)(2), Bovard moves right to concern over the removal of content. “Unfortunately, embedded in that section [(c)(2)] is a catch-all phrase, ‘otherwise objectionable,’ that gives tech platforms discretion to censor anything that they deem ‘otherwise objectionable.’ Such broad language lends itself in practice to arbitrariness.” 

In order for CDA 230 to “give[] tech platforms discretion to censor,” platforms would have to lack that discretion absent CDA 230. Bovard totally misses the point of the First Amendment argument, stating:

Yet DC’s tech establishment frequently rejects this argument, choosing instead to focus on the First Amendment right of corporations to suppress whatever content they so choose, never acknowledging that these choices, when made at scale, have enormous ramifications. . . . 

But this argument intentionally sidesteps the fact that Sec. 230 is not required by the First Amendment, and that its application to tech platforms privileges their First Amendment behavior in a unique way among other kinds of media corporations. Newspapers also have a First Amendment right to publish what they choose—but they are subject to defamation and libel laws for content they write, or merely publish. Media companies also make First Amendment decisions subject to a thicket of laws and regulations that do not similarly encumber tech platforms.

There is the merest kernel of truth in the lines quoted above. Newspapers are indeed subject to defamation and libel laws for what they publish. But, as should be obvious, liability for publication entails actually publishing something. And what some conservatives are concerned about is platforms’ ability to not publish something: to take down conservative content.

It might be simpler if the First Amendment treated published speech and unpublished speech the same way. But it doesn’t. One can be liable for what one speaks, writes, or publishes on behalf of others. Indeed, even with the full protection of the First Amendment, there is no question that newspapers can be held responsible for delicts caused by content they publish. But no newspaper has ever been held responsible for anything they didn’t publish.

Free Markets: Competition as the Bulwark Against Abuses, not Regulation

Conservatives have long believed in the importance of property rights, exchange, and the power of the free market to promote economic growth. Competition is seen as the protector of the consumer, not big government regulators. In the latter half of the twentieth century into the twenty-first century, conservatives have fought for capitalism over socialism, free markets over regulation, and competition over cronyism. But in the name of combating anti-conservative bias online, they are willing to throw these principles away.

The bedrock belief in the right of property owners to decide the terms of how they want to engage with others is fundamental to American conservatism. As stated by none other than Bovard (along with co-author Jim DeMint in their book Conservative: Knowing What to Keep):

Capitalism is nothing more or less than the extension of individual freedom from the political and cultural realms to the economy. Just as government isn’t supposed to tell you how to pray, or what to think, or what sports teams to follow or books to read, it’s not supposed to tell you what to do with your own money and property.

Conservatives normally believe that it is the free choices of consumers and producers in the marketplace that maximize consumer welfare, rather than the choices of politicians and bureaucrats. Competition, in other words, is what protects us from abuses in the marketplace. Again, as Bovard and DeMint rightly put it:

Under the free enterprise system, money is not redistributed by a central government bureau. It goes wherever people see value. Those who create value are rewarded which then signals to the rest of the economy to up their game. It’s continuous democracy.

To get around this, both Parshall and Bovard make much of the “market dominance” of tech platforms. The essays take the position that tech platforms have nearly unassailable monopoly power which makes them unaccountable. Bovard claims that “mega-corporations have as much power as the government itself—and in some ways, more power, because theirs is unchecked and unaccountable.” Parshall even connects this to antitrust law, stating:  

This brings us to another kind of innovation, one that’s hidden from the public view. It has to do with how Big Tech companies use both algorithms plus human review during content moderation. This review process has resulted in the targeting, suppression, or down-ranking of primarily conservative content. As such, this process, should it continue, should be considered a kind of suppressive “innovation” in a quasi-antitrust analysis.

How the process harms “consumer welfare” is obvious. A more competitive market could produce social media platforms designing more innovational content moderation systems that honor traditional free speech and First Amendment norms while still offering features and connectivity akin to the huge players.

Antitrust law, in theory, would be a good way to handle issues of market power and consumer harm that results from non-price effects. But it is difficult to see how antitrust could handle the issue of political bias well:

Just as with privacy and other product qualities, the analysis becomes increasingly complex first when tradeoffs between price and quality are introduced, and then even more so when tradeoffs between what different consumer groups perceive as quality is added. In fact, it is more complex than privacy. All but the most exhibitionistic would prefer more to less privacy, all other things being equal. But with political media consumption, most would prefer to have more of what they want to read available, even if it comes at the expense of what others may want. There is no easy way to understand what consumer welfare means in a situation where one group’s preferences need to come at the expense of another’s in moderation decisions.

Neither antitrust nor quasi-antitrust regimes are well-suited to dealing with the perceived harm of anti-conservative bias. However unfulfilling this is to some conservatives, competition and choice are better answers to perceived political bias than the heavy hand of government. 

Prudence: Awareness of Unintended Consequences

Another bedrock principle of conservatism is to be aware of unintended consequences when making changes to long-standing laws and policies. In regulatory matters, cost-benefit analysis is employed to evaluate whether policies are improving societal outcomes. Using economic thinking to understand the likely responses to changes in regulation is fundamental to American conservatism. Or, as Bovard and DeMint’s book title suggests, conservatism is about knowing what to keep.

Bovard has argued that since conservatism is a set of principles, not a dogmatic ideology, it can be in favor of fighting against the collectivism of Big Tech companies imposing their political vision upon the world. Conservatism, in this Kirkian sense, doesn’t require particular policy solutions. But this analysis misses what has worked about Section 230 and how the very tech platforms she decries have greatly benefited society. Prudence means understanding what has worked and only changing what has worked in a way that will improve upon it.

The benefits of Section 230 immunity in promoting platforms for third-party speech are clear. It is not an overstatement to say that Section 230 contains “The Twenty-Six Words that Created the Internet.” It is important to note that Section 230 is not only available to Big Tech companies. It is available to all online platforms that host third-party speech. Any reform efforts at Section 230 must know what to keep.

In a sense, subsection (c)(1) of Section 230 does, indeed, provide greater protection for published content online than the First Amendment on its own would offer: it extends the First Amendment’s permissible scope of published content for which an online service cannot be held liable to include otherwise actionable third-party content.

But let’s be clear about the extent of this protection. It doesn’t protect anything a platform itself publishes, or even anything in which it has a significant hand in producing. Why don’t offline newspapers enjoy this “handout” (though the online versions clearly do for comments)? Because they don’t need it, and because — yes, it’s true — it comes at a cost. How much third-party content would newspapers publish without significant input from the paper itself if only they were freed from the risk of liability for such content? None? Not much? The New York Times didn’t build and sustain its reputation on the slapdash publication of unedited ramblings by random commentators. But what about classifieds? Sure. There would be more classified ads, presumably. More to the point, newspapers would exert far less oversight over the classified ads, saving themselves the expense of moderating this one, small corner of their output.

There is a cost to traditional newspapers from being denied the extended protections of Section 230. But the effect is less third-party content in the parts of the paper over which they didn’t wish to exert the same level of editorial control. If Section 230 is a “subsidy,” as critics put it, then what it is subsidizing is the hosting of third-party speech.

The Internet would look vastly different if it were just the online reproduction of the offline world. If tech platforms were responsible for all third-party speech to the degree that newspapers are for op-eds, then they would likely moderate it to the same degree, making sure, before publishing, that there was nothing that could expose them to liability. This means there would be far less third-party speech on the Internet.

In fact, it could be argued that it is smaller platforms that would be most affected by the repeal of Section 230 immunity. Without it, it is likely that only the biggest tech platforms would have the necessary resources to dedicate to content moderation in order to avoid liability.

Proposed Section 230 reforms will likely have unintended consequences in reducing third-party speech across the board, including conservative speech. For instance, a few bills have proposed only allowing moderation for reasons defined by statute if the platform has an “objectively reasonable belief” that the speech fits under such categories. This would likely open up tech platforms to lawsuits over the meaning of “objectively reasonable belief” that could deter them from wanting to host third-party speech altogether. Similarly, lawsuits for “selective enforcement” of a tech platform’s terms of service could lead them to either host less speech or change their terms of service.

This could actually exacerbate the issue of political bias. Allegedly anti-conservative tech platforms could respond to a “good faith” requirement in enforcing their terms of service by becoming explicitly biased. If the terms of service of a tech platform state grounds that would exclude conservative speech, a requirement of “good faith” enforcement of those terms of service will do nothing to prevent the bias.

Conclusion

Conservatives would do well to return to their first principles in the Section 230 debate. The Constitution’s First Amendment, respect for free markets and property rights, and appreciation for unintended consequences in changing tech platform incentives all caution against the current proposals to condition Section 230 immunity on platforms giving up editorial discretion. Whether or not tech platforms engage in anti-conservative bias, there’s nothing conservative about abdicating these principles for the sake of political expediency.

Twitter’s decision to begin fact-checking the President’s tweets caused a long-simmering distrust between conservatives and online platforms to boil over late last month. This has led some conservatives to ask whether Section 230, the ‘safe harbour’ law that protects online platforms from certain liability stemming from content posted on their websites by users, is allowing online platforms to unfairly target conservative speech. 

In response to Twitter’s decision, along with an Executive Order released by the President that attacked Section 230, Senator Josh Hawley (R – MO) offered a new bill targeting online platforms, the “Limiting Section 230 Immunity to Good Samaritans Act”. This would require online platforms to engage in “good faith” moderation according to clearly stated terms of service – in effect, restricting Section 230’s protections to online platforms deemed to have done enough to moderate content ‘fairly’.  

While seemingly a sensible standard, this approach, if enacted, would violate the First Amendment as an unconstitutional condition on a government benefit, thereby undermining long-standing conservative principles and the ability of conservatives to be treated fairly online.

There is established legal precedent that Congress may not grant benefits on conditions that violate Constitutionally-protected rights. In Rumsfeld v. FAIR, the Supreme Court stated that a law that withheld funds from universities that did not allow military recruiters on campus would be unconstitutional if it constrained those universities’ First Amendment rights to free speech. Since the First Amendment protects the right to editorial discretion, including the right of online platforms to make their own decisions on moderation, Congress may not condition Section 230 immunity on platforms taking a certain editorial stance it has dictated. 

Apparently aware of this precedent, the bill attempts to circumvent the obstacle by conditioning Section 230 immunity on matters nominally unrelated to anti-conservative bias in moderation. Specifically, Senator Hawley’s bill would condition platforms’ immunity on their having terms of service for content moderation, and would subject them to lawsuits if they do not act in “good faith” in enforcing those terms.

It’s not even clear that the bill would do what Senator Hawley wants it to. The “good faith” standard only appears to apply to the enforcement of an online platform’s terms of service. It can’t, under the First Amendment, actually dictate what those terms of service say. So an online platform could, in theory, explicitly state in its terms of service that it considers some forms of conservative speech to be “hate speech” it will not allow.

Mandating terms of service on content moderation is arguably akin to disclosures like labelling requirements, because it makes clear to platforms’ customers what they’re getting. There are, however, some limitations under the commercial speech doctrine as to what government can require. Under National Institute of Family & Life Advocates v. Becerra, a requirement for terms of service outlining content moderation policies would be upheld unless “unjustified or unduly burdensome.” A disclosure mandate alone would not be unconstitutional. 

But it is clear from the statutory definition of “good faith” that Senator Hawley is trying to overwhelm online platforms with lawsuits on the grounds that they have enforced these rules selectively and therefore not in “good faith”.

These “selective enforcement” lawsuits would make it practically impossible for platforms to moderate content at all, because they would open them up to being sued for any moderation, including moderation completely unrelated to any purported anti-conservative bias. Any time a YouTuber was aggrieved about a video being pulled down as too sexually explicit, for example, they could file suit and demand that YouTube release information on whether all other similarly situated users were treated the same way. Any time a post was flagged on Facebook — for engaging in online bullying or for spreading false information, say — it could lead to the same result.

This would end up requiring courts to act as the arbiter of decency and truth in order to even determine whether online platforms are “selectively enforcing” their terms of service.

Threatening liability for all third-party content is designed to force online platforms to give up moderating content on a perceived political basis. The result will be far less content moderation in a whole range of other areas. It is precisely this scenario that Section 230 was designed to prevent, in order to encourage platforms to moderate things like pornography that would otherwise proliferate on their sites, without exposing themselves to endless legal challenge.

It is likely that this would be unconstitutional as well. Forcing online platforms to choose between exercising their First Amendment rights to editorial discretion and retaining the benefits of Section 230 is exactly what the “unconstitutional conditions” jurisprudence is about. 

This is why conservatives have long argued the government has no business compelling speech. They opposed the “fairness doctrine,” which required that radio stations provide a “balanced discussion” and in practice allowed courts and federal agencies to determine content, until it was repealed under President Reagan. Later, President Bush appointee and then-FTC Chairman Tim Muris rejected a complaint against Fox News for its “Fair and Balanced” slogan, stating:

I am not aware of any instance in which the Federal Trade Commission has investigated the slogan of a news organization. There is no way to evaluate this petition without evaluating the content of the news at issue. That is a task the First Amendment leaves to the American people, not a government agency.

And recently, conservatives argued that businesses like Masterpiece Cakeshop should not be compelled to speak against their will. All of these cases demonstrate that once the state starts trying to stipulate what views can and cannot be broadcast by private organisations, conservatives will be the ones who suffer.

Senator Hawley’s bill fails to acknowledge this. Worse, it fails to live up to the Constitution, and would trample over the rights to freedom of speech that it protects. Conservatives should reject it.

In the wake of the launch of Facebook’s content oversight board, Republican Senator Josh Hawley and FCC Commissioner Brendan Carr, among others, have taken to Twitter to levy criticisms at the firm and, in the process, demonstrate just how far the Right has strayed from its first principles around free speech and private property. For his part, Commissioner Carr’s thread makes the case that the members of the board are highly partisan, mostly left-wing, and can’t be trusted with the responsibility of oversight. Senator Hawley, meanwhile, took the approach that the Board’s very existence is just further evidence of the need to break Facebook up.

Both Hawley and Carr have been lauded in right-wing circles, but in reality their positions contradict conservative notions of the free speech and private property protections given by the First Amendment.

This blog post serves as a sequel to a post I wrote last year here at TOTM explaining how “There’s nothing ‘conservative’ about Trump’s views on free speech and the regulation of social media.” As I wrote there:

I have noted in several places before that there is a conflict of visions when it comes to whether the First Amendment protects a negative or positive conception of free speech. For those unfamiliar with the distinction: it comes from philosopher Isaiah Berlin, who identified negative liberty as freedom from external interference, and positive liberty as freedom to do something, including having the power and resources necessary to do that thing. Discussions of the First Amendment’s protection of free speech often elide over this distinction.

With respect to speech, the negative conception of liberty recognizes that individual property owners can control what is said on their property, for example. To force property owners to allow speakers/speech on their property that they don’t desire would actually be a violation of their liberty — what the Supreme Court calls “compelled speech.” The First Amendment, consistent with this view, generally protects speech from government interference (with very few, narrow exceptions), while allowing private regulation of speech (again, with very few, narrow exceptions).

Commissioner Carr’s complaint and Senator Hawley’s antitrust approach of breaking up Facebook have much more in common with the views traditionally held by left-wing Democrats on the need for the government to regulate private actors in order to promote speech interests. Originalists and law & economics scholars, on the other hand, have consistently taken the opposite point of view that the First Amendment protects against government infringement of speech interests, including protecting the right to editorial discretion. While there is clearly a conflict of visions in First Amendment jurisprudence, the conservative (and, in my view, correct) point of view should not be jettisoned by Republicans to achieve short-term political gains.

The First Amendment restricts government action, not private action

The First Amendment, by its very text, only applies to government action: “Congress shall make no law . . . abridging the freedom of speech.” This applies to the “State[s]” through the Fourteenth Amendment. It is extremely difficult to find any textual hook to say the First Amendment protects against private action, like that of Facebook.

Originalists have consistently agreed. Most recently, in Manhattan Community Access Corp. v. Halleck, Justice Kavanaugh—on behalf of the conservative bloc and the Court—wrote:

Ratified in 1791, the First Amendment provides in relevant part that “Congress shall make no law . . . abridging the freedom of speech.” Ratified in 1868, the Fourteenth Amendment makes the First Amendment’s Free Speech Clause applicable against the States: “No State shall make or enforce any law which shall abridge the privileges or immunities of citizens of the United States; nor shall any State deprive any person of life, liberty, or property, without due process of law . . . .” §1. The text and original meaning of those Amendments, as well as this Court’s longstanding precedents, establish that the Free Speech Clause prohibits only governmental abridgment of speech. The Free Speech Clause does not prohibit private abridgment of speech… In accord with the text and structure of the Constitution, this Court’s state-action doctrine distinguishes the government from individuals and private entities. By enforcing that constitutional boundary between the governmental and the private, the state-action doctrine protects a robust sphere of individual liberty. (Emphasis added).

This was true at the adoption of the First Amendment and remains true today in a high-tech world. Federal district courts have consistently dismissed First Amendment lawsuits against Facebook on the grounds there is no state action. 

For instance, in Nyabwa v. Facebook, the plaintiff initiated a civil rights lawsuit against Facebook for restricting his use of the platform. The U.S. District Court for the Southern District of Texas dismissed the case, noting:

Because the First Amendment governs only governmental restrictions on speech, Nyabwa has not stated a cause of action against FaceBook… Like his free speech claims, Nyabwa’s claims for violation of his right of association and violation of his due process rights are claims that may be vindicated against governmental actors pursuant to § 1983, but not a private entity such as FaceBook.

Similarly, in Young v. Facebook, the U.S. District Court for the Northern District of California rejected a claim that Facebook violated the First Amendment by deactivating the plaintiff’s Facebook page. The court declined to subject Facebook to the First Amendment analysis, stating that “because Young has not alleged any action under color of state law, she fails to state a claim under § 1983.”

The First Amendment restricts antitrust actions against Facebook, not Facebook’s editorial discretion over its platform

Far from restricting Facebook, the First Amendment actually restricts government actions aimed at platforms like Facebook when they engage in editorial discretion by moderating content. If an antitrust plaintiff were to act on the impulse to “break up” Facebook because of alleged political bias in its editorial discretion, the lawsuit would run headlong into the First Amendment’s protections.

There is no basis for concluding online platforms do not have editorial discretion under the law. In fact, the position of Facebook here is very similar to the newspaper in Miami Herald Publishing Co. v. Tornillo, in which the Supreme Court considered a state law giving candidates for public office a right to reply in newspapers to editorials written about them. The Florida Supreme Court upheld the statute, finding it furthered the “broad societal interest in the free flow of information to the public.” The U.S. Supreme Court, despite noting the level of concentration in the newspaper industry, nonetheless reversed. The Court explicitly found the newspaper had a First Amendment right to editorial discretion:

The choice of material to go into a newspaper, and the decisions made as to limitations on the size and content of the paper, and treatment of public issues and public officials — whether fair or unfair — constitute the exercise of editorial control and judgment. It has yet to be demonstrated how governmental regulation of this crucial process can be exercised consistent with First Amendment guarantees of a free press as they have evolved to this time. 

Online platforms have the same First Amendment protections for editorial discretion. For instance, in both Search King v. Google and Langdon v. Google, two different federal district courts ruled Google’s search results are subject to First Amendment protections, both citing Tornillo.

In Zhang v. Baidu.com, another district court went so far as to grant a Chinese search engine the right to editorial discretion in limiting access to information about democracy movements in China. The court found that the search engine “inevitably make[s] editorial judgments about what information (or kinds of information) to include in the results and how and where to display that information.” Much like the search engine in Zhang, Facebook is clearly making editorial judgments about what information shows up in its newsfeed and where to display it.

None of this changes because the generally applicable law is antitrust rather than some other form of regulation. For instance, in Tornillo, the Supreme Court took pains to distinguish the case from an earlier antitrust case against newspapers, Associated Press v. United States, which found that there was no broad exemption from antitrust under the First Amendment.

The Court foresaw the problems relating to government-enforced access as early as its decision in Associated Press v. United States, supra. There it carefully contrasted the private “compulsion to print” called for by the Association’s bylaws with the provisions of the District Court decree against appellants which “does not compel AP or its members to permit publication of anything which their `reason’ tells them should not be published.”

In other words, Tornillo and Associated Press establish that the government may not compel speech through regulation, including through an antitrust remedy.

Once it is conceded that there is a speech interest here, the government must justify the use of antitrust law to compel Facebook to display the speech of users in the newsfeeds of others under the strict scrutiny test of the First Amendment. In other words, the use of antitrust law must be narrowly tailored to a compelling government interest. Even taking for granted that there may be a compelling government interest in facilitating a free and open platform (which is by no means certain), it is clear that this would not be narrowly tailored action. 

First, “breaking up” Facebook is clearly overbroad as compared to the goal of promoting free speech on the platform. There is no need to break it up just because it has an Oversight Board that engages in editorial responsibilities. There are many less restrictive means, including market competition, which has greatly expanded consumer choice for communications and connections. Second, antitrust does not really have a remedy for the free speech issues complained of here, as any remedy would require courts to engage in long-term oversight and to compel speech in a manner foreclosed by Associated Press.

Note that this makes good sense from a law & economics perspective. Platforms like Facebook should be free to regulate the speech on their platforms as they see fit, and consumers are free to decide which platforms they wish to use based upon how that discretion is exercised. While there are certainly network effects to social media, the plethora of options currently available with low switching costs suggests that there is no basis for antitrust action against Facebook on the theory that consumers are unable to speak. In other words, the least restrictive means test of the First Amendment is best fulfilled by market competition in this case.

If there were a basis for antitrust intervention against Facebook, either through merger review or as a standalone monopoly claim, the underlying issue would be harm to competition. While this would have implications for speech concerns (which may be incorporated into an analysis through quality-adjusted price), it is inconceivable how an antitrust remedy could be formed on speech issues consistent with the First Amendment. 

Conclusion

Despite now well-worn complaints by so-called conservatives in and out of the government about the baneful influence of Facebook and other Big Tech companies, the First Amendment forecloses government actions to violate the editorial discretion of these companies. Even if Commissioner Carr is right, this latest call for antitrust enforcement against Facebook by Senator Hawley should be rejected for principled conservative reasons.

On Monday, July 22, ICLE filed a regulatory comment arguing that the leased access requirements enforced by the FCC are unconstitutional compelled speech in violation of the First Amendment.

When the DC Circuit Court of Appeals last reviewed the constitutionality of leased access rules in Time Warner v. FCC, cable had so-called “bottleneck power” over the marketplace for video programming and, just a few years prior, the Supreme Court had subjected other programming regulations to intermediate scrutiny in Turner v. FCC.

Intermediate scrutiny is a lower standard than the strict scrutiny usually required for First Amendment claims. Strict scrutiny requires a regulation of speech to be narrowly tailored to a compelling state interest. Intermediate scrutiny only requires a regulation to further an important or substantial governmental interest unrelated to the suppression of free expression, and the incidental restriction on speech must be no greater than is essential to the furtherance of that interest.

But, since the decisions in Time Warner and Turner, there have been dramatic changes in the video marketplace (including the rise of the Internet!) and cable no longer has anything like “bottleneck power.” Independent programmers have many distribution options to get content to consumers. Since the justification for intermediate scrutiny is no longer an accurate depiction of the competitive marketplace, the leased access rules should be subject to strict scrutiny.

And, if subject to strict scrutiny, the leased access rules would not survive judicial review. Even accepting that there is a compelling governmental interest, the rules are not narrowly tailored to that end. Not only are they essentially obsolete in the highly competitive video distribution marketplace, but antitrust law would be better suited to handle any anticompetitive abuses of market power by cable operators. There is no basis for compelling the cable operators to lease some of their channels to unaffiliated programmers.

Our full comments are here.

[Note: A group of 50 academics and 27 organizations, including both myself and ICLE, recently released a statement of principles for lawmakers to consider in discussions of Section 230.]

In a remarkable ruling issued earlier this month, the Third Circuit Court of Appeals held in Oberdorf v. Amazon that, under Pennsylvania products liability law, Amazon could be found liable for a third-party vendor’s sale of a defective product via Amazon Marketplace. This ruling comes in the context of Section 230 of the Communications Decency Act, which is broadly understood as immunizing platforms against liability for harmful conduct posted to their platforms by third parties (Section 230 purists may object to my use of “platform” as an approximation for the statute’s term, “interactive computer service”; I address this concern by acknowledging it with this parenthetical). This immunity has long been a bedrock principle of Internet law; it has also long been controversial; and those controversies are very much at the fore of discussion today.

The response to the opinion has been mixed, to say the least. Eric Goldman, for instance, has asked “are we at the end of online marketplaces?,” suggesting that they “might in the future look like a quaint artifact of the early 21st century.” Kate Klonick, on the other hand, calls the opinion “a brilliant way of both holding tech responsible for harms they perpetuate & making sure we preserve free speech online.”

My own inclination is that both Eric and Kate overstate their respective positions – though neither without reason. The facts of Oberdorf cabin the effects of the holding both to Pennsylvania law and to situations where the platform cannot identify the seller. This suggests that the effects will be relatively limited. 

But, and what I explore in this post, the opinion does elucidate a particular and problematic feature of Section 230: that it can be used as a liability shield for harmful conduct. The judges in Oberdorf seem ill-inclined to extend Section 230’s protections to a platform that can easily be used by bad actors as a liability shield. Riffing on this concern, I argue below that Section 230 immunity should be proportional to platforms’ ability to reasonably identify speakers using their platforms to engage in harmful speech or conduct.

This idea is developed in more detail in the last section of this post – including responding to the obvious (and overwrought) objections to it. But first it offers some background on Section 230, the Oberdorf and related cases, the Third Circuit’s analysis in Oberdorf, and the recent debates about Section 230. 

Section 230

“Section 230” refers to a portion of the Communications Decency Act that was added to the Communications Act by the 1996 Telecommunications Act, codified at 47 U.S.C. 230. (NB: that’s a sentence that only a communications lawyer could love!) It is widely recognized as – and discussed even by those who disagree with this view as – having been critical to the growth of the modern Internet. As Jeff Kosseff labels it in his recent book, the key provision of section 230 comprises the “26 words that created the Internet.” That section, 230(c)(1), states that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” (For those not familiar with it, Kosseff’s book is worth a read – or for the Cliff’s Notes version see here, here, here, here, here, or here.)

Section 230 was enacted to do two things. First, section (c)(1) makes clear that platforms are not liable for user-generated content. In other words, if a user of Facebook, Amazon, the comments section of a Washington Post article, a restaurant review site, a blog that focuses on the knitting of cat-themed sweaters, or any other “interactive computer service,” posts something for which that user may face legal liability, the platform hosting that user’s speech does not face liability for that speech. 

And second, section (c)(2) makes clear that platforms are free to moderate content uploaded by their users, and that they face no liability for doing so. This section was added precisely to repudiate a case that had held that once a platform (in that case, Prodigy) decided to moderate user-generated content, it undertook an obligation to do so. That case meant that platforms faced a Hobson’s choice: either don’t moderate content and don’t risk liability, or moderate all content and face liability for failure to do so well. There was no middle ground: a platform couldn’t say, for instance, “this one post is particularly problematic, so we are going to take it down – but this doesn’t mean that we are going to pervasively moderate content.”

Together, these two provisions stand generally for the proposition that online platforms are not liable for content created by their users, but they are free to moderate that content without facing liability for doing so. It recognized, on the one hand, that it was impractical (i.e., the Internet economy could not function) to require that platforms moderate all user-generated content, so section (c)(1) says that they don’t need to; but, on the other hand, it recognizes that it is desirable for platforms to moderate problematic content to the best of their ability, so section (c)(2) says that they won’t be punished (i.e., lose the immunity granted by section (c)(1)) if they voluntarily elect to moderate content.

Section 230 is written in broad – and has been interpreted by the courts in even broader – terms. Section (c)(1) says that platforms cannot be held liable for the content generated by their users, full stop. The only exceptions are for content that violates intellectual property law (such as copyright) or federal criminal law. There is no “unless it is really bad” exception, or a “the platform may be liable if the user-generated content causes significant tangible harm” exception, or an “unless the platform knows about it” exception, or even an “unless the platform makes money off of and actively facilitates harmful content” exception. So long as the content is generated by the user (not by the platform itself), Section 230 shields the platform from liability.

Oberdorf v. Amazon

This background leads us to the Third Circuit’s opinion in Oberdorf v. Amazon. The opinion is remarkable because it is one of only a few cases in which a court has, despite Section 230, found a platform liable for the conduct of a third party facilitated through the use of that platform. 

Prior to the Third Circuit’s recent opinion, the best known previous case is the 9th Circuit’s Model Mayhem opinion. In that case, the court found that Model Mayhem, a website that helps match models with modeling jobs, had a duty to warn models about individuals who were known to be using the website to find women to sexually assault. 

It is worth spending another moment on the Model Mayhem opinion before returning to the Third Circuit’s Oberdorf opinion. The crux of the 9th Circuit’s opinion in the Model Mayhem case was that the state of Florida (where the assaults occurred) has a duty-to-warn law, which creates a duty between the platform and the user. This duty to warn was triggered by the case-specific fact that the platform had actual knowledge that two of its users were predatorily using the site to find women to assault. Once triggered, this duty to warn exists between the platform and the user. Because the platform faces liability directly for its failure to warn, it is not shielded by section 230 (which only shields the platform from liability for the conduct of the third parties using the platform to engage in harmful conduct). 

In its opinion, the Third Circuit offered a similar analysis – but in a much broader context. 

The Oberdorf case involves a defective dog leash sold to Ms. Oberdorf by a seller doing business as The Furry Gang on Amazon Marketplace. The leash malfunctioned, hitting Ms. Oberdorf in the face and causing permanent blindness in one eye. When she attempted to sue The Furry Gang, she discovered that they were no longer doing business on Amazon Marketplace – and that Amazon did not have sufficient information about their identity for Ms. Oberdorf to bring suit against them.

Undeterred, Ms. Oberdorf sued Amazon under Pennsylvania product liability law, arguing that Amazon was the seller of the defective leash, so was liable for her injuries. Part of Amazon’s defense was that the actual seller, The Furry Gang, was a user of their Marketplace platform – the sale resulted from the storefront generated by The Furry Gang and merely hosted by Amazon Marketplace. Under this theory, Section 230 would bar Amazon from liability for the sale that resulted from the seller’s user-generated storefront. 

The Third Circuit judges would have none of that argument. All three judges agreed that under Pennsylvania law, the products liability relationship existed between Ms. Oberdorf and Amazon, so Section 230 did not apply. The two-judge majority found Amazon liable to Ms. Oberdorf under this law – the dissenting judge would have found Amazon’s conduct insufficient as a basis for liability.

This opinion, in other words, follows in the footsteps of the Ninth Circuit’s Model Mayhem opinion in holding that state law creates a duty directly between the harmed user and the platform, and that that duty isn’t affected by Section 230. But Oberdorf is potentially much broader in impact than Model Mayhem. States are more likely to have broad product liability laws than duty-to-warn laws. Even more impactful, product liability laws are generally strict liability laws, whereas duty-to-warn laws are generally triggered by an actual knowledge requirement.

The Third Circuit’s Focus on Agency and Liability Shields

The understanding of Oberdorf described above is that it is the latest in a developing line of cases holding that claims based on state law duties that require platforms to protect users from third party harms can survive Section 230 defenses. 

But there is another, critical, issue in the background of the case that appears to have affected the court’s thinking – and that, I argue, should be a path forward for Section 230. The judges writing for the Third Circuit majority draw attention to

the extensive record evidence that Amazon fails to vet third-party vendors for amenability to legal process. The first factor [of analysis for application of the state’s products liability law] weighs in favor of strict liability not because The Furry Gang cannot be located and/or may be insolvent, but rather because Amazon enables third-party vendors such as The Furry Gang to structure and/or conceal themselves from liability altogether.

This is important for analysis under the Pennsylvania product liability law, which has a marketing chain provision that allows injured consumers to seek redress up the marketing chain if the direct seller of a defective product is insolvent or otherwise unavailable for suit. But the court’s language focuses on Amazon’s design of Marketplace and the ease with which Marketplace can be used by merchants as a liability shield. 

This focus is unsurprising: the law generally does not allow one party to shield another from liability without assuming liability for the shielded party’s conduct. Indeed, this is pretty basic vicarious liability, agency, first-year law school kind of stuff. It is unsurprising that judges would balk at an argument that Amazon could design its platform in a way that makes it impossible for harmed parties to sue a tortfeasor without Amazon in turn assuming liability for any potentially tortious conduct. 

Section 230 is having a bad day

As most who have read this far are almost certainly aware, Section 230 is a big, controversial, political mess right now. Politicians from Josh Hawley to Nancy Pelosi have suggested curtailing Section 230. President Trump just held his “Social Media Summit.” And countries around the world are imposing near-impossible obligations on platforms to remove or otherwise moderate potentially problematic content – obligations that are anathema to Section 230, yet that increasingly reflect and influence discussions in the United States.

To be clear, almost all of the ideas floating around about how to change Section 230 are bad. That is an understatement: they are potentially devastating to the Internet – both to the economic ecosystem and the social ecosystem that have developed and thrived largely because of Section 230.

To be clear, there is also a lot of really, disgustingly, problematic content online – and social media platforms, in particular, have facilitated a great deal of legitimately problematic conduct. But deputizing them to police that conduct and to make real-time decisions about speech that is impossible to evaluate in real time is not a solution to these problems. And to the extent that some platforms may be able to do these things, turning the novel capabilities of a few platforms into obligations for all would only serve to create entry barriers for smaller platforms and to stifle innovation.

This is why a group of 50 academics and 27 organizations released a statement of principles last week to inform lawmakers about key considerations to take into account when discussing how Section 230 may be changed. The purpose of these principles is to acknowledge that some change to Section 230 may be appropriate – may even be needed at this juncture – but that such changes should be modest and carefully considered, so as not to disrupt the vast benefits for society that Section 230 has made possible and remains necessary to sustain.

The Third Circuit offers a Third Way on 230 

The Third Circuit’s opinion offers a modest way that Section 230 could be changed – and, I would say, improved – to address some of the real harms that it enables without undermining the important purposes that it serves. To wit, Section 230’s immunity could be attenuated by an obligation to facilitate the identification of users on that platform, subject to legal process, in proportion to the size and resources available to the platform, the technological feasibility of such identification, the foreseeability of the platform being used to facilitate harmful speech or conduct, and the expected importance (as defined from a First Amendment perspective) of speech on that platform.

In other words, if there are readily available ways to establish some form of identity for users – for instance, by email addresses on widely used platforms, social media accounts, logs of IP addresses – and there is reason to expect that users of the platform could be subject to suit – for instance, because they’re engaged in commercial activities or the purpose of the platform is to provide a forum for speech that is likely to be legally actionable – then the platform needs to be able to provide reasonable information about speakers subject to legal action in order to avail itself of any Section 230 defense. Stated otherwise, platforms need to be able to reasonably comply with so-called unmasking subpoenas issued in the civil context to the extent such compliance is feasible for the platform’s size, sophistication, resources, &c.

An obligation such as this would have been at best meaningless and at worst devastating at the time Section 230 was adopted. But 25 years later, the Internet is a very different place. Most users have online accounts – email addresses, social media profiles, &c – that can serve as some form of online identification.

More important, we now have evidence of a growing range of harmful conduct and speech that can occur online, and of platforms that use Section 230 as a shield to protect those engaging in such speech or conduct from litigation. Such speakers are bad actors who are clearly abusing Section 230 to facilitate bad conduct. They should not be able to do so.

Many of the traditional proponents of Section 230 will argue that this idea is a non-starter. Two of the obvious objections are that it would place a disastrous burden on platforms, especially start-ups and smaller platforms, and that it would stifle socially valuable anonymous speech. Both are valid concerns, but both are also accommodated by this proposal.

The concern that modest user-identification requirements would be disastrous to platforms made a great deal of sense in the early years of the Internet, when both the law and technology around user identification were less developed. Today, there is a wide range of low-cost, off-the-shelf techniques to establish a user’s identity to some level of precision – from logging IP addresses, to requiring a valid email address with an established provider, to registration with an established social media identity, or even SMS authentication. None of these is perfect; they vary in the cost and sophistication required to implement them, and they offer a range of ease of identification.

The proposal offered here is not that platforms must be able to identify every speaker – it is better described as a requirement that they not deliberately act as liability shields. Its requirement is that platforms implement reasonable identity technology in proportion to their size, sophistication, and the likelihood of harmful speech on their platforms. For a small platform for exchanging bread recipes, a log of usernames and IP addresses would be fine. A large, well-resourced platform hosting commercial activity (such as Amazon Marketplace) may be expected to establish a verified identity for the merchants it hosts. A forum known for hosting hate speech would be expected to keep better identification records – it is entirely foreseeable that its users would be subject to legal action. A forum of support groups for marginalized and disadvantaged communities would face a lower obligation than a forum of similar size and sophistication known for hosting legally actionable speech.
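To make the proportionality idea more concrete, here is a minimal, purely illustrative sketch – not drawn from the proposal’s text or from any statute – of how such a tiered expectation might be expressed as a decision rule. The tier names, the 100,000-user threshold, and the platform attributes are all hypothetical assumptions chosen only for illustration.

```python
# A purely illustrative sketch of the proportional-identification idea described above.
# Nothing here comes from Section 230 or any bill; the tier names, the user-count
# threshold, and the platform attributes are hypothetical assumptions.

from dataclasses import dataclass
from enum import Enum


class IdTier(Enum):
    IP_LOG = "log usernames and IP addresses"
    VERIFIED_CONTACT = "require a verified email address or SMS-confirmed account"
    VERIFIED_IDENTITY = "establish a verified legal identity (e.g., for merchants)"


@dataclass
class Platform:
    monthly_users: int                 # rough proxy for size and resources
    hosts_commerce: bool               # e.g., a marketplace hosting third-party sellers
    harmful_speech_foreseeable: bool   # e.g., a forum known for legally actionable speech
    serves_vulnerable_community: bool  # e.g., support groups whose value rests on anonymity


def expected_id_tier(p: Platform) -> IdTier:
    """Map a platform's attributes to the identification effort the proposal might expect."""
    # Commercial activity makes suits against users (sellers) clearly foreseeable.
    if p.hosts_commerce:
        return IdTier.VERIFIED_IDENTITY
    # Larger platforms with foreseeable harmful speech would owe more than bare logs,
    # unless the forum's social value depends on protecting vulnerable speakers' anonymity.
    if (p.harmful_speech_foreseeable
            and p.monthly_users > 100_000
            and not p.serves_vulnerable_community):
        return IdTier.VERIFIED_CONTACT
    # Baseline for small, low-risk forums: ordinary username/IP logging.
    return IdTier.IP_LOG


if __name__ == "__main__":
    recipe_forum = Platform(5_000, False, False, False)
    marketplace = Platform(50_000_000, True, True, False)
    print(expected_id_tier(recipe_forum).value)   # -> log usernames and IP addresses
    print(expected_id_tier(marketplace).value)    # -> establish a verified legal identity ...
```

The point of the sketch is only that the expected identification effort scales with the platform’s size and the foreseeability of harm, not that any particular threshold or attribute is the right one.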

This proportionality approach also addresses the anonymous speech concern. Anonymous speech is often of great social and political value. But anonymity can also be used for – and, as contemporary online discussion makes amply clear, can bring out the worst of – speech that is socially and politically destructive. Tying Section 230’s immunity to the nature of speech on a platform gives platforms an incentive to moderate speech – to make sure that anonymous speech is used for its good purposes while working to prevent its use for its lesser purposes. This is in line with one of the defining goals of Section 230.

The challenge, of course, has been how to do this without exposing platforms to potentially crippling liability if they fail to effectively moderate speech. This is why Section 230 took the approach that it did, allowing but not requiring moderation. This proposal’s user-identification requirement shifts that balance from “allowing but not requiring” to “encouraging but not requiring.” Platforms are under no legal obligation to moderate speech, but if they elect not to, they need to make reasonable efforts to ensure that their users engaging in problematic speech can be identified by parties harmed by their speech or conduct. In an era in which sites like 8chan expressly decline to maintain user logs in order to shield those engaged in known harmful speech, and Amazon Marketplace allows sellers into the market who cannot be sued by injured consumers, this is a common-sense change to the law.

It would also likely have substantially the same effect as other proposals for Section 230 reform, but without the significant challenges those suggestions face. For instance, Danielle Citron & Ben Wittes have proposed that courts should give substantive meaning to Section 230’s “Good Samaritan” language in section (c)(2)’s subheading, or, in the alternative, that section (c)(1)’s immunity require that platforms “take[] reasonable steps to prevent unlawful uses of its services.” This approach is problematic on both First Amendment and process grounds, because it requires courts to evaluate the substantive content and speech decisions that platforms engage in. It effectively asks platforms to undertake the task of the courts in developing a (potentially platform-specific) law of content moderation – and threatens them with a loss of Section 230 immunity if they fail to do so effectively.

By contrast, this proposal would allow, and even encourage, platforms to engage in such moderation, but offers them a gentler, more binary, and procedurally focused safety valve to maintain their Section 230 immunity. If a user engages in harmful speech or conduct and the platform can assist plaintiffs and courts in bringing legal action against that user, then the “moderation” process occurs in the courts through ordinary civil litigation.

To be sure, there are still some uncomfortable and difficult substantive questions – has a platform implemented reasonable identification technologies, is the speech on the platform of the sort that would be viewed as requiring (or otherwise justifying protection of the speaker’s) anonymity, and the like. But these are questions of a type that courts are accustomed to, if somewhat uncomfortable with, addressing. They are, for instance, the sort of issues that courts address in the context of civil unmasking subpoenas.

This distinction is demonstrated in the comparison between Sections 230 and 512. Section 512 is an exception to 230 for copyrighted materials that was put into place by the 1998 Digital Millennium Copyright Act. It takes copyrighted materials outside of the scope of Section 230 and requires platforms to put in place a “notice and takedown” regime in order to be immunized for hosting copyrighted content uploaded by users. This regime has proved controversial, among other reasons, because it effectively requires platforms to act as courts in deciding whether a given piece of content is subject to a valid copyright claim. The Citron/Wittes proposal effectively subjects platforms to a similar requirement in order to maintain Section 230 immunity; the identity-technology proposal, on the other hand, offers an intermediate requirement.

Indeed, the principal effect of this intermediate requirement is to maintain the pre-platform status quo. IRL, if one person says or does something harmful to another person, their recourse is in court. This is true in public and in private; it’s true if the harmful speech occurs on the street, in a store, in a public building, or a private home. If Donny defames Peggy in Hank’s house, Peggy sues Donny in court; she doesn’t sue Hank, and she doesn’t sue Donny in the court of Hank. To the extent that we think of platforms as the fora where people interact online – as the “place” of the Internet – this proposal is intended to ensure that those engaging in harmful speech or conduct online can be hauled into court by the aggrieved parties, and to facilitate the continued development of platforms without disrupting the functioning of this system of adjudication.

Conclusion

Section 230 is, and has long been, the most important and one of the most controversial laws of the Internet. It is increasingly under attack today from a disparate range of voices across the political and geographic spectrum — voices that would overwhelmingly reject Section 230’s pro-innovation treatment of platforms and in its place attempt to co-opt those platforms as government-compelled (and, therefore, controlled) content moderators.

In light of these demands, academics and organizations that understand the importance of Section 230, but also recognize the increasing pressures to amend it, have recently released a statement of principles for legislators to consider as they think about changes to Section 230.

Into this fray, the Third Circuit’s opinion in Oberdorf offers a potential change: making Section 230’s immunity for platforms proportional to their ability to reasonably identify speakers that use the platform to engage in harmful speech or conduct. This would restore the status quo ante, under which intermediaries and agents cannot be used as litigation shields without themselves assuming responsibility for any harmful conduct. This shielding effect was not an intended goal of Section 230, and it has been the cause of Section 230’s worst abuses. It was tolerated at the time Section 230 was adopted because user-identity requirements such as the one proposed here would not then have been technologically reasonable. But technology has changed, and today these requirements would impose only a moderate burden on platforms.

The US Senate Subcommittee on Antitrust, Competition Policy, and Consumer Rights recently held hearings to see what, if anything, the U.S. might learn from the approaches of other countries regarding antitrust and consumer protection. US lawmakers would do well to be wary of examples from other jurisdictions, however, that are rooted in different legal and cultural traditions. Shortly before the hearing, for example, the Australian Competition and Consumer Commission (ACCC) announced that it was exploring broad new regulations, predicated on theoretical harms, that would threaten both consumer welfare and individuals’ rights to free expression in ways completely at odds with American norms.

The ACCC seeks vast discretion to shape the way that online platforms operate — a regulatory venture that threatens to undermine the value these companies provide to consumers. Even more troubling are its plans to regulate free expression on the Internet, which, if implemented in the US, would contravene Americans’ First Amendment guarantees of free speech.

The ACCC’s errors are fundamental, starting with the contradictory assertion that:

Australian law does not prohibit a business from possessing significant market power or using its efficiencies or skills to “out compete” its rivals. But when their dominant position is at risk of creating competitive or consumer harm, governments should stay ahead of the game and act to protect consumers and businesses through regulation.

The ACCC thus recognizes that businesses may work to beat out their rivals and thereby gain market share. However, this is immediately followed by the caveat that the state may prevent such activity when such market gains are merely “at risk” of coming at the expense of consumers or business rivals. In other words, the ACCC does not need to show that harm has been done, merely that it might take place — even if the products and services being provided otherwise benefit the public.

The ACCC report then uses this fundamental error as the basis for recommending content regulation of digital platforms like Facebook and Google (who have apparently been identified by Australia’s clairvoyant PreCrime Antitrust unit as being guilty of future violations). It argues that the lack of transparency and oversight in the algorithms these companies employ could result in a range of possible social and economic damages, despite the fact that consumers continue to rely on these products. These potential issues include prioritization of the content and products of the host company, under-serving of ads within their products, and creation of “filter bubbles” that conceal content from particular users thereby limiting their full range of choice.

The focus of these concerns is the kind and quality of information that users are receiving as a result of the “media market” that results from the “ranking and display of news and journalistic content.” As a remedy for its hypothesised concerns, the ACCC has proposed a new regulatory authority tasked with overseeing the operation of the platforms’ algorithms. The ACCC claims this would ensure that search and newsfeed results are balanced and of high quality. This policy would undermine consumer welfare in pursuit of remedying speculative harms.

Rather than the search results or news feeds being determined by the interaction between the algorithm and the user, the results would instead be altered to comply with criteria established by the ACCC. Yet this would substantially undermine the value of these services. The competitive differentiation between, say, Google and Bing lies in their unique, proprietary search algorithms. The ACCC’s intervention would necessarily remove some of this differentiation between online providers, notionally to improve the “quality” of results. But such second-guessing by regulators would quickly undermine the actual quality — and utility — of these services to users.

A second, and more troubling, prospect is the threat of censorship that emerges from this kind of regime. Any agency granted a mandate to undertake such algorithmic oversight, and to override or reconfigure the product of online services, thereby controls the content consumers may access. Such regulatory power thus affects not only what users can read, but what media outlets might be able to say in order to successfully offer curated content. This sort of control is deeply problematic, since users are no longer merely faced with a potential “filter bubble” based on their own preferences interacting with a single provider, but with a pervasive set of speech controls promulgated by the government. The history of such state censorship is one which has demonstrated strong harms to both social welfare and rule of law, and should not be emulated.

Undoubtedly antitrust and consumer protection laws should be continually reviewed and revised. However, if we wish to uphold the principles upon which the US was founded and continue to protect consumer welfare, the US should avoid following the path Australia proposes to take.

It is a bedrock principle underlying the First Amendment that the government may not penalize private speech merely because it disapproves of the message it conveys.

The Federal Circuit handed down a victory for free expression today — in the commercial context no less. At issue was the Lanham Act’s § 2(a) prohibition of trademark registrations that

[c]onsist[] of or comprise[] immoral, deceptive, or scandalous matter; or matter which may disparage or falsely suggest a connection with persons, living or dead, institutions, beliefs, or national symbols, or bring them into contempt, or disrepute.

The court, sitting en banc, held that the “disparaging” provision is an unconstitutional violation of free expression, and that trademarks will indeed be protected by the First Amendment. Although it declined to decide whether the other prohibitions actually violated the First Amendment, the opinion contained a very strong suggestion to future panels that its reasoning likely applies in those contexts as well.

In many respects the opinion was not all that surprising (particularly if you’ve read my thoughts on the subject here and here). However, given that it was a predecessor Court of Customs and Patent Appeals decision, In re McGinley, that once held that First Amendment concerns were not implicated at all by § 2(a) because “it is clear that the … refusal to register appellant’s mark does not affect his right to use it” — totally ignoring, of course, the chilling effects on speech — it was by no means certain that this case would be correctly decided.

Today’s holding vacated a decision from a three-judge panel that, earlier this year, upheld the ill-fated “disparaging” prohibition. From just a cursory reading of § 2(a), it should be a no-brainer that it clearly implicates the content of speech — if not a particular viewpoint — and should get at least some First Amendment scrutiny. However, the earlier three-judge opinion gave all of three paragraphs to this consideration — one of which was just a quotation from McGinley. There, the three-judge panel rather tersely concluded that the First Amendment argument was “foreclosed by our precedent.”

Thus it was with pleasure that I read the Federal Circuit as it today acknowledged that “[m]ore than thirty years have passed since the decision in McGinley, and in that time both the McGinley decision and our reliance on it have been widely criticized[.]” The core of the First Amendment analysis is fairly straightforward: barring “disparaging” marks from registration is neither content neutral nor viewpoint neutral, and is therefore subject to strict scrutiny (which it fails). The court notes that McGinley’s First Amendment analysis was “cursory” (to put it mildly), and was decided before a fully developed body of commercial speech doctrine had emerged. Overall, the opinion is a good example of subtle, probing First Amendment analysis, wherein the court really grasps that merely labeling speech as “commercial” does not somehow magically strip away any protected expressive content.

In fact, perhaps the most important and interesting material has to do with this commercial speech analysis. The court acknowledges that the government’s policy against “disparaging” marks is targeting the expressive aspects of trademarks and not the more easily regulable “transactional” aspects (such as product information, pricing, etc.) — to look at § 2(a) otherwise would not make sense, as the government is rather explicitly trying to stop certain messages because of their noncommercial aspects. And the court importantly acknowledges the Supreme Court’s admonition that “[a] consumer’s concern for the free flow of commercial speech often may be far keener than his concern for urgent political dialogue” (although I might go so far as to hazard a guess that commercial speech is more important than political speech, most of the time, to most people, but perhaps I am just cynical).

The upshot of the Federal Circuit’s new view of trademarks and “commercial speech” reinforces the notion that regulations and laws directed toward “commercial speech” need to be very narrowly focused on the actual “commercial” message — pricing, source, etc. — and cannot veer into controlling the “expressive” aspects without justification under strict scrutiny. Although there is nothing terribly new or shocking here, the opinion ties together a variety of the commercial speech doctrines, gives much-needed clarity to trademark registration, and reaffirms a sensible view of commercial speech law.

And, although I may be reading too deeply based on my own preferences, I think the opinion quietly stakes out a useful position for commercial speech cases going forward—at least to a speech maximalist like myself. In particular, it explicitly relies upon the “unconstitutional conditions” doctrine for the proposition that the benefits of government programs cannot be granted on the condition that a party engage only in “good” or “approved” commercial speech. As the world becomes increasingly interested in hate speech regulation, and our college campuses more interested in preparing a generation of “safe spacers” than of critically thinking adults, this will undoubtedly become an important arrow in a speech defender’s quiver.

Last July, the Eastern District of Virginia upheld the cancellation of various trademarks of the Washington Redskins on the grounds that the marks were disparaging to Native Americans. I am neither a fan of football nor of offensive names for sports teams; what I am is a fan of free speech. Although the Redskins may be well advised to change their team name, interfering with both the team’s right to free speech and its property right in the registered mark is the wrong way, both legally and in principle, to achieve socially desirable ends.

Various theories have been advanced, but the really interesting part of the dispute (a topic on which I published a paper this year) is the likelihood that the Lanham Act’s prohibition of immoral, scandalous, or disparaging marks runs afoul of the First Amendment. I was cheered to see this week that the First Amendment Lawyers Association filed an amicus brief largely along the lines of my paper. However, there are a couple of points that I still feel deserve more attention when thinking about § 2(a) (the Lanham Act’s so-called “morality clauses”).

Trademarks Are Not License Plates

The district court tried to sidestep the First Amendment issue by declaring that the trademarks themselves are not at issue, but merely the right to register the trademarks. To reach its result, the court relied on the recent Walker case wherein the Supreme Court declared that Texas was at liberty to prevent Confederate flags from appearing on its license plates, since license plates could be considered the speech of the government.

However, there is an important distinction between license plates and trademarks. License plates are a good entirely of government manufacture. One cannot drive a car on a public road without applying to the government for permission and affixing a government registration tag to the vehicle. The plate is not a blank slate upon which one may express oneself; it is a state-issued information placard used for law enforcement purposes.

Trademarks, arising as they do from actual use, preexist federal recognition. The Lanham Act merely provides a mechanism for registering trademarks that happen to be used in interstate commerce. The federal government then chooses whether to recognize a given trademark when it is contested or offered for registration.

This is a major distinction: the social field of trademarks already exists; the federal government has simply chosen to regulate, and to provide an enforcement mechanism for, these property rights and speech acts when they are used in interstate commerce. Thus it is the market for trademarks that constitutes the forum, not the physically recorded government register. Given that the government has intervened in a preexisting market in a way that protects some state-created trademark property rights but not others, is it proper for it to regulate speech by virtue of its content? I think not.

Further, license plates are obviously government property to anyone who looks at them; plates bear the very name of the state directly on their face. Trademark registration, by contrast, is a largely invisible process that becomes relevant only during legal proceedings. When the public looks at a given trademark, I would argue, the state’s imprimatur is one of the last things that would come to mind.

Thus, a restriction on “immoral” or “disparaging” trademarks constitutes viewpoint discrimination. Eugene Volokh echoed this sentiment when he wrote on the refusal to register “Stop the Islamisation of America”:

Trademark registration … is a government benefit program open to a wide array of speakers with little quality judgment. Like other such programs … it should be seen as a form of “limited public forum,” in which the government may impose content-based limits but not viewpoint-based ones. An exclusion of marks that disparage groups while allowing marks that praise those groups strikes me as viewpoint discrimination.

The Lanham Act endows registrants with government-guaranteed legal rights in connection with the words and symbols by which they are recognized in society. Particularly in a globalized, interconnected society, the brand of an entity is a significant component of how it speaks to society. Discriminating against marks as “immoral” or “disparaging” can be nothing short of viewpoint discrimination.

Commercial Speech Is Protected Speech

As everyone is well aware, the First Amendment provides broad protection for a wide spectrum of speech. The definition of speech itself is likewise broad, including not only words, but also non-verbal gestures and symbols. Any governmental curtailing of such speech will be “presumptively invalid,” with the burden of rebutting that presumption on the government.

When speech is undertaken as part of commerce, it does not magically lose any political, social, or religious dimension it would have in a noncommercial context. Cartoons bearing the image of the Prophet, published as part of a commercial magazine, are surely a political statement deserving of protection. The situation is the same if an organization adopts a logo that is derisive of a particular political or religious ideology: that organization is making a protected, expressive statement through its branding.

At first glance, one might think that defenders of § 2(a) would attempt to qualify scandalous and immoral trademarks as “obscene” and thereby render them subject to censorship. But in McGinley the Federal Circuit explicitly refused to apply the Supreme Court’s obscenity standards to § 2(a), on the grounds that the Lanham Act does not itself use the word “obscenity.” Instead, the Federal Circuit, following the TTAB, was of the opinion that “[w]hat is denied are the benefits provided by the Lanham Act which enhance the value of a mark,” and that the appellant still had legal recourse under state common law. Therefore, so the court in McGinley reasoned, since the right to use the mark is not actually abridged, no expression is abridged. And this is the primary basis upon which the district court in Pro-Football built its argument that no First Amendment concerns were implicated in canceling the Redskins trademark.

This, of course, once again willfully ignores the fact that, by intervening in the field of trademarks and favoring certain speakers over others, the courts effectively allow the Lanham Act to amplify preferred speech and burden disfavored speech. This is true whether we classify the trademark right as a bundle of procedural rights (which in turn make speech competitively possible) or as pure speech directly.

That said, it’s much more in keeping with the tradition of the First Amendment to understand trademarks as a protected category of commercial speech. The Supreme Court has noted that otherwise commercial information may at times be more urgent than even political dialogue, and that information relating to a financial incentive is not necessarily commercial for First Amendment purposes. “[S]ignificant societal interests are served by such speech.” This is so because even entirely commercial speech “may often carry information of import to significant issues of the day.”

Even were commercial speech not fully protected (as I believe it to be), the Supreme Court has also recognized that commercial speech may be so intertwined with noncommercial speech as to make the two inseparable for First Amendment purposes. In particular, commercial messages do more than merely provide information about the characteristics of goods and services:

[S]olicitation is characteristically intertwined with informative and perhaps persuasive speech seeking support for particular causes or for particular views on economic, political, or social issues, and for the reality that without solicitation the flow of such information and advocacy would likely cease.

The analogy to trademarks is rather clear in this context. Although trademarks may refer to a particular product or service, that product or service is not of necessity a purely commercial object. Further, even if the product or service is a commercial object, the trademark itself can be, or can become, a symbolic referent and not a mere sales pitch. Consider, for instance, Mickey Mouse. The iconic mouse ears certainly represent a vast commercial empire generally, and specifically operate as a functional trademark for Mickey Mouse cartoons and merchandise. However, is there not much more of cultural significance to the mark than mere commercial value? The mouse ears represent something culturally – about childhood, about America, and about art – that is much more than merely a piece of pricing or quality information.

The Unconstitutional Conditions Doctrine Prevents Trading Rights for Privileges

The district court (and the Federal Circuit, for that matter) missed a very important dimension in summarily dismissing the First Amendment concerns of trademark holders. These courts dismiss owners of “immoral” or “disparaging” trademarks on the belief that no actual harm is done: the mark holders still own their marks and, as far as the court is concerned, no speech has been suppressed. However, trademark registration, in addition to providing a forum in which to speak, also provides real procedural benefits to the mark holder. For instance, businesses and individuals enjoy nationwide recognition of their presence and can vindicate their interests in federal courts. Without the federal registration that is presumptively supplied to marks that are not “immoral” or “scandalous,” an individual can find himself attempting to protect his interests in a mark in the courts of every state in which he does business.

However, under the unconstitutional conditions doctrine, even though the benefits of trademark registration are not themselves constitutionally guaranteed rights, those benefits cannot be offered in exchange for a trademark owner’s surrender of rights that are guaranteed. Hence the tight link between trademark registration and First Amendment protections that the courts keep ignoring.

It’s also worth noting that this doctrine did not emerge in constitutional jurisprudence until after the period in which the Lanham Act was drafted. Instead, the Lanham Act era was characterized by the rights-privileges distinction, made famous by Oliver Wendell Holmes during his tenure on the Massachusetts Supreme Judicial Court. In McAuliffe, a police officer sued for reinstatement after he was dismissed for his participation in a political organization. In dismissing the case, Holmes held that “[t]he petitioner may have a constitutional right to talk politics, but he has no constitutional right to be a policeman.” This quote from Holmes captures precisely the sense in which the Federal Circuit dismisses the First Amendment concerns of mark holders.

In contrast to this rather antiquated view, the Supreme Court has more recently reaffirmed the proposition that “the government may not deny a benefit to a person because he exercises a constitutional right.” Although this principle admits exceptions, it has been applied to a wide variety of situations, including refusals to renew teaching contracts over First Amendment-protected speech, and infringement of the right to travel through the refusal to extend healthcare benefits to sick persons who had not been residents of a county for at least a year.

Basically, the best defense one can offer for § 2(a) is rooted in an outmoded view of the First Amendment that is, to put it mildly, unconstitutional. We don’t shut down speakers who offend us (at least for the time being), and we should stop attacking trademarks that we find to be immoral.