Archives for Section 230

[The following post was adapted from the International Center for Law & Economics White Paper “Polluting Words: Is There a Coasean Case to Regulate Offensive Speech?”]

Words can wound. They can humiliate, anger, insult.

University students—or, at least, a vociferous minority of them—are keen to prevent this injury by suppressing offensive speech. To ensure campuses are safe places, they militate for the cancellation of talks by speakers with opinions they find offensive, often successfully. And they campaign to get offensive professors fired from their jobs.

Off campus, some want this safety to be extended to the online world and, especially, to the users of social media platforms such as Twitter and Facebook. In the United States, this would mean weakening the legal protections of offensive speech provided by Section 230 of the Communications Decency Act (as President Joe Biden has recommended) or by the First Amendment. In the United Kingdom, the Online Safety Bill is now before Parliament. If passed, it will give a U.K. government agency the power to dictate the content-moderation policies of social media platforms.

You don’t need to be a woke university student or grandstanding politician to suspect that society suffers from an overproduction of offensive speech. Basic economics provides a reason to suspect it—the reason being that offense is an external cost of speech. The cost is borne not by the speaker but by his audience. And when people do not bear all the costs of an action, they do it too much.

Jack tweets “women don’t have penises.” This offends Jill, who is someone with a penis who considers herself (or himself, if Jack is right) to be a woman. And it offends many others, who agree with Jill that Jack is indulging in ugly transphobic biological essentialism. Lacking Bill Clinton’s facility for feeling the pain of others, Jack does not bear this cost. So, even if it exceeds whatever benefit Jack gets from saying that women don’t have penises, he will still say it. In other words, he will say it even when doing so makes society altogether worse off.

It shouldn’t be allowed!

That’s what we normally say when actions harm others more than they benefit the agent. The law normally conforms to John Stuart Mill’s “Harm Principle” by restricting activities—such as shooting people or treating your neighbors to death metal at 130 decibels at 2 a.m.—with material external costs. Those who seek legal reform to restrict offensive speech are surely doing no more than following an accepted general principle.

But it’s not so simple. As Ronald Coase pointed out in his famous 1960 article “The Problem of Social Cost,” externalities are a reciprocal problem. If Wayne had no neighbors, his playing death metal at 130 decibels at 2 a.m. would have no external costs. His neighbors’ choice of address is thus equally a source of the problem. Similarly, if Jill weren’t a Twitter user, she wouldn’t have been offended by Jack’s tweet about who has a penis, since she wouldn’t have encountered it. Externalities are like tangos: they always have at least two perpetrators.

So, the legal question, “who should have a right to what they want?”—Wayne to his loud music or his neighbors to their sleep; Jack to expressing his opinion about women or Jill to not hearing such opinions—cannot be answered by identifying the party who is responsible for the external cost. Both parties are responsible.

How, then, should the question be answered? In the same paper, Coase showed that, in certain circumstances, who the courts favor will make no difference to what ends up happening, and that what ends up happening will be efficient. Suppose the court says that Wayne cannot bother his neighbors with death metal at 2 a.m. If Wayne would be willing to pay $100,000 to keep doing it and his neighbors, combined, would put up with it for anything more than $95,000, then they should be able to arrive at a mutually beneficial deal whereby Wayne pays them something between $95,000 and $100,000 to forgo their right to stop him making his dreadful noise.

That’s not exactly right. If negotiating a deal would cost more than $5,000, then no mutually beneficial deal is possible and the rights-trading won’t happen. Transaction costs lower than the difference between the two parties’ valuations are the circumstance in which the allocation of legal rights makes no difference to how resources get used, and in which efficiency will be achieved in any event.
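To make that bargaining logic concrete, here is a minimal Python sketch using the article’s hypothetical figures; the function and its names are purely illustrative, not part of the original argument:

```python
def deal_is_feasible(payer_value, payee_cost, transaction_cost):
    """A mutually beneficial trade of rights exists only if the gains from
    trade (what the payer would pay minus what the payee would accept)
    exceed the cost of negotiating the deal."""
    return payer_value - payee_cost > transaction_cost

# Wayne values playing at $100,000; the neighbors would tolerate it for $95,000.
print(deal_is_feasible(100_000, 95_000, 1_000))  # True: a payment between $95k and $100k works
print(deal_is_feasible(100_000, 95_000, 6_000))  # False: negotiation eats the $5,000 surplus
```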

But it is an unusual circumstance, especially when the external cost is suffered by many people. When the transaction cost is too high, efficiency does depend on the allocation of rights by courts or legislatures. As Coase argued, when this is so, efficiency will be served if a right to the disputed resource is granted to the party with the higher cost of avoiding the externality.

Given the (implausible) valuations Wayne and his neighbors place on the amount of noise in their environment at 2 a.m., efficiency is served by giving Wayne the right to play his death metal, unless he could soundproof his house or play his music at a much lower volume or take some other avoidance measure that costs him less than the $95,000 cost to his neighbors.

And given that Jack’s tweet about penises offends a large open-ended group of people, with whom Jack therefore cannot negotiate, it looks like they should be given the right not to be offended by Jack’s comment and he should be denied the right to make it. Coasean logic supports the woke censors!          

But, again, it’s not that simple—for two reasons.

The first is that, although those who are offended may be harmed by the offending speech, they needn’t necessarily be. Physical pain is usually harmful, but not when experienced by a sexual masochist (in the right circumstances, of course). Similarly, many people take masochistic pleasure in being offended. You can tell they do, because they actively seek out the sources of their suffering. They are genuinely offended, but the offense isn’t harming them, just as the sexual masochist really is in physical pain but isn’t harmed by it. Indeed, real pain and real offense are required, respectively, for the satisfaction of the sexual masochist and the offense masochist.

How many of the offended are offense masochists? Where the offensive speech can be avoided at minimal cost, the answer must be most. Why follow Jordan Peterson on Twitter when you find his opinions offensive unless you enjoy being offended by him? Maybe some are keeping tabs on the dreadful man so that they can better resist him, and they take the pain for that reason rather than for masochistic glee. But how could a legislator or judge know? For all they know, most of those offended by Jordan Peterson are offense masochists and the offense he causes is a positive externality.

The second reason Coasean logic doesn’t support the would-be censors is that social media platforms—the venues of offensive speech that they seek to regulate—are privately owned. To see why this is significant, consider not offensive speech, but an offensive action, such as openly masturbating on a bus.

This is prohibited by law. But it is not the mere act that is illegal. You are allowed to masturbate in the privacy of your bedroom. You may not masturbate on a bus because those who are offended by the sight of it cannot easily avoid it. That’s why it is illegal to express obscenities about Jesus on a billboard erected across the road from a church but not at a meeting of the Angry Atheists Society. The laws that prohibit offensive speech in such circumstances—laws against public nuisance, harassment, public indecency, etc.—are generally efficient. The cost they impose on the offenders is less than the benefits to the offended.

But they are unnecessary when the giving and taking of offense occur within a privately owned place. Suppose no law prohibited masturbating on a bus. It still wouldn’t be allowed on buses owned by a profit-seeker. Few people want to masturbate on buses and most people who ride on buses seek trips that are masturbation-free. A prohibition on masturbation will gain the owner more customers than it loses him. The prohibition is simply another feature of the product offered by the bus company. Nice leather seats, punctual departures, and no wankers (literally). There is no more reason to believe that the bus company’s passenger-conduct rules will be inefficient than that its other product features will be and, therefore, no more reason to legally stipulate them.

The same goes for the content-moderation policies of social media platforms. They are just another product feature offered by a profit-seeking firm. If they repel more customers than they attract (or, more accurately, if they repel more advertising revenue than they attract), they would be inefficient. But then, of course, the company would not adopt them.

Of course, the owner of a social media platform might not be a pure profit-maximiser. For example, he might forgo $10 million in advertising revenue for the sake of banning speakers he personally finds offensive. But the outcome is still efficient. Allowing the speech would have cost more by way of the owner’s unhappiness than the lost advertising would have been worth.  And such powerful feelings in the owner of a platform create an opportunity for competitors who do not share his feelings. They can offer a platform that does not ban the offensive speakers and, if enough people want to hear what they have to say, attract users and the advertising revenue that comes with them. 

If efficiency is your concern, there is no problem for the authorities to solve. Indeed, the idea that the authorities would do a better job of deciding content-moderation rules is not merely absurd, but alarming. Politicians and the bureaucrats who answer to them or are appointed by them would use the power not to promote efficiency, but to promote agendas congenial to them. Jurisprudence in liberal democracies—and, especially, in America—has been suspicious of governmental control of what may be said. Nothing about social media provides good reason to become any less suspicious.

In recent years, a diverse cross-section of advocates and politicians has leveled criticisms at Section 230 of the Communications Decency Act and its grant of legal immunity to interactive computer services. Proposed legislative changes to the law have been put forward by both Republicans and Democrats.

It remains unclear whether Congress (or the courts) will amend Section 230, but any changes are bound to expand the scope, uncertainty, and expense of content risks. That’s why it’s important that such changes be developed and implemented in ways that minimize their potential to significantly disrupt and harm online activity. This piece focuses on those insurable content risks that most frequently result in litigation and considers the effect of the direct and indirect costs caused by frivolous suits and lawfare, not just the ultimate potential for a court to find liability. The experience of the 1980s asbestos-litigation crisis offers a warning of what could go wrong.

Enacted in 1996, Section 230 was intended to promote the Internet as a diverse medium for discourse, cultural development, and intellectual activity by shielding interactive computer services from legal liability when blocking or filtering access to obscene, harassing, or otherwise objectionable content. Absent such immunity, a platform hosting content produced by third parties could be held equally responsible as the creator for claims alleging defamation or invasion of privacy.

In the current legislative debates, Section 230’s critics on the left argue that the law does not go far enough to combat hate speech and misinformation. Critics on the right claim the law protects censorship of dissenting opinions. Legal challenges to the current wording of Section 230 arise primarily from what constitutes an “interactive computer service,” “good faith” restriction of content, and the grant of legal immunity, regardless of whether the restricted material is constitutionally protected. 

While Congress and various stakeholders debate alternate statutory frameworks, several test cases have simultaneously been working their way through the judicial system, and some states have either passed or are considering legislation to address complaints about Section 230. Some have suggested passing new federal legislation classifying online platforms as common carriers as an alternate approach that does not involve amending or repealing Section 230. Regardless of the form it may take, change to the status quo is likely to increase the risk of litigation and liability for those hosting or publishing third-party content.

The Nature of Content Risk

The class of individuals and organizations exposed to content risk has never been broader. Any information, content, or communication that is created, gathered, compiled, or amended can be considered “material” which, when disseminated to third parties, may be deemed “publishing.” Liability can arise from any step in that process. Those who republish material are generally held to the same standard of liability as if they were the original publisher. (See, e.g., Rest. (2d) of Torts § 578 with respect to defamation.)

Digitization has simultaneously reduced the cost and expertise required to publish material and increased the potential reach of that material. Where it was once limited to books, newspapers, and periodicals, “publishing” now encompasses such activities as creating and updating a website; creating a podcast or blog post; or even posting to social media. Much of this activity is performed by individuals and businesses who have only limited experience with the legal risks associated with publishing.

This is especially true regarding the use of third-party material, which is used extensively by both sophisticated and unsophisticated platforms. Platforms that host third-party-generated content—e.g., social media or websites with comment sections—have historically engaged in only limited vetting of that content, although this is changing. When combined with the potential to reach consumers far beyond the original platform and target audience, lasting digital traces that are difficult to identify and remove, and the need to comply with privacy and other statutory requirements, the potential for all manner of “publishers” to incur legal liability has never been higher.

Even sophisticated legacy publishers struggle with managing the litigation that arises from these risks. There are a limited number of specialist counsel, which results in higher hourly rates. Oversight of legal bills is not always effective, as internal counsel often have limited resources to manage their daily responsibilities and litigation. As a result, legal fees often make up as much as two-thirds of the average claims cost. Accordingly, defense spending and litigation management are indirect, but important, risks associated with content claims.

Effective risk management is any publisher’s first line of defense. The type and complexity of content risk management varies significantly by organization, based on its size, resources, activities, risk appetite, and sophistication. Traditional publishers typically have a formal set of editorial guidelines specifying policies governing the creation of content, pre-publication review, editorial-approval authority, and referral to internal and external legal counsel. They often maintain a library of standardized contracts; have a process to periodically review and update those wordings; and a process to verify the validity of a potential licensor’s rights. Most have formal controls to respond to complaints and to retraction/takedown requests.

Insuring Content Risks

Insurance is integral to most publishers’ risk-management plans. Content coverage is present, to some degree, in most general liability policies (i.e., for “advertising liability”). Specialized coverage—commonly referred to as “media” or “media E&O”—is available on a standalone basis or may be packaged with cyber-liability coverage. Terms of specialized coverage can vary significantly, but generally provide at least basic coverage for the three primary content risks of defamation, copyright infringement, and invasion of privacy.

Insureds typically retain the first dollar loss up to a specific dollar threshold. They may also retain a coinsurance percentage of every dollar thereafter in partnership with their insurer. For example, an insured may be responsible for the first $25,000 of loss, and for 10% of loss above that threshold. Such coinsurance structures often are used by insurers as a non-monetary tool to help control legal spending and to incentivize an organization to employ effective oversight of counsel’s billing practices.

The type and amount of loss retained will depend on the insured’s size, resources, risk profile, risk appetite, and insurance budget. Generally, but not always, increases in an insured’s retention or an insurer’s attachment (e.g., raising the threshold to $50,000, or raising the insured’s coinsurance to 15%) will result in lower premiums. Most insureds will seek the smallest retention feasible within their budget. 
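As a rough illustration of how a retention and coinsurance split a claim, consider the following Python sketch. The policy terms mirror the example above, but the loss amount and policy limit are hypothetical, and real policies contain many more moving parts:

```python
def split_loss(loss, retention, coinsurance, limit):
    """Split a single claim between insured and insurer under a simple
    retention-plus-coinsurance structure (all terms hypothetical)."""
    excess = max(loss - retention, 0)                    # portion above the retention
    insurer_share = min(excess * (1 - coinsurance), limit)
    insured_share = loss - insurer_share                 # retention plus coinsured portion
    return insured_share, insurer_share

# The structure described above: $25,000 retention, 10% coinsurance thereafter.
insured, insurer = split_loss(loss=100_000, retention=25_000, coinsurance=0.10, limit=1_000_000)
print(insured, insurer)  # 32500.0 67500.0 -- the insured keeps the retention plus 10% of the excess
```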

Contract limits (the maximum coverage payout available) will vary based on the same factors. Larger policyholders often build a “tower” of insurance made up of multiple layers of the same or similar coverage issued by different insurers. Two or more insurers may partner on the same “quota share” layer and split any loss incurred within that layer on a pre-agreed proportional basis.  
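The layered “tower” and quota-share arrangement can likewise be sketched in a few lines. The layer attachments, limits, and participation percentages below are invented solely to show how a loss cascades up a tower and is split within a shared layer:

```python
def allocate_to_tower(loss, layers):
    """Allocate a loss across a tower of excess layers, splitting each layer
    among its quota-share participants on pre-agreed percentages."""
    payouts = {}
    for attachment, limit, shares in layers:
        layer_loss = min(max(loss - attachment, 0), limit)   # loss falling within this layer
        for insurer, share in shares.items():
            payouts[insurer] = payouts.get(insurer, 0) + layer_loss * share
    return payouts

# Hypothetical tower: a $10M primary layer, then a $10M-excess-of-$10M quota-share layer.
tower = [
    (0,          10_000_000, {"Insurer A": 1.0}),
    (10_000_000, 10_000_000, {"Insurer B": 0.6, "Insurer C": 0.4}),
]
print(allocate_to_tower(15_000_000, tower))
# {'Insurer A': 10000000, 'Insurer B': 3000000.0, 'Insurer C': 2000000.0}
```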

Navigating the strategic choices involved in developing an insurance program can be complex, depending on an organization’s risks. Policyholders often use commercial brokers to aid them in developing an appropriate risk-management and insurance strategy that maximizes coverage within their budget and to assist with claims recoveries. This is particularly important for small and mid-sized insureds who may lack the sophistication or budget of larger organizations. Policyholders and brokers try to minimize the gaps in coverage between layers and among quota-share participants, but such gaps can occur, leaving a policyholder partially self-insured.

An organization’s options to insure its content risk may also be influenced by the dynamics of the overall insurance market or within specific content lines. Underwriters are not all created equal; underwriting is a challenging responsibility that requires a degree of prediction, and some underwriters may fail to adequately identify and account for certain risks. It can also be challenging to accurately measure risk aggregation and set appropriate reserves. An insurer’s appetite for certain lines and the availability of supporting reinsurance can fluctuate based on trends in the general capital markets. Specialty media/content coverage is a small niche within the global commercial insurance market, which makes insurers in this line more sensitive to these general trends.

Litigation Risks from Changes to Section 230

A full repeal or judicial invalidation of Section 230 generally would make every platform responsible for all the content it disseminates, regardless of who created the material, requiring at least some additional editorial review. This would significantly disadvantage platforms that host a large volume of third-party content. Internet service providers, cable companies, social media, and product/service review companies would be put under tremendous strain, given the daily volume of content produced. To reduce the risk that they serve as a “deep pocket” target for plaintiffs, they would likely adopt more robust pre-publication screening of content and authorized third parties; limit public interfaces; require registration before a user may publish content; employ more reactive complaint response/takedown policies; and ban problem users more frequently. Small and mid-sized enterprises (SMEs), as well as those not focused primarily on the business of publishing, would likely avoid many interactive functions altogether.

A full repeal would be, in many ways, a blunderbuss approach to dealing with criticisms of Section 230, and would cause at least as many problems as it solves. In the current polarized environment, it also appears unlikely that Congress will reach bipartisan agreement on amended language for Section 230 or on classifying interactive computer services as common carriers, given that the changes desired by the political left and right are so divergent. What may be more likely is that courts encounter a test case that prompts them to clarify the application of the existing statutory language—i.e., whether an entity was acting as a neutral platform or a content creator, whether its conduct was in “good faith,” and whether the material is “objectionable” within the meaning of the statute.

An increase in the frequency of litigation is almost inevitable in the wake of any changes to the status quo, whether made by Congress or the courts. Major litigation would likely focus on those social-media platforms at the center of the Section 230 controversy, such as Facebook and Twitter, given their active role in these issues, their deep pockets, and, potentially, various admissions against interest helpful to plaintiffs regarding their level of editorial judgment. SMEs could also be affected in the immediate wake of a change to the statute or its interpretation. While SMEs are likely to be implicated on a smaller scale, the impact of litigation could be even more damaging to their viability if they are not adequately insured.

Over time, the boundaries of an amended Section 230’s application and any consequential effects should become clearer as courts develop application criteria and precedent is established for different fact patterns. Exposed platforms will likely make changes to their activities and risk-management strategies consistent with such developments. Operationally, some interactive features—such as comment sections or product and service reviews—may become less common.

In the short and medium term, however, a period of increased and unforeseen litigation to resolve these issues is likely to prove expensive and damaging. Insurers of content risks are likely to bear the brunt of any changes to Section 230, because these risks and their financial costs would be new, uncertain, and not incorporated into historical pricing of content risk. 

Remembering the Asbestos Crisis

The introduction of a new exposure or legal risk can have significant financial effects on commercial insurance carriers. New and revised risks must be accounted for in the assumptions, probabilities, and load factors used in insurance pricing and reserving models. Even small changes in those values can have large aggregate effects, which may undermine confidence in those models, complicate obtaining reinsurance, or harm an insurer’s overall financial health.

For example, in the 1980s, certain courts adopted the triple-trigger and continuous trigger methods[1] of determining when a policyholder could access coverage under an “occurrence” policy for asbestos claims. As a result, insurers paid claims under policies dating back to the early 1900s and, in some cases, under all policies from that date until the date of the claim. Such policies were written when mesothelioma related to asbestos was unknown and not incorporated into the policy pricing.

Insurers had long since released reserves from the decades-old policy years, so those resources were not available to pay claims. Nor could underwriters retroactively increase premiums for the intervening years and smooth out the cost of these claims. This created extreme financial stress for impacted insurers and reinsurers, with some ultimately rendered insolvent. Surviving carriers responded by drastically reducing coverage and increasing prices, which resulted in a major capacity shortage that resolved only after the creation of the Bermuda insurance and reinsurance market.

The asbestos-related liability crisis represented a perfect storm that is unlikely to be replicated. Given the ubiquitous nature of digital content, however, any drastic or misconceived changes to Section 230 protections could still cause significant disruption to the commercial insurance market. 

Content risk is covered, at least in part, by general liability and many cyber policies, but it is not currently a primary focus for underwriters. Specialty media underwriters are more likely to be monitoring Section 230 risk, but the highly competitive market will make it difficult for them to respond to any changes with significant price increases. In addition, the current market environment for U.S. property and casualty insurance generally is in the midst of correcting for years of inadequate pricing, expanding coverage, developing exposures, and claims inflation. It would be extremely difficult to charge an adequate premium increase if the potential severity of content risk were to increase suddenly.

In the face of such risk uncertainty and challenges to adequately increasing premiums, underwriters would likely seek to reduce their exposure to online content risks, i.e., by reducing the scope of coverage, reducing limits, and increasing retentions. How these changes would play out, and how painful they would be for all involved, would likely depend on how quickly policyholders’ risk profiles change.

Small or specialty carriers caught unprepared could be forced to exit the market if they experienced a sharp spike in claims or unexpected increase in needed reserves. Larger, multiline carriers may respond by voluntarily reducing or withdrawing their participation in this space. Insurers exposed to ancillary content risk may simply exclude it from cover if adequate price increases are impractical. Such reactions could result in content coverage becoming harder to obtain or unavailable altogether. This, in turn, would incentivize organizations to limit or avoid certain digital activities.

Finding a More Thoughtful Approach

The tension between calls for reform of Section 230 and the potential for disrupting online activity does not mean that political leaders and courts should ignore these issues. Rather, it means that what’s required is a thoughtful, clear, and predictable approach to any changes, with the goal of maximizing the clarity of the changes and their application and minimizing any resulting litigation. Regardless of whether accomplished through legislation or the judicial process, addressing the following issues could minimize the duration and severity of any period of harmful disruption regarding content risk:

  1. Presumptive immunity – Include an express statement in the definition of “interactive computer service,” or infer one judicially, to clarify that platforms hosting third-party content enjoy a rebuttable presumption that statutory immunity applies. This would discourage frivolous litigation as courts establish precedent defining the applicability of any other revisions.
  2. Specify the grounds for losing immunity – Clarify, at a minimum, what constitutes “good faith” with respect to content restrictions and further clarify what material is or is not “objectionable,” as it relates to newsworthy content or actions that trigger loss of immunity.
  3. Specify the scope and duration of any loss of immunity – Clarify whether the loss of immunity is total, categorical, or specific to the situation under review and the duration of that loss of immunity, if applicable.
  4. Reinstatement of immunity, subject to burden-shifting – Clarify what a platform must do to reinstate statutory immunity on a go-forward basis and clarify that it bears the burden of proving its go-forward conduct entitles it to statutory protection.
  5. Address associated issues – Any clarification or interpretation should address other issues likely to arise, such as the effect and weight to be given to a platform’s application of its community standards, adherence to neutral takedown/complaint procedures, etc. Care should be taken to avoid overcorrecting and creating a “heckler’s veto.”
  6. Deferred effect – If change is made legislatively, the effective date should be deferred for a reasonable time to allow platforms sufficient opportunity to adjust their current risk-management policies, contractual arrangements, content publishing and storage practices, and insurance arrangements in a thoughtful, orderly fashion that accounts for the new rules.

Ultimately, legislative and judicial stakeholders will chart their own course to address the widespread dissatisfaction with Section 230. More important than any of these specific policy suggestions is the principle that underpins them: that any changes incorporate due consideration for the potential direct and downstream harm that can be caused if policy is not clear, comprehensive, and designed to minimize unnecessary litigation.

It is no surprise that, in the years since Section 230 of the Communications Decency Act was passed, the environment and risks associated with digital platforms have evolved, or that those changes have created a certain amount of friction in the law’s application. Policymakers should employ a holistic approach when evaluating their legislative and judicial options to revise or clarify the application of Section 230. Doing so in a targeted, predictable fashion should help to mitigate or avoid the risk of increased litigation and other unintended consequences that might otherwise prove harmful to online platforms and the commercial insurance market.

Aaron Tilley is a senior insurance executive with more than 16 years of commercial insurance experience in executive management, underwriting, legal, and claims, working in or with the U.S., Bermuda, and London markets. He has served as chief underwriting officer of a specialty media E&O and cyber-liability insurer and as coverage counsel representing international insurers with respect to a variety of E&O and advertising liability claims.


[1] The triple-trigger method allowed a policy to be accessed based on the date of the injury-in-fact, manifestation of injury, or exposure to substances known to cause injury. The continuous trigger allowed all policies issued by an insurer, not just one, to be accessed if a triggering event could be established during the policy period.

In what has become regularly scheduled programming on Capitol Hill, Facebook CEO Mark Zuckerberg, Twitter CEO Jack Dorsey, and Google CEO Sundar Pichai will be subject to yet another round of congressional grilling—this time, about the platforms’ content-moderation policies—during a March 25 joint hearing of two subcommittees of the House Energy and Commerce Committee.

The stated purpose of this latest bit of political theatre is to explore, as made explicit in the hearing’s title, “social media’s role in promoting extremism and misinformation.” Specific topics are expected to include proposed changes to Section 230 of the Communications Decency Act, heightened scrutiny by the Federal Trade Commission, and misinformation about COVID-19—the subject of new legislation introduced by Rep. Jennifer Wexton (D-Va.) and Sen. Mazie Hirono (D-Hawaii).

But while many in the Democratic majority argue that social media companies have not done enough to moderate misinformation or hate speech, it is a problem with no realistic legal fix. Any attempt to mandate removal of speech on grounds that it is misinformation or hate speech, either directly or indirectly, would run afoul of the First Amendment.

Much of the recent focus has been on misinformation spread on social media about the 2020 election and the COVID-19 pandemic. The memorandum for the March 25 hearing sums it up:

Facebook, Google, and Twitter have long come under fire for their role in the dissemination and amplification of misinformation and extremist content. For instance, since the beginning of the coronavirus disease of 2019 (COVID-19) pandemic, all three platforms have spread substantial amounts of misinformation about COVID-19. At the outset of the COVID-19 pandemic, disinformation regarding the severity of the virus and the effectiveness of alleged cures for COVID-19 was widespread. More recently, COVID-19 disinformation has misrepresented the safety and efficacy of COVID-19 vaccines.

Facebook, Google, and Twitter have also been distributors for years of election disinformation that appeared to be intended either to improperly influence or undermine the outcomes of free and fair elections. During the November 2016 election, social media platforms were used by foreign governments to disseminate information to manipulate public opinion. This trend continued during and after the November 2020 election, often fomented by domestic actors, with rampant disinformation about voter fraud, defective voting machines, and premature declarations of victory.

It is true that, despite social media companies’ efforts to label and remove false content and bar some of the biggest purveyors, there remains a considerable volume of false information on social media. But U.S. Supreme Court precedent consistently has limited government regulation of false speech to distinct categories like defamation, perjury, and fraud.

The Case of Stolen Valor

The court’s 2012 decision in United States v. Alvarez struck down as unconstitutional the Stolen Valor Act of 2005, which made it a federal crime to falsely claim to have earned a military medal. A four-justice plurality opinion written by Justice Anthony Kennedy, along with a two-justice concurrence, both agreed that a statement being false did not, by itself, exclude it from First Amendment protection.

Kennedy’s opinion noted that while the government may impose penalties for false speech connected with the legal process (perjury or impersonating a government official); with receiving a benefit (fraud); or with harming someone’s reputation (defamation); the First Amendment does not sanction penalties for false speech, in and of itself. The plurality exhibited particular skepticism toward the notion that government actors could be entrusted as a “Ministry of Truth,” empowered to determine what categories of false speech should be made illegal:

Permitting the government to decree this speech to be a criminal offense, whether shouted from the rooftops or made in a barely audible whisper, would endorse government authority to compile a list of subjects about which false statements are punishable. That governmental power has no clear limiting principle. Our constitutional tradition stands against the idea that we need Oceania’s Ministry of Truth… Were this law to be sustained, there could be an endless list of subjects the National Government or the States could single out… Were the Court to hold that the interest in truthful discourse alone is sufficient to sustain a ban on speech, absent any evidence that the speech was used to gain a material advantage, it would give government a broad censorial power unprecedented in this Court’s cases or in our constitutional tradition. The mere potential for the exercise of that power casts a chill, a chill the First Amendment cannot permit if free speech, thought, and discourse are to remain a foundation of our freedom. [EMPHASIS ADDED]

As noted in the opinion, declaring false speech illegal constitutes a content-based restriction subject to “exacting scrutiny.” Applying that standard, the court found “the link between the Government’s interest in protecting the integrity of the military honors system and the Act’s restriction on the false claims of liars like respondent has not been shown.” 

While finding that the government “has not shown, and cannot show, why counterspeech would not suffice to achieve its interest,” the plurality suggested a more narrowly tailored solution could be simply to publish Medal of Honor recipients in an online database. In other words, the government could overcome the problem of false speech by promoting true speech. 

In 2013, President Barack Obama signed an updated version of the Stolen Valor Act that limited its penalties to situations where a misrepresentation is shown to result in receipt of some kind of benefit. That places the false speech in the category of fraud, consistent with the Alvarez opinion.

A Social Media Ministry of Truth

Applying the Alvarez standard to social media, the government could (and already does) promote its interest in public health or election integrity by publishing true speech through official channels. But there is little reason to believe the government at any level could regulate access to misinformation. Anything approaching an outright ban on accessing speech deemed false by the government not only would not be the most narrowly tailored way to deal with such speech, but it is bound to have chilling effects even on true speech.

The analysis doesn’t change if the government instead places Big Tech itself in the position of Ministry of Truth. Some propose making changes to Section 230, which currently immunizes social media companies from liability for user speech (with limited exceptions), regardless what moderation policies the platform adopts. A hypothetical change might condition Section 230’s liability shield on platforms agreeing to moderate certain categories of misinformation. But that would still place the government in the position of coercing platforms to take down speech. 

Even the “fix” of making social media companies liable for user speech they amplify through promotions on the platform, as proposed by Sen. Mark Warner’s (D-Va.) SAFE TECH Act, runs into First Amendment concerns. The aim of the bill is to regard sponsored content as constituting speech made by the platform, thus opening the platform to liability for the underlying misinformation. But any such liability also would be limited to categories of speech that fall outside First Amendment protection, like fraud or defamation. This would not appear to include most of the types of misinformation on COVID-19 or election security that animate the current legislative push.

There is no way for the government to regulate misinformation, in and of itself, consistent with the First Amendment. Big Tech companies are free to develop their own policies against misinformation, but the government may not force them to do so. 

Extremely Limited Room to Regulate Extremism

The Big Tech CEOs are also almost certain to be grilled about the use of social media to spread “hate speech” or “extremist content.” The memorandum for the March 25 hearing sums it up like this:

Facebook executives were repeatedly warned that extremist content was thriving on their platform, and that Facebook’s own algorithms and recommendation tools were responsible for the appeal of extremist groups and divisive content. Similarly, since 2015, videos from extremists have proliferated on YouTube; and YouTube’s algorithm often guides users from more innocuous or alternative content to more fringe channels and videos. Twitter has been criticized for being slow to stop white nationalists from organizing, fundraising, recruiting and spreading propaganda on Twitter.

Social media has often played host to racist, sexist, and other types of vile speech. While social media companies have community standards and other policies that restrict “hate speech” in some circumstances, there is demand from some public officials that they do more. But under a First Amendment analysis, regulating hate speech on social media would fare no better than the regulation of misinformation.

The First Amendment doesn’t allow for the regulation of “hate speech” as its own distinct category. Hate speech is, in fact, as protected as any other type of speech. There are some limited exceptions, as the First Amendment does not protect incitement, true threats of violence, or “fighting words.” Some of these flatly do not apply in the online context. “Fighting words,” for instance, applies only in face-to-face situations to “those personally abusive epithets which, when addressed to the ordinary citizen, are, as a matter of common knowledge, inherently likely to provoke violent reaction.”

One relevant precedent is the court’s 1992 decision in R.A.V. v. St. Paul, which considered a local ordinance in St. Paul, Minnesota, prohibiting public expressions that served to cause “outrage, alarm, or anger with respect to racial, gender or religious intolerance.” A juvenile was charged with violating the ordinance when he created a makeshift cross and lit it on fire in front of a black family’s home. The court unanimously struck down the ordinance as a violation of the First Amendment, finding it an impermissible content-based restraint that was not limited to incitement or true threats.

By contrast, in 2003’s Virginia v. Black, the Supreme Court upheld a Virginia law outlawing cross burnings done with the intent to intimidate. The court’s opinion distinguished R.A.V. on grounds that the Virginia statute didn’t single out speech regarding disfavored topics. Instead, it was aimed at speech that had the intent to intimidate regardless of the victim’s race, gender, religion, or other characteristic. But the court was careful to limit government regulation of hate speech to instances that involve true threats or incitement.

When it comes to incitement, the legal standard was set by the court’s landmark Brandenburg v. Ohio decision in 1969, which laid out that:

the constitutional guarantees of free speech and free press do not permit a State to forbid or proscribe advocacy of the use of force or of law violation except where such advocacy is directed to inciting or producing imminent lawless action and is likely to incite or produce such action. [EMPHASIS ADDED]

In other words, while “hate speech” is protected by the First Amendment, specific types of speech that convey true threats or fit under the related doctrine of incitement are not. The government may regulate those types of speech. And they do. In fact, social media users can be, and often are, charged with crimes for threats made online. But the government can’t issue a per se ban on hate speech or “extremist content.”

Just as with misinformation, the government also can’t condition Section 230 immunity on platforms removing hate speech. Insofar as speech is protected under the First Amendment, the government can’t specifically condition a government benefit on its removal. Even the SAFE TECH Act’s model for holding platforms accountable for amplifying hate speech or extremist content would have to be limited to speech that amounts to true threats or incitement. This is a far narrower category of hateful speech than the examples that concern legislators. 

Social media companies do remain free under the law to moderate hateful content as they see fit under their terms of service. Section 230 immunity is not dependent on whether companies do or don’t moderate such content, or on how they define hate speech. But government efforts to step in and define hate speech would likely run into First Amendment problems unless they stay focused on unprotected threats and incitement.

What Can the Government Do?

One may fairly ask what it is that governments can do to combat misinformation and hate speech online. The answer may be a law that requires takedowns by court order of speech after it is declared illegal, as proposed by the PACT Act, sponsored in the last session by Sens. Brian Schatz (D-Hawaii) and John Thune (R-S.D.). Such speech may, in some circumstances, include misinformation or hate speech.

But as outlined above, the misinformation that the government can regulate is limited to situations like fraud or defamation, while the hate speech it can regulate is limited to true threats and incitement. A narrowly tailored law that looked to address those specific categories may or may not be a good idea, but it would likely survive First Amendment scrutiny, and may even prove a productive line of discussion with the tech CEOs.

President Donald Trump has repeatedly called for repeal of Section 230. But while Trump and fellow conservatives decry Big Tech companies for their alleged anti-conservative bias, including at yet more recent hearings, their issue is not actually with Section 230. It’s with the First Amendment. 

Conservatives can’t actually do anything directly about how social media platforms moderate content because it is the First Amendment that grants those platforms a right to editorial discretion. Even FCC Commissioner Brendan Carr, who strongly opposes “Big Tech censorship,” recognizes this.

By the same token, even if one were to grant that conservatives are right about the bias of moderators at these large social media platforms, it does not follow that removal of Section 230 immunity would alter that bias. In fact, in a world without Section 230 immunity, there still would be no legal cause of action for political bias. 

The truth is that conservatives use Section 230 immunity for leverage over social media platforms. The hope is that, because social media platforms desire the protections of civil immunity for third-party content, they will follow whatever conditions the government puts on their editorial discretion. But the attempt to end-run the First Amendment’s protections is also unconstitutional.

There is no cause of action for political bias by online platforms if we repeal Section 230

Consider the counterfactual: if there were no Section 230 to immunize them from liability, under what law would platforms face a viable cause of action for political bias? Conservative critics never answer this question. Instead, they focus on the irrelevant distinction between publishers and platforms. Or they talk about how Section 230 is a giveaway to Big Tech. But none consider the actual relationship between Section 230 immunity and alleged political bias.

But let’s imagine we’ve done what President Trump has called for and repealed Section 230. Where does that leave conservatives?

Unfortunately, it leaves them without any cause of action. There is no law passed by Congress or any state legislature, no regulation promulgated by the Federal Communications Commission or the Federal Trade Commission, no common law tort action that can be asserted against online platforms to force them to carry speech they don’t wish to carry. 

The difficulties of pursuing a contract claim for political bias

The best argument for conservatives is that, without Section 230 immunity, online platforms could be more easily held to any contractual restraints in their terms of service. If a platform promises, for instance, that it will moderate speech in a politically neutral way, a user could make the case that the platform violated its terms of service if it acted with political bias in her particular case.

For the vast majority of users, it is unclear whether there are damages from having a post fact-checked or removed. But for users who share in advertising revenue, the concrete injury from a moderation decision is more obvious. PragerU, for example, has (unsuccessfully) sued Google for being put in Restricted Mode on YouTube, which reduces its reach and advertising revenue. 

Even where there is a concrete injury that gets a case into court, that doesn’t necessarily mean there is a valid contract claim. In PragerU’s case against Google, a California court dismissed contract claims because the YouTube terms of service contract was written to allow the platform to retain discretion over what is published. Specifically, the court found that there can be no implied covenant of good faith and fair dealing where “YouTube reserves the right to remove Content without prior notice” and to “discontinue any aspect of the Service at any time.”

Breach-of-contract claims for moderation practices are highly dependent on what is actually promised in the terms of service. For instance, under Facebook’s TOS the company retains the right “to remove or restrict access to content that is in violation” of its community standards. Facebook does provide a process for users to request further review, but retains the right to remove content. The community standards also give Facebook broad discretion to determine, among other things, what counts as hate speech or false news. It is exceedingly unlikely that a court would ever have a basis to find a contract violation by Facebook if the company can reasonably point to a user’s violation of its terms of service. 

For example, in Ebeid v. Facebook, the U.S. Northern District of California dismissed fraud and breach of contract claims, finding the plaintiff failed to allege what contractual provision Facebook breached, that Facebook retained discretion over what ads would be posted, and that the plaintiff suffered no damages because no money was taken to be spent on the ads. The court also dismissed an implied covenant of good faith and fair dealing claim because Facebook retained the right to “remove or disapprove any post or ad at Facebook’s sole discretion.”

While the conservative critique has been that social media platforms do too much moderation—in the form of politically biased removals, fact-checking, and demonetization—others believe platforms do far too little to restrain bad conduct by users. But as long as social media platforms retain editorial discretion in their terms of service and make no other promises that can be relied upon by their users, there is little basis for a contract claim. 

The First Amendment protects the moderation policies of social media platforms, and there is no way around this

With no reasonable cause of action for political bias under the law, conservatives dangle the threat of making changes to Section 230 immunity that could prove costly to the social media platforms in order to extract concessions from the platforms to alter their practices.

This is why there are no serious efforts to actually repeal Section 230, as President Trump has asked for repeatedly. Instead, several bills propose to amend Section 230, while a rulemaking by the FCC seeks to clarify its meaning. 

But none of these proposed bills would directly affect platforms’ ability to make “biased” moderation decisions. Put simply: the First Amendment protects social media platforms’ editorial discretion. They may set rules for the use of their platforms, just as any private person may set rules for their own property. If I kick someone off my property for saying racist things, the First Amendment (as well as regular property law) protects my right to do so. Only under extremely limited circumstances can the government change this baseline rule and survive constitutional scrutiny.

Social media platforms’ right to editorial discretion is the same as that enjoyed by newspapers. In Miami Herald Publishing Co. v. Tornillo, the Supreme Court found:

The choice of material to go into a newspaper, and the decisions made as to limitations on the size and content of the paper, and treatment of public issues and public officials—whether fair or unfair—constitute the exercise of editorial control and judgment. It has yet to be demonstrated how governmental regulation of this crucial process can be exercised consistent with First Amendment guarantees of a free press as they have evolved to this time. 

Social media platforms, just like any other property owner, have the right to determine what they want displayed on their property. In other words, Facebook, Google, and Twitter have the right to moderate content on news feeds, search results, and timelines. The attempted constitutional end-run—threatening to remove immunity for third-party content unrelated to political bias, like defamation and other tortious acts, unless social media platforms give up their right to editorial discretion over political speech—is just as unconstitutional as directly imposing “fairness” requirements on social media platforms.

The Supreme Court has held that Congress may not leverage a government benefit to regulate a speech interest outside of the benefit’s scope. This is called the unconstitutional conditions doctrine. It basically delineates the level of regulation the government can undertake through subsidizing behavior. The government can’t condition a government benefit on giving up editorial discretion over political speech.

The point of Section 230 immunity is to remedy the moderator’s dilemma set up by Stratton Oakmont v. Prodigy, which held that if a platform chose to moderate third-party speech at all, it would be liable for what was not removed. Section 230 is not about compelling political neutrality on platforms, because such a mandate could not be consistent with the First Amendment. Civil immunity for third-party speech online is an important benefit for social media platforms because it means they are not liable for the acts of third parties, with limited exceptions. Without it, platforms would restrict opportunities for third parties to post, out of fear of liability.

In sum, the government may not condition enjoyment of a government benefit upon giving up a constitutionally protected right. Section 230 immunity is a clear government benefit. The right to editorial discretion is clearly protected by the First Amendment. Because the entire point of conservative Section 230 reform efforts is to compel social media platforms to carry speech they otherwise desire to remove, it fails this basic test.

Conclusion

Fundamentally, the conservative push to reform Section 230 in response to the alleged anti-conservative bias of major social media platforms is not about policy. Really, it’s about waging a culture war against the perceived “liberal elites” from Silicon Valley, just as there is an ongoing culture war against perceived “liberal elites” in the mainstream media, Hollywood, and academia. But fighting this culture war is not worth giving up conservative principles of free speech, limited government, and free markets.

Over at the Federalist Society’s blog, there has been an ongoing debate about what to do about Section 230. While there has long been variety in what we call conservatism in the United States, the most prominent strains have agreed on at least the following: Constitutionally limited government, free markets, and prudence in policy-making. You would think all of these values would be important in the Section 230 debate. It seems, however, that some are willing to throw these principles away in pursuit of a temporary political victory over perceived “Big Tech censorship.”

Constitutionally Limited Government: Congress Shall Make No Law

The First Amendment of the United States Constitution states: “Congress shall make no law… abridging the freedom of speech.” Originalists on the Supreme Court have noted that this makes clear that the Constitution protects against state action, not private action. In other words, the Constitution protects a negative conception of free speech, not a positive conception.

Despite this, some conservatives believe that Section 230 should be about promoting First Amendment values by mandating that private entities be held to the same standards as the government.

For instance, in his “Big Tech and the Whole First Amendment,” Craig Parshall of the American Center for Law and Justice (ACLJ) stated:

What better example of objective free speech standards could we have than those First Amendment principles decided by justices appointed by an elected president and confirmed by elected members of the Senate, applying the ideals laid down by our Founders? I will take those over the preferences of brilliant computer engineers any day.

In other words, he thinks Section 230 should be amended to only give Big Tech the “subsidy” of immunity if it commits to a First Amendment-like editorial regime. To defend the constitutionality of such “restrictions on Big Tech,” he points to the Turner intermediate scrutiny standard, in which the Supreme Court upheld must-carry provisions against cable networks. In particular, Parshall latches on to the “bottleneck monopoly” language from the case to argue that Big Tech is similarly situated to cable providers at the time of the case.

Turner, however, turned more on the “special characteristics of the cable medium” that gave it the bottleneck power than on the market power itself. As stated by the Supreme Court:

When an individual subscribes to cable, the physical connection between the television set and the cable network gives the cable operator bottleneck, or gatekeeper, control over most (if not all) of the television programming that is channeled into the subscriber’s home. Hence, simply by virtue of its ownership of the essential pathway for cable speech, a cable operator can prevent its subscribers from obtaining access to programming it chooses to exclude. A cable operator, unlike speakers in other media, can thus silence the voice of competing speakers with a mere flick of the switch.

Turner v. FCC, 512 U.S. 622, 656 (1994).

None of the Big Tech companies has the comparable ability to silence competing speakers with a flick of the switch. In fact, the relationship goes the other way on the Internet. Users can (and do) use multiple Big Tech companies’ services, as well as those of competitors which are not quite as big. Users are the ones who can switch with a click or a swipe. There is no basis for treating Big Tech companies any differently than other First Amendment speakers.

Like newspapers, Big Tech companies must use their editorial discretion to determine what is displayed and where. Just like those newspapers, Big Tech has the First Amendment right to editorial discretion. This, not Section 230, is the bedrock law that gives Big Tech companies the right to remove content.

Thus, when Rachel Bovard of the Internet Accountability Project argues that the FCC should remove the ability of tech platforms to engage in viewpoint discrimination, she makes a serious error in arguing it is Section 230 that gives them the right to remove content.

Immediately upon noting that the NTIA petition seeks clarification on the relationship between (c)(1) and (c)(2), Bovard moves right to concern over the removal of content. “Unfortunately, embedded in that section [(c)(2)] is a catch-all phrase, ‘otherwise objectionable,’ that gives tech platforms discretion to censor anything that they deem ‘otherwise objectionable.’ Such broad language lends itself in practice to arbitrariness.” 

In order for CDA 230 to “give[] tech platforms discretion to censor,” platforms would have to lack that discretion absent CDA 230. Bovard totally misses the point of the First Amendment argument, stating:

Yet DC’s tech establishment frequently rejects this argument, choosing instead to focus on the First Amendment right of corporations to suppress whatever content they so choose, never acknowledging that these choices, when made at scale, have enormous ramifications. . . . 

But this argument intentionally sidesteps the fact that Sec. 230 is not required by the First Amendment, and that its application to tech platforms privileges their First Amendment behavior in a unique way among other kinds of media corporations. Newspapers also have a First Amendment right to publish what they choose—but they are subject to defamation and libel laws for content they write, or merely publish. Media companies also make First Amendment decisions subject to a thicket of laws and regulations that do not similarly encumber tech platforms.

There is the merest kernel of truth in the lines quoted above. Newspapers are indeed subject to defamation and libel laws for what they publish. But, as should be obvious, liability for publication entails actually publishing something. And what some conservatives are concerned about is platforms’ ability to not publish something: to take down conservative content.

It might be simpler if the First Amendment treated published speech and unpublished speech the same way. But it doesn’t. One can be liable for what one speaks, writes, or publishes on behalf of others. Indeed, even with the full protection of the First Amendment, there is no question that newspapers can be held responsible for delicts caused by content they publish. But no newspaper has ever been held responsible for anything it didn’t publish.

Free Markets: Competition as the Bulwark Against Abuses, not Regulation

Conservatives have long believed in the importance of property rights, exchange, and the power of the free market to promote economic growth. Competition is seen as the protector of the consumer, not big government regulators. In the latter half of the twentieth century into the twenty-first century, conservatives have fought for capitalism over socialism, free markets over regulation, and competition over cronyism. But in the name of combating anti-conservative bias online, they are willing to throw these principles away.

The bedrock belief in the right of property owners to decide the terms of how they want to engage with others is fundamental to American conservatism. As stated by none other than Bovard (along with co-author Jim DeMint in their book Conservative: Knowing What to Keep):

Capitalism is nothing more or less than the extension of individual freedom from the political and cultural realms to the economy. Just as government isn’t supposed to tell you how to pray, or what to think, or what sports teams to follow or books to read, it’s not supposed to tell you what to do with your own money and property.

Conservatives normally believe that it is the free choices of consumers and producers in the marketplace that maximize consumer welfare, rather than the choices of politicians and bureaucrats. Competition, in other words, is what protects us from abuses in the marketplace. Again, as Bovard and DeMint rightly put it:

Under the free enterprise system, money is not redistributed by a central government bureau. It goes wherever people see value. Those who create value are rewarded which then signals to the rest of the economy to up their game. It’s continuous democracy.

To get around this, both Parshall and Bovard make much of the “market dominance” of tech platforms. The essays take the position that tech platforms have nearly unassailable monopoly power which makes them unaccountable. Bovard claims that “mega-corporations have as much power as the government itself—and in some ways, more power, because theirs is unchecked and unaccountable.” Parshall even connects this to antitrust law, stating:  

This brings us to another kind of innovation, one that’s hidden from the public view. It has to do with how Big Tech companies use both algorithms plus human review during content moderation. This review process has resulted in the targeting, suppression, or down-ranking of primarily conservative content. As such, this process, should it continue, should be considered a kind of suppressive “innovation” in a quasi-antitrust analysis.

How the process harms “consumer welfare” is obvious. A more competitive market could produce social media platforms designing more innovational content moderation systems that honor traditional free speech and First Amendment norms while still offering features and connectivity akin to the huge players.

Antitrust law, in theory, would be a good way to handle issues of market power and consumer harm that results from non-price effects. But it is difficult to see how antitrust could handle the issue of political bias well:

Just as with privacy and other product qualities, the analysis becomes increasingly complex first when tradeoffs between price and quality are introduced, and then even more so when tradeoffs between what different consumer groups perceive as quality are added. In fact, it is more complex than privacy. All but the most exhibitionistic would prefer more to less privacy, all other things being equal. But with political media consumption, most would prefer to have more of what they want to read available, even if it comes at the expense of what others may want. There is no easy way to understand what consumer welfare means in a situation where one group’s preferences need to come at the expense of another’s in moderation decisions.

Neither antitrust nor quasi-antitrust regimes are well-suited to dealing with the perceived harm of anti-conservative bias. However unfulfilling this is to some conservatives, competition and choice are better answers to perceived political bias than the heavy hand of government. 

Prudence: Awareness of Unintended Consequences

Another bedrock principle of conservatism is to be aware of unintended consequences when making changes to long-standing laws and policies. In regulatory matters, cost-benefit analysis is employed to evaluate whether policies are improving societal outcomes. Using economic thinking to understand the likely responses to changes in regulation is fundamental to American conservatism. Or as Bovard and DeMint’s book title suggests, conservatism is about knowing what to keep.

Bovard has argued that since conservatism is a set of principles, not a dogmatic ideology, it can be in favor of fighting against the collectivism of Big Tech companies imposing their political vision upon the world. Conservatism, in this Kirkian sense, doesn’t require particular policy solutions. But this analysis misses what has worked about Section 230 and how the very tech platforms she decries have greatly benefited society. Prudence means understanding what has worked and changing it only in ways that will improve upon it.

The benefits of Section 230 immunity in promoting platforms for third-party speech are clear. It is not an overstatement to say that Section 230 contains “The Twenty-Six Words that Created the Internet.” It is important to note that Section 230 is not only available to Big Tech companies. It is available to all online platforms that host third-party speech. Any reform efforts at Section 230 must know what to keep.

In a sense, Section 230(c)(1) does, indeed, provide greater protection for published content online than the First Amendment on its own would offer: it extends the First Amendment’s permissible scope of published content for which an online service cannot be held liable to include otherwise actionable third-party content.

But let’s be clear about the extent of this protection. It doesn’t protect anything a platform itself publishes, or even anything it has a significant hand in producing. Why don’t offline newspapers enjoy this “handout” (though their online versions clearly do for comments)? Because they don’t need it, and because — yes, it’s true — it comes at a cost. How much third-party content would newspapers publish without significant input from the paper itself if only they were freed from the risk of liability for such content? None? Not much? The New York Times didn’t build and sustain its reputation on the slapdash publication of unedited ramblings by random commentators. But what about classifieds? Sure. There would be more classified ads, presumably. More to the point, newspapers would exert far less oversight over the classified ads, saving themselves the expense of moderating this one, small corner of their output.

There is a cost to traditional newspapers from being denied the extended protections of Section 230. But the effect is less third-party content in the parts of the paper over which they didn’t wish to exercise the same level of editorial control. If Section 230 is a “subsidy,” as its critics put it, then what it is subsidizing is the hosting of third-party speech.

The Internet would look vastly different if it were just the online reproduction of the offline world. If tech platforms were responsible for all third-party speech to the degree that newspapers are for op-eds, then they would likely moderate it to the same degree, making sure nothing that could expose them to liability was published. This means there would be far less third-party speech on the Internet.

In fact, it could be argued that it is smaller platforms that would be most affected by the repeal of Section 230 immunity. Without it, it is likely that only the biggest tech platforms would have the necessary resources to dedicate to content moderation in order to avoid liability.

Proposed Section 230 reforms will likely have unintended consequences in reducing third-party speech altogether, including conservative speech. For instance, a few bills have proposed only allowing moderation for reasons defined by statute if the platform has an “objectively reasonable belief” that the speech fits under such categories. This would likely open up tech platforms to lawsuits over the meaning of “objectively reasonable belief” that could deter them from wanting to host third-party speech altogether. Similarly, lawsuits for “selective enforcement” of a tech platform’s terms of service could lead them to either host less speech or change their terms of service.

This could actually exacerbate the issue of political bias. Allegedly anti-conservative tech platforms could respond to a “good faith” requirement in enforcing their terms of service by becoming explicitly biased. If the terms of service of a tech platform state grounds which would exclude conservative speech, a requirement of “good faith” enforcement of those terms of service will do nothing to prevent the bias.

Conclusion

Conservatives would do well to return to their first principles in the Section 230 debate. The Constitution’s First Amendment, respect for free markets and property rights, and appreciation for unintended consequences in changing tech platform incentives all caution against the current proposals to condition Section 230 immunity on platforms giving up editorial discretion. Whether or not tech platforms engage in anti-conservative bias, there’s nothing conservative about abdicating these principles for the sake of political expediency.

In the latest congressional hearing, purportedly analyzing Google’s “stacking the deck” in the online advertising marketplace, much of the opening statement and questioning by Senator Mike Lee and later questioning by Senator Josh Hawley focused on an episode of alleged anti-conservative bias by Google in threatening to demonetize The Federalist, a conservative publisher, unless it exercised a greater degree of control over its comments section. The senators connected this to Google’s “dominance,” arguing that it is only because Google’s ad services are essential that Google can dictate terms to a conservative website. A similar impulse motivates Section 230 reform efforts as well: allegedly anti-conservative online platforms wield their dominance to censor conservative speech, either through deplatforming or demonetization.

Before even getting into how political bias might be incorporated into antitrust analysis, though, it should be noted that there likely is no viable antitrust remedy. Even aside from the Section 230 debate, online platforms like Google are First Amendment speakers who have editorial discretion over their sites and apps, much like newspapers. An antitrust remedy compelling these companies to carry speech they disagree with would almost certainly violate the First Amendment.

But even aside from the First Amendment aspect of this debate, there is no easy way to incorporate concerns about political bias into antitrust. Perhaps the best way to understand this argument in the antitrust sense is as a non-price effects analysis. 

Political bias could be seen by end consumers as an important aspect of product quality. Conservatives have made the case that not only Google, but also Facebook and Twitter, have discriminated against conservative voices. The argument would then follow that consumer welfare is harmed when these dominant platforms leverage their control of the social media marketplace into the marketplace of ideas by censoring voices with whom they disagree. 

While this has theoretical plausibility, there are real practical difficulties. As Geoffrey Manne and I have written previously, in the context of incorporating privacy into antitrust analysis:

The Horizontal Merger Guidelines have long recognized that anticompetitive effects may “be manifested in non-price terms and conditions that adversely affect customers.” But this notion, while largely unobjectionable in the abstract, still presents significant problems in actual application. 

First, product quality effects can be extremely difficult to distinguish from price effects. Quality-adjusted price is usually the touchstone by which antitrust regulators assess prices for competitive effects analysis. Disentangling (allegedly) anticompetitive quality effects from simultaneous (neutral or pro-competitive) price effects is an imprecise exercise, at best. For this reason, proving a product-quality case alone is very difficult and requires connecting the degradation of a particular element of product quality to a net gain in advantage for the monopolist. 

Second, invariably product quality can be measured on more than one dimension. For instance, product quality could include both function and aesthetics: A watch’s quality lies in both its ability to tell time as well as how nice it looks on your wrist. A non-price effects analysis involving product quality across multiple dimensions becomes exceedingly difficult if there is a tradeoff in consumer welfare between the dimensions. Thus, for example, a smaller watch battery may improve its aesthetics, but also reduce its reliability. Any such analysis would necessarily involve a complex and imprecise comparison of the relative magnitudes of harm/benefit to consumers who prefer one type of quality to another.

Just as with privacy and other product qualities, the analysis becomes increasingly complex first when tradeoffs between price and quality are introduced, and then even more so when tradeoffs between what different consumer groups perceive as quality are added. In fact, it is more complex than privacy. All but the most exhibitionistic would prefer more to less privacy, all other things being equal. But with political media consumption, most would prefer to have more of what they want to read available, even if it comes at the expense of what others may want. There is no easy way to understand what consumer welfare means in a situation where one group’s preferences need to come at the expense of another’s in moderation decisions.

Consider the case of The Federalist again. The allegation is that Google is imposing its anti-conservative bias by “forcing” the website to clean up its comments section. The argument is that since The Federalist needs Google’s advertising money, it must play by Google’s rules. And since it did so, there is now one less avenue for conservative speech.

What this argument misses is the balance Google and other online services must strike as multi-sided platforms. The goal is to connect advertisers on one side of the platform to the users on the other. If a site wants to take advantage of the ad network, it seems inevitable that intermediaries like Google will need to create rules about what can and can’t be shown, or they run the risk of losing advertisers who don’t want to be associated with certain speech or conduct. For instance, most companies don’t want to be associated with racist commentary. Thus, they will take great pains to make sure they don’t sponsor or place ads in venues associated with racism. Online platforms connecting advertisers to potential consumers must take that into consideration.

Users, like those who frequent The Federalist, have unpriced access to content across those sites and apps which are part of ad networks like Google’s. Other models, like paid subscriptions (which The Federalist also has available), are also possible. But it isn’t clear that conservative voices or conservative consumers have been harmed overall by the option of unpriced access on one side of the platform, with advertisers paying on the other side. If anything, it seems the opposite is the case since conservatives long complained about legacy media having a bias and lauded the Internet as an opportunity to gain a foothold in the marketplace of ideas.

Online platforms like Google must balance the interests of users from across the political spectrum. If their moderation practices are too politically biased in one direction or another, users could switch to another online platform with one click or swipe. Assuming online platforms wish to maximize revenue, they will have a strong incentive to limit political bias in their moderation practices. The ease of switching to another platform that markets itself as more free speech-friendly, like Parler, shows entrepreneurs can take advantage of market opportunities if Google and other online platforms go too far with political bias.

While one could perhaps argue that the major online platforms are colluding to keep out conservative voices, this is difficult to square with the different moderation practices each employs, as well as the data suggesting that conservative voices are consistently among the most shared on Facebook.

Antitrust is not a cure-all law. Conservatives who normally understand this need to reconsider whether antitrust is really well-suited for litigating concerns about anti-conservative bias online. 

Twitter’s decision to begin fact-checking the President’s tweets caused a long-simmering distrust between conservatives and online platforms to boil over late last month. This has led some conservatives to ask whether Section 230, the ‘safe harbour’ law that protects online platforms from certain liability stemming from content posted on their websites by users, is allowing online platforms to unfairly target conservative speech. 

In response to Twitter’s decision, along with an Executive Order released by the President that attacked Section 230, Senator Josh Hawley (R – MO) offered a new bill targeting online platforms, the “Limiting Section 230 Immunity to Good Samaritans Act”. This would require online platforms to engage in “good faith” moderation according to clearly stated terms of service – in effect, restricting Section 230’s protections to online platforms deemed to have done enough to moderate content ‘fairly’.  

While seemingly a sensible standard, if enacted, this approach would violate the First Amendment as an unconstitutional condition on a government benefit, thereby undermining long-standing conservative principles and the ability of conservatives to be treated fairly online.

There is established legal precedent that Congress may not grant benefits on conditions that violate constitutionally protected rights. In Rumsfeld v. FAIR, the Supreme Court stated that a law that withheld funds from universities that did not allow military recruiters on campus would be unconstitutional if it constrained those universities’ First Amendment rights to free speech. Since the First Amendment protects the right to editorial discretion, including the right of online platforms to make their own decisions on moderation, Congress may not condition Section 230 immunity on platforms taking a certain editorial stance it has dictated.

Aware of this precedent, the bill attempts to circumvent the obstacle by taking away Section 230 immunity for issues unrelated to anti-conservative bias in moderation. Specifically, Senator Hawley’s bill attempts to condition immunity for platforms on having terms of service for content moderation, and making them subject to lawsuits if they do not act in “good faith” in policing them. 

It’s not even clear that the bill would do what Senator Hawley wants it to. The “good faith” standard only appears to apply to the enforcement of an online platform’s terms of service. It can’t, under the First Amendment, actually dictate what those terms of service say. So an online platform could, in theory, explicitly state in its terms of service that it considers some forms of conservative speech to be “hate speech” it will not allow.

Mandating terms of service on content moderation is arguably akin to disclosures like labelling requirements, because it makes clear to platforms’ customers what they’re getting. There are, however, some limitations under the commercial speech doctrine as to what government can require. Under National Institute of Family & Life Advocates v. Becerra, a requirement for terms of service outlining content moderation policies would be upheld unless “unjustified or unduly burdensome.” A disclosure mandate alone, then, would likely not be unconstitutional.

But it is clear from the statutory definition of “good faith” that Senator Hawley is trying to overwhelm online platforms with lawsuits on the grounds that they have enforced these rules selectively and therefore not in “good faith”.

These “selective enforcement” lawsuits would make it practically impossible for platforms to moderate content at all, because they would open them up to being sued for any moderation, including moderation completely unrelated to any purported anti-conservative bias. Any time a YouTuber was aggrieved about a video being pulled down as too sexually explicit, for example, they could file suit and demand that YouTube release information on whether all other similarly situated users were treated the same way. Any time a post was flagged on Facebook, for example for engaging in online bullying or for spreading false information, the same situation could result.

This would end up requiring courts to act as the arbiter of decency and truth in order to even determine whether online platforms are “selectively enforcing” their terms of service.

Threatening liability for all third-party content is designed to force online platforms to give up moderating content on a perceived political basis. The result will be far less content moderation across a whole range of other areas. It is precisely this scenario that Section 230 was designed to prevent, in order to encourage platforms to moderate things like pornography that would otherwise proliferate on their sites, without exposing themselves to endless legal challenge.

It is likely that this would be unconstitutional as well. Forcing online platforms to choose between exercising their First Amendment rights to editorial discretion and retaining the benefits of Section 230 is exactly what the “unconstitutional conditions” jurisprudence is about. 

This is why conservatives have long argued the government has no business compelling speech. They opposed the “fairness doctrine,” which required that radio stations provide a “balanced discussion” and in practice allowed courts or federal agencies to determine content, until the FCC repealed it in 1987 during the Reagan administration. Later, President Bush appointee and then-FTC Chairman Tim Muris rejected a complaint against Fox News for its “Fair and Balanced” slogan, stating:

I am not aware of any instance in which the Federal Trade Commission has investigated the slogan of a news organization. There is no way to evaluate this petition without evaluating the content of the news at issue. That is a task the First Amendment leaves to the American people, not a government agency.

And recently conservatives were arguing that businesses like Masterpiece Cakeshop should not be compelled to speak against their will. All of these cases demonstrate that once the state starts to try to stipulate what views can and cannot be broadcast by private organisations, conservatives will be the ones who suffer.

Senator Hawley’s bill fails to acknowledge this. Worse, it fails to live up to the Constitution, and would trample over the freedom of speech the Constitution protects. Conservatives should reject it.

As the initial shock of the COVID quarantine wanes, the Techlash waxes again, bringing with it a raft of renewed legislative proposals to take on Big Tech. Prominent among these is the EARN IT Act (the Act), a bipartisan proposal to create a new national commission responsible for proposing best practices designed to mitigate the proliferation of child sexual abuse material (CSAM) online. The Act’s proposal is seemingly simple, but its fallout would be anything but.

Section 230 of the Communications Decency Act currently provides online services like Facebook and Google with a robust protection from liability that could arise as a result of the behavior of their users. Under the Act, this liability immunity would be conditioned on compliance with “best practices” that are produced by the new commission and adopted by Congress.  

Supporters of the Act believe that the best practices are necessary in order to ensure that platform companies effectively police CSAM, while critics assert that the Act is merely a backdoor for law enforcement to achieve its long-sought goal of defeating strong encryption.

The truth of EARN IT—and how best to police CSAM—is more complicated. Ultimately, Congress needs to be very careful not to exceed its institutional capabilities by allowing the new commission to venture into areas beyond its (and Congress’s) expertise.

More can be done about illegal conduct online

On its face, conditioning Section 230’s liability protections on certain platform conduct is not necessarily objectionable. There is undoubtedly some abuse of services online, and it is also entirely possible that the incentives for finding and policing CSAM are not perfectly aligned with other conflicting incentives private actors face. It is, of course, first the responsibility of the government to prevent crime, but it is also consistent with past practice to expect private actors to assist such policing when feasible. 

By the same token, an immunity shield is necessary in some form to facilitate user-generated communications and content at scale. Certainly in 1996 (when Section 230 was enacted), firms facing conflicting liability standards required some degree of immunity in order to launch their services. Today, the control of runaway liability remains important as billions of user interactions take place on platforms daily. Relatedly, the liability shield also operates as a way to promote Good Samaritan self-policing—a measure that surely helps avoid actual censorship by governments, as opposed to the spurious claims made by those like Senator Hawley.

In this context, the Act is ambiguous. It creates a commission composed of a fairly wide cross-section of interested parties—from law enforcement, to victims, to platforms, to legal and technical experts—to recommend best practices. That hardly seems a bad thing, as more minds considering how to design a uniform approach to controlling CSAM would be beneficial—at least theoretically.

In practice, however, there are real pitfalls to imbuing any group of such thinkers—especially ones selected by political actors—with an actual or de facto final say over such practices. Much of this domain will continue to be mercurial, the rules necessary for one type of platform may not translate well into general principles, and it is possible that a public board will make recommendations that quickly tax Congress’s institutional limits. To the extent possible, Congress should be looking at ways to encourage private firms to work together to develop best practices in light of their unique knowledge about their products and their businesses. 

In fact, Facebook has already begun experimenting with an analogous idea in its recently announced Oversight Board. There, Facebook is developing a governance structure by giving the Oversight Board the ability to review content moderation decisions on the Facebook platform. 

Insofar as the commission created by the Act works to create best practices that align the incentives of firms with the removal of CSAM, it has a lot to offer. Yet a better solution than the Act would be for Congress to establish policy that works with the private processes already in development.

Short of a more ideal solution, it is critical, however, that the Act establish the boundaries of the commission’s remit very clearly and keep it from venturing into technical areas outside of its expertise. 

The complicated problem of encryption (and technology)

The Act has a major problem insofar as the commission has a fairly open-ended remit to recommend best practices, and this liberality can ultimately result in dangerous unintended consequences.

The Act only calls for two out of nineteen members to have some form of computer science background. A panel of non-technical experts should not design any technology—encryption or otherwise. 

To be sure, there are some interesting proposals to facilitate access to encrypted materials (notably, multi-key escrow systems and self-escrow). But such recommendations are beyond the scope of what the commission can responsibly proffer.

If Congress proceeds with the Act, it should put an explicit prohibition in the law preventing the new commission from recommending rules that would interfere with the design of complex technology, such as by recommending that encryption be weakened to provide access to law enforcement, mandating particular network architectures, or modifying the technical details of data storage.

Congress is right to consider whether there is better policy to be had for aligning the incentives of the platforms with the deterrence of CSAM—including possible conditional access to Section 230’s liability shield. But just because there is a policy balance to be struck between policing CSAM and platform liability protection doesn’t mean that the new commission is suited to vetting, adopting and updating technical standards – it clearly isn’t. Conversely, to the extent that encryption and similarly complex technologies could be subject to broad policy change, it should be through an explicit and considered democratic process, and not as a by-product of the Act.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Jacob Grier (freelance writer and spirits consultant in Portland, Oregon, and the author of The Rediscovery of Tobacco: Smoking, Vaping, and the Creative Destruction of the Cigarette).]

The COVID-19 pandemic and the shutdown of many public-facing businesses have resulted in many sudden shifts in demand for common goods. The demand for hand sanitizer has drastically increased for hospitals, businesses, and individuals. At the same time, demand for distilled spirits has fallen substantially, as the closure of bars, restaurants, and tasting rooms has cut craft distillers off from their primary buyers. Since ethanol is a key ingredient in both spirits and sanitizer, this situation presents an obvious opportunity for distillers to shift their production from the former to the latter. Hundreds of distilleries have made this transition, but it has not been without obstacles. Some of these reflect a real scarcity of needed supplies, but other constraints have been externally imposed by government regulations and the tax code.

Producing sanitizer

The World Health Organization provides guidelines and recipes for locally producing hand sanitizer. The relevant formulation for distilleries calls for only four ingredients: high-proof ethanol (96%), hydrogen peroxide (3%), glycerol (98%), and sterile distilled or boiled water. Distilleries are well-positioned to produce or obtain ethanol and water. Glycerol is used in only small amounts and does not currently appear to be a substantial constraint on production. Hydrogen peroxide is harder to come by, but distilleries are adapting and cooperating to ensure supply. Skip Tognetti, owner of Letterpress Distilling in Seattle, Washington, reports that one local distiller obtained a drum of 34% hydrogen peroxide, which stretches a long way when diluted to a concentration of 3%. Local distillers have been sharing this drum so that they can all produce sanitizer.
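To give a rough sense of how far such a drum stretches, here is a minimal back-of-the-envelope sketch in Python of the standard dilution arithmetic (C1 × V1 = C2 × V2). The 55-gallon drum size is my assumption for illustration only; the post does not say how large the drum actually was.

# Back-of-the-envelope dilution arithmetic (C1 * V1 = C2 * V2).
# The 55-gallon drum size is an assumption for illustration only;
# the text above does not say how large the drum actually was.

STOCK_CONCENTRATION = 0.34   # 34% hydrogen peroxide in the drum
TARGET_CONCENTRATION = 0.03  # the 3% peroxide called for by the WHO recipe
DRUM_VOLUME_GALLONS = 55     # assumed drum size

# Total volume of 3% solution obtainable from the full drum:
diluted_volume = DRUM_VOLUME_GALLONS * STOCK_CONCENTRATION / TARGET_CONCENTRATION
print(f"One drum yields roughly {diluted_volume:.0f} gallons of 3% solution")

# Conversely, the amount of 34% stock needed for a 10-liter batch of 3% solution:
batch_liters = 10
stock_needed = batch_liters * TARGET_CONCENTRATION / STOCK_CONCENTRATION
print(f"A {batch_liters} L batch of 3% solution needs about {stock_needed:.2f} L of 34% stock")

In other words, the stock dilutes roughly elevenfold, which is why a single drum can supply several distilleries at once.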

Another constraint is finding containers in which to put the finished product. Not all containers are suitable for holding high-proof alcoholic solutions, and supplies of those that are recommended for sanitizer are scarce. The fact that many of these bottles are produced in China has reportedly also limited the supply. Distillers are therefore having to get creative; Tognetti reports looking into shampoo bottles, and in Chicago distillers have re-purposed glass beer growlers. For informal channels, some distillers have allowed consumers to bring their own containers to fill with sanitizer for personal use. Food and Drug Administration labeling requirements have also prevented the use of travel-size bottles, since the bottles are too small to display the necessary information.

The raw materials for producing ethanol are also coming from some unexpected sources. Breweries are typically unable to produce alcohol at high enough proof for sanitizer, but multiple breweries in Chicago are donating beer that distilleries can bring up to the required purity. Beer giant Anheuser-Busch is also producing sanitizer with the ethanol removed from its alcohol-free beers.

In many cases, the sanitizer is donated or sold at low-cost to hospitals and other essential services, or to local consumers. Online donations have helped to fund some of these efforts, and at least one food and beverage testing lab has stepped up to offer free testing to breweries and distilleries producing sanitizer to ensure compliance with WHO guidelines. Distillers report that the regulatory landscape has been somewhat confusing in recent weeks, and posts in a Facebook group have provided advice for how to get through the FDA’s registration process. In general, distillers going through the process report that agencies have been responsive. Tom Burkleaux of New Deal Distilling in Portland, Oregon says he “had to do some mighty paperwork,” but that the FDA and the Oregon Board of Pharmacy were both quick to process applications, with responses coming in just a few hours or less.

In general, the redirection of craft distilleries to producing hand sanitizer is an example of private businesses responding to market signals and the evident challenges of the health crisis to produce much-needed goods; in some cases, sanitizer represents one of their only sources of revenue during the shutdown, providing a lifeline for small businesses. The Distilled Spirits Council currently lists nearly 600 distilleries making sanitizer in the United States.

There is one significant obstacle that has hindered the production of sanitizer, however: an FDA requirement that distilleries obtain extra ingredients to denature their alcohol.

Denaturing sanitizer

According to the WHO, the four ingredients mentioned above are all that are needed to make sanitizer. In fact, the WHO specifically notes that in most circumstances it is inadvisable to add anything else: “it is not recommended to add any bittering agents to reduce the risk of ingestion of the handrubs” except in cases where there is a high probability of accidental ingestion. Further, “[…] there is no published information on the compatibility and deterrent potential of such chemicals when used in alcohol-based handrubs to discourage their abuse. It is important to note that such additives may make the products toxic and add to production costs.”

Denaturing agents are used to render alcohol either too bitter or too toxic to consume, deterring abuse by adults or accidental ingestion by children. In ordinary circumstances, there are valid reasons to denature sanitizer. In the current pandemic, however, the denaturing requirement is a significant bottleneck in production.

The federal Alcohol and Tobacco Tax and Trade Bureau (TTB) is the primary agency regulating alcohol production in the United States. The TTB took action early to encourage distilleries to produce sanitizer, officially releasing guidance on March 18 instructing them that they are free to commence production without prior authorization or formula approval, so long as they are making sanitizer in accordance with WHO guidelines. On March 23, the FDA issued its own emergency authorization of hand sanitizer production; unlike the WHO, FDA guidance does require the use of denaturants. As a result, on March 26 the TTB issued new guidance to be consistent with the FDA.

Under current rules, only sanitizer made with denatured alcohol is exempt from the federal excise tax on beverage alcohol. Federal excise taxes begin at $2.70 per proof gallon for low-volume distilleries and reach up to $13.50 per proof gallon, significantly increasing the cost of producing hand sanitizer; state excise taxes can raise these costs even higher.

More importantly, denaturing agents are scarce. In a Twitter thread on March 25, Tognetti noted the difficulty of obtaining them:

To be clear, if I didn’t have to track down denaturing agents (there are several, but isopropyl alcohol is the most common), I could turn out 200 gallons of finished hand sanitizer TODAY.

(As an additional concern, the Distilled Spirits Council notes that the extremely bitter or toxic nature of denaturing agents may impose additional costs on distillers given the need to thoroughly cleanse them from their equipment.)

Congress attempted to address these concerns in the CARES Act, the coronavirus relief package. Section 2308 explicitly waives the federal excise tax on distilled spirits used for the production of sanitizer; however, it leaves the formula specification in the hands of the FDA. Unless the agency revises its guidance, production in the US will be constrained by the requirement to add denaturing agents to the plentiful supply of ethanol, or distilleries will risk being targeted with enforcement actions if they produce perfectly usable sanitizer without denaturing their alcohol.

Local distilleries provide agile production capacity

In recent days, larger spirits producers including Pernod-Ricard, Diageo, and Bacardi have announced plans to produce sanitizer. Given their resources and economies of scale, they may end up taking over a significant part of the market. Yet small, local distilleries have displayed the agility necessary to rapidly shift production. It’s worth noting that many of these distilleries did not exist until fairly recently. According to the American Craft Spirits Association, there were fewer than 100 craft distilleries operating in the United States in 2005. By 2018, there were more than 1,800. This growth is the result of changing consumer interests, but also the liberalization of state and local laws to permit distilleries and tasting rooms. That many of these distilleries have the capacity to produce sanitizer in a time of emergency is a welcome, if unintended, consequence of this liberalization.

[Note: A group of 50 academics and 27 organizations, including both myself and ICLE, recently released a statement of principles for lawmakers to consider in discussions of Section 230.]

In a remarkable ruling issued earlier this month, the Third Circuit Court of Appeals held in Oberdorf v. Amazon that, under Pennsylvania products liability law, Amazon could be found liable for a third party vendor’s sale of a defective product via Amazon Marketplace. This ruling comes in the context of Section 230 of the Communications Decency Act, which is broadly understood as immunizing platforms against liability for harmful conduct posted to their platforms by third parties (Section 230 purists may object to my use of “platform” as an approximation for the statute’s term of “interactive computer service”; I address this concern by acknowledging it with this parenthetical). This immunity has long been a bedrock principle of Internet law; it has also long been controversial; and those controversies are very much at the fore of discussion today.

The response to the opinion has been mixed, to say the least. Eric Goldman, for instance, has asked “are we at the end of online marketplaces?,” suggesting that they “might in the future look like a quaint artifact of the early 21st century.” Kate Klonick, on the other hand, calls the opinion “a brilliant way of both holding tech responsible for harms they perpetuate & making sure we preserve free speech online.”

My own inclination is that both Eric and Kate overstate their respective positions – though neither without reason. The facts of Oberdorf cabin the effects of the holding both to Pennsylvania law and to situations where the platform cannot identify the seller. This suggests that the effects will be relatively limited. 

But, as I explore in this post, the opinion does elucidate a particular and problematic feature of section 230: that it can be used as a liability shield for harmful conduct. The judges in Oberdorf seem ill-inclined to extend Section 230’s protections to a platform that can easily be used by bad actors as a liability shield. Riffing on this concern, I argue below that Section 230 immunity should be proportional to platforms’ ability to reasonably identify speakers using their platforms to engage in harmful speech or conduct.

This idea is developed in more detail in the last section of this post – including a response to the obvious (and overwrought) objections to it. But first, the post offers some background on Section 230, the Oberdorf and related cases, the Third Circuit’s analysis in Oberdorf, and the recent debates about Section 230.

Section 230

“Section 230” refers to a portion of the Communications Decency Act that was added to the Communications Act by the 1996 Telecommunications Act, codified at 47 U.S.C. 230. (NB: that’s a sentence that only a communications lawyer could love!) It is widely recognized as – and discussed even by those who disagree with this view as – having been critical to the growth of the modern Internet. As Jeff Kosseff labels it in his recent book, the key provision of section 230 comprises the “26 words that created the Internet.” That section, 230(c)(1), states that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” (For those not familiar with it, Kosseff’s book is worth a read – or for the Cliff’s Notes version see here, here, here, here, here, or here.)

Section 230 was enacted to do two things. First, section (c)(1) makes clear that platforms are not liable for user-generated content. In other words, if a user of Facebook, Amazon, the comments section of a Washington Post article, a restaurant review site, a blog that focuses on the knitting of cat-themed sweaters, or any other “interactive computer service,” posts something for which that user may face legal liability, the platform hosting that user’s speech does not face liability for that speech. 

And second, section (c)(2) makes clear that platforms are free to moderate content uploaded by their users, and that they face no liability for doing so. This section was added precisely to repudiate a case that had held that once a platform (in that case, Prodigy) decided to moderate user-generated content, it undertook an obligation to do so. That case meant that platforms faced a Hobson’s choice: either don’t moderate content and don’t risk liability, or moderate all content and face liability for failure to do so well. There was no middle ground: a platform couldn’t say, for instance, “this one post is particularly problematic, so we are going to take it down – but this doesn’t mean that we are going to pervasively moderate content.”

Together, these two provisions stand generally for the proposition that online platforms are not liable for content created by their users, but they are free to moderate that content without facing liability for doing so. It recognized, on the one hand, that it was impractical (i.e., the Internet economy could not function) to require that platforms moderate all user-generated content, so section (c)(1) says that they don’t need to; but, on the other hand, it recognized that it is desirable for platforms to moderate problematic content to the best of their ability, so section (c)(2) says that they won’t be punished (i.e., lose the immunity granted by section (c)(1)) if they voluntarily elect to moderate content.

Section 230 is written in broad – and has been interpreted by the courts in even broader – terms. Section (c)(1) says that platforms cannot be held liable for the content generated by their users, full stop. The only exceptions are for copyrighted content and content that violates federal criminal law. There is no “unless it is really bad” exception, or a “the platform may be liable if the user-generated content causes significant tangible harm” exception, or an “unless the platform knows about it” exception, or even an “unless the platform makes money off of and actively facilitates harmful content” exception. So long as the content is generated by the user (not by the platform itself), Section 230 shields the platform from liability. 

Oberdorf v. Amazon

This background leads us to the Third Circuit’s opinion in Oberdorf v. Amazon. The opinion is remarkable because it is one of only a few cases in which a court has, despite Section 230, found a platform liable for the conduct of a third party facilitated through the use of that platform. 

Prior to the Third Circuit’s recent opinion, the best-known such case was the 9th Circuit’s Model Mayhem opinion. In that case, the court found that Model Mayhem, a website that helps match models with modeling jobs, had a duty to warn models about individuals who were known to be using the website to find women to sexually assault.

It is worth spending another moment on the Model Mayhem opinion before returning to the Third Circuit’s Oberdorf opinion. The crux of the 9th Circuit’s opinion in the Model Mayhem case was that the state of Florida (where the assaults occurred) has a duty-to-warn law, which creates a duty between the platform and the user. This duty to warn was triggered by the case-specific fact that the platform had actual knowledge that two of its users were predatorily using the site to find women to assault. Once triggered, this duty to warn exists between the platform and the user. Because the platform faces liability directly for its failure to warn, it is not shielded by section 230 (which only shields the platform from liability for the conduct of the third parties using the platform to engage in harmful conduct). 

In its opinion, the Third Circuit offered a similar analysis – but in a much broader context. 

The Oberdorf case involves a defective dog leash sold to Ms. Oberdorf by a seller doing business as The Furry Gang on Amazon Marketplace. The leash malfunctioned, hitting Ms. Oberdorf in the face and causing permanent blindness in one eye. When she attempted to sue The Furry Gang, she discovered that they were no longer doing business on Amazon Marketplace – and that Amazon did not have sufficient information about their identity for Ms. Oberdorf to bring suit against them.

Undeterred, Ms. Oberdorf sued Amazon under Pennsylvania product liability law, arguing that Amazon was the seller of the defective leash and so was liable for her injuries. Part of Amazon’s defense was that the actual seller, The Furry Gang, was a user of their Marketplace platform – the sale resulted from the storefront generated by The Furry Gang and merely hosted by Amazon Marketplace. Under this theory, Section 230 would shield Amazon from liability for the sale that resulted from the seller’s user-generated storefront.

The Third Circuit judges had none of that argument. All three judges agreed that under Pennsylvania law, the products liability relationship existed between Ms. Oberdorf and Amazon, so Section 230 did not apply. The two-judge majority found Amazon liable to Ms. Oberdorf under this law – the dissenting judge would have found Amazon’s conduct insufficient as a basis for liability.

This opinion, in other words, follows in the footsteps of the Ninth Circuit’s Model Mayhem opinion in holding that state law creates a duty directly between the harmed user and the platform, and that that duty isn’t affected by Section 230. But Oberdorf is potentially much broader in impact than Model Mayhem. States are more likely to have broad product liability laws than duty-to-warn laws. Even more impactful, product liability laws are generally strict liability laws, whereas duty-to-warn laws are generally triggered by an actual knowledge requirement.

The Third Circuit’s Focus on Agency and Liability Shields

The understanding of Oberdorf described above is that it is the latest in a developing line of cases holding that claims based on state law duties that require platforms to protect users from third party harms can survive Section 230 defenses. 

But there is another, critical, issue in the background of the case that appears to have affected the court’s thinking – and that, I argue, should be a path forward for Section 230. The judges writing for the Third Circuit majority draw attention to

the extensive record evidence that Amazon fails to vet third-party vendors for amenability to legal process. The first factor [of analysis for application of the state’s products liability law] weighs in favor of strict liability not because The Furry Gang cannot be located and/or may be insolvent, but rather because Amazon enables third-party vendors such as The Furry Gang to structure and/or conceal themselves from liability altogether.

This is important for analysis under the Pennsylvania product liability law, which has a marketing chain provision that allows injured consumers to seek redress up the marketing chain if the direct seller of a defective product is insolvent or otherwise unavailable for suit. But the court’s language focuses on Amazon’s design of Marketplace and the ease with which Marketplace can be used by merchants as a liability shield. 

This focus is unsurprising: the law generally does not allow one party to shield another from liability without assuming liability for the shielded party’s conduct. Indeed, this is pretty basic vicarious liability, agency, first-year law school kind of stuff. It is unsurprising that judges would balk at an argument that Amazon could design its platform in a way that makes it impossible for harmed parties to sue a tortfeasor without Amazon in turn assuming liability for any potentially tortious conduct. 

Section 230 is having a bad day

As most who have read this far are almost certainly aware, Section 230 is a big, controversial, political mess right now. Politicians from Josh Hawley to Nancy Pelosi have suggested curtailing Section 230. President Trump just held his “Social Media Summit.” And countries around the world are imposing near-impossible obligations on platforms to remove or otherwise moderate potentially problematic content – obligations that are anathema to Section 230 as they increasingly reflect and influence discussions in the United States. 

To be clear, almost all of the ideas floating around about how to change Section 230 are bad. That is an understatement: they are potentially devastating to the Internet – both to the economic ecosystem and the social ecosystem that have developed and thrived largely because of Section 230.

To be clear, there is also a lot of really, disgustingly, problematic content online – and social media platforms, in particular, have facilitated a great deal of legitimately problematic conduct. But deputizing them to police that conduct and to make real-time decisions about speech that is impossible to evaluate in real time is not a solution to these problems. And to the extent that some platforms may be able to do these things, converting the novel capabilities of a few platforms into obligations for all would only serve to create entry barriers for smaller platforms and to stifle innovation.

This is why a group of 50 academics and 27 organizations released a statement of principles last week to inform lawmakers about key considerations to take into account when discussing how Section 230 may be changed. The purpose of these principles is to acknowledge that some change to Section 230 may be appropriate – may even be needed at this juncture – but that such changes should be careful and modest, considered so as not to disrupt the vast benefits for society that Section 230 has made possible and that it remains necessary to sustain.

The Third Circuit offers a Third Way on 230 

The Third Circuit’s opinion offers a modest way that Section 230 could be changed – and, I would say, improved – to address some of the real harms that it enables without undermining the important purposes that it serves. To wit, Section 230’s immunity could be attenuated by an obligation to facilitate the identification of users on that platform, subject to legal process, in proportion to the size and resources available to the platform, the technological feasibility of such identification, the foreseeability of the platform being used to facilitate harmful speech or conduct, and the expected importance (as defined from a First Amendment perspective) of speech on that platform.

In other words, if there are readily available ways to establish some form of identity for users – for instance, by email addresses on widely-used platforms, social media accounts, logs of IP addresses – and there is reason to expect that users of the platform could be subject to suit – for instance, because they’re engaged in commercial activities or the purpose of the platform is to provide a forum for speech that is likely to be legally actionable – then the platform needs to be able to provide reasonable information about speakers subject to legal action in order to avail itself of any Section 230 defense. Stated otherwise, platforms need to be able to reasonably comply with so-called unmasking subpoenas issued in the civil context to the extent such compliance is feasible for the platform’s size, sophistication, resources, &c.

An obligation such as this would have been at best meaningless and at worst devastating at the time Section 230 was adopted. But more than two decades later, the Internet is a very different place. Most users have online accounts – email addresses, social media profiles, &c – that can serve as some form of online identification.

More important, we now have evidence of a growing range of harmful conduct and speech that can occur online, and of platforms that use Section 230 as a shield to protect those engaging in such speech or conduct from litigation. Such speakers are bad actors who are clearly abusing Section 230 to facilitate bad conduct. They should not be able to do so.

Many of the traditional proponents of Section 230 will argue that this idea is a non-starter. Two of the obvious objections are that it would place a disastrous burden on platforms, especially start-ups and smaller platforms, and that it would stifle socially valuable anonymous speech. Both are valid concerns, but both are also accommodated by this proposal.

The concern that modest user-identification requirements would be disastrous to platforms made a great deal of sense in the early years of the Internet, when both the law and the technology around user identification were less developed. Today, there is a wide range of low-cost, off-the-shelf techniques to establish a user’s identity to some level of precision – from logging IP addresses, to requiring a valid email address with an established provider, to registration with an established social media identity, to SMS authentication. None of these is perfect; they present a range of costs and sophistication to implement, and they offer a range of ease of identification.

The proposal offered here is not that platforms must always be able to identify their speakers – it is better described as a requirement that they not deliberately act as a liability shield. Its requirement is that platforms implement reasonable identity technology in proportion to their size, sophistication, and the likelihood of harmful speech on their platforms. A small platform for exchanging bread recipes would be fine maintaining a log of usernames and IP addresses. A large, well-resourced platform hosting commercial activity (such as Amazon Marketplace) may be expected to establish a verified identity for the merchants it hosts. A forum known for hosting hate speech would be expected to keep better identification records – it is entirely foreseeable that its users would be subject to legal action. A forum of support groups for marginalized and disadvantaged communities would face a lower obligation than a forum of similar size and sophistication known for hosting legally actionable speech.
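To make the bread-recipe example concrete, here is a minimal sketch – purely illustrative, and not drawn from the Third Circuit’s opinion or from any actual platform – of the kind of lightweight record-keeping a small forum might use so that it could respond to a civil unmasking subpoena. The file name, field layout, and sample values are all hypothetical assumptions.

```python
# Illustrative sketch only: a hypothetical log a small platform might keep so
# that, if served with a civil unmasking subpoena, it could identify the
# account behind a given post. File name and fields are assumptions.
import csv
import datetime

LOG_PATH = "post_identity_log.csv"  # hypothetical log file


def record_post(username: str, registered_email: str, ip_address: str) -> None:
    """Append one identification record (timestamp, user, email, IP) per post."""
    timestamp = datetime.datetime.now(datetime.timezone.utc).isoformat()
    with open(LOG_PATH, "a", newline="") as f:
        csv.writer(f).writerow([timestamp, username, registered_email, ip_address])


# Example: a request handler would call this when accepting a new post.
record_post("bread_fan_42", "bread_fan_42@example.com", "203.0.113.7")
```

The point is not any particular implementation; it is that record-keeping of this sort is now trivially cheap for even the smallest platforms, which is what makes a proportional obligation plausible today in a way it was not in 1996.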

This proportionality approach also addresses the anonymous-speech concern. Anonymous speech is often of great social and political value. But anonymity can also be used for – and, as contemporary online discussion makes amply clear, can bring out the worst of – speech that is socially and politically destructive. Tying Section 230’s immunity to the nature of speech on a platform gives platforms an incentive to moderate speech – to make sure that anonymous speech is used for its good purposes while working to prevent its use for its lesser purposes. This is in line with one of the defining goals of Section 230.

The challenge, of course, has been how to do this without exposing platforms to potentially crippling liability if they fail to moderate speech effectively. This is why Section 230 took the approach that it did, allowing but not requiring moderation. This proposal’s user-identification requirement shifts that balance from “allowing but not requiring” to “encouraging but not requiring.” Platforms are under no legal obligation to moderate speech, but if they elect not to, they need to make reasonable efforts to ensure that users engaging in problematic speech can be identified by the parties harmed by that speech or conduct. In an era in which sites like 8chan expressly decline to maintain user logs in order to shield those engaged in known harmful speech, and Amazon Marketplace admits sellers who cannot be sued by injured consumers, this is a common-sense change to the law.

It would also likely have substantially the same effect as other proposals for Section 230 reform, but without the significant challenges those suggestions face. For instance, Danielle Citron & Ben Wittes have proposed that courts should give substantive meaning to Section 230’s “Good Samaritan” language in section (c)(2)’s subheading, or, in the alternative, that section (c)(1)’s immunity require that platforms “take[] reasonable steps to prevent unlawful uses of its services.” This approach is problematic on both First Amendment and process grounds, because it requires courts to evaluate the substantive content and speech decisions that platforms make. It effectively tasks platforms with doing the work of the courts in developing a (potentially platform-specific) law of content moderation – and threatens them with a loss of Section 230 immunity if they fail to do so effectively.

By contrast, this proposal would allow, and even encourage, platforms to engage in such moderation, but offers them a gentler, more binary, and procedurally focused safety valve for maintaining their Section 230 immunity. If a user engages in harmful speech or conduct and the platform can assist plaintiffs and courts in bringing legal action against that user, then the “moderation” process occurs in the courts through ordinary civil litigation.

To be sure, there are still some uncomfortable and difficult substantive questions – has a platform implemented reasonable identification technologies, is the speech on the platform of the sort that would be viewed as requiring (or otherwise justifying protecting) the speaker’s anonymity, and the like. But these are questions of a type that courts are accustomed to, if somewhat uncomfortable with, addressing. They are, for instance, the sort of issues that courts address in the context of civil unmasking subpoenas.

This distinction is demonstrated in the comparison between Sections 230 and 512. Section 512 is an exception to 230 for copyrighted materials that was put into place by the 1998 Digital Millennium Copyright Act. It takes copyrighted materials outside of the scope of Section 230 and requires platforms to put in place a “notice and takedown” regime in order to be immunized for hosting copyrighted content uploaded by users. This regime has proved controversial, among other reasons, because it effectively requires platforms to act as courts in deciding whether a given piece of content is subject to a valid copyright claim. The Citron/Wittes proposal effectively subjects platforms to a similar requirement in order to maintain Section 230 immunity; the identity-technology proposal, on the other hand, offers an intermediate requirement.

Indeed, the principal effect of this intermediate requirement is to maintain the pre-platform status quo. IRL, if one person says or does something harmful to another person, their recourse is in court. This is true in public and in private; it’s true if the harmful speech occurs on the street, in a store, in a public building, or a private home. If Donny defames Peggy in Hank’s house, Peggy sues Donny in court; she doesn’t sue Hank, and she doesn’t sue Donny in the court of Hank. To the extent that we think of platforms as the fora where people interact online – as the “place” of the Internet – this proposal is intended to ensure that those engaging in harmful speech or conduct online can be hauled into court by the aggrieved parties, and to facilitate the continued development of platforms without disrupting the functioning of this system of adjudication.

Conclusion

Section 230 is, and has long been, the most important and one of the most controversial laws of the Internet. It is increasingly under attack today from a disparate range of voices across the political and geographic spectrum — voices that would overwhelmingly reject Section 230’s pro-innovation treatment of platforms and in its place attempt to co-opt those platforms as government-compelled (and, therefore, government-controlled) content moderators.

In light of these demands, academics and organizations that understand the importance of Section 230, but also recognize the increasing pressures to amend it, have recently released a statement of principles for legislators to consider as they think about changes to Section 230.

Into this fray, the Third Circuit’s opinion in Oberdorf offers a potential change: making Section 230’s immunity for platforms proportional to their ability reasonably to identify speakers who use the platform to engage in harmful speech or conduct. This would restore the status quo ante, under which intermediaries and agents cannot be used as litigation shields without themselves assuming responsibility for any harmful conduct. This shielding effect was not an intended goal of Section 230, and it has been the cause of Section 230’s worst abuses. It was tolerated at the time Section 230 was adopted because user-identity requirements such as the one proposed here would not then have been technologically reasonable. But technology has changed, and today these requirements would impose only a moderate burden on platforms.

Yesterday was President Trump’s big “Social Media Summit,” where he got together with a number of right-wing firebrands to decry the power of Big Tech to censor conservatives online. According to the Wall Street Journal:

Mr. Trump attacked social-media companies he says are trying to silence individuals and groups with right-leaning views, without presenting specific evidence. He said he was directing his administration to “explore all legislative and regulatory solutions to protect free speech and the free speech of all Americans.”

“Big Tech must not censor the voices of the American people,” Mr. Trump told a crowd of more than 100 allies who cheered him on. “This new technology is so important and it has to be used fairly.”

Despite the simplistic narrative tying President Trump’s vision of the world to conservatism, there is nothing conservative about his views on the First Amendment and how it applies to social media companies.

I have noted in several places before that there is a conflict of visions when it comes to whether the First Amendment protects a negative or positive conception of free speech. For those unfamiliar with the distinction: it comes from philosopher Isaiah Berlin, who identified negative liberty as freedom from external interference, and positive liberty as freedom to do something, including having the power and resources necessary to do that thing. Discussions of the First Amendment’s protection of free speech often elide this distinction.

With respect to speech, the negative conception of liberty recognizes that individual property owners can control what is said on their property, for example. To force property owners to allow speakers/speech on their property that they don’t desire would actually be a violation of their liberty — what the Supreme Court calls “compelled speech.” The First Amendment, consistent with this view, generally protects speech from government interference (with very few, narrow exceptions), while allowing private regulation of speech (again, with very few, narrow exceptions).

Contrary to the original meaning of the First Amendment and the weight of Supreme Court precedent, President Trump’s view of the First Amendment is that it protects a positive conception of liberty — one under which the government, in order to facilitate its conception of “free speech,” has the right and even the duty to impose restrictions on how private actors regulate speech on their property (in this case, social media companies). 

But if Trump’s view were adopted, discretion as to what is necessary to facilitate free speech would be left to future presidents and congresses, undermining the bedrock conservative principle of the Constitution as a shield against government regulation, all falsely in the name of protecting speech. This is counter to the general approach of modern conservatism (but not, of course, necessarily Republicanism) in the United States, including that of many of President Trump’s own judicial and agency appointees. Indeed, it is actually more consistent with the views of modern progressives — especially within the FCC.

For instance, the current conservative bloc on the Supreme Court (over the dissent of the four liberal Justices) recently reaffirmed the view that the First Amendment applies only to state action in Manhattan Community Access Corp. v. Halleck. The opinion, written by Trump appointee Justice Brett Kavanaugh, states plainly that:

Ratified in 1791, the First Amendment provides in relevant part that “Congress shall make no law . . . abridging the freedom of speech.” Ratified in 1868, the Fourteenth Amendment makes the First Amendment’s Free Speech Clause applicable against the States: “No State shall make or enforce any law which shall abridge the privileges or immunities of citizens of the United States; nor shall any State deprive any person of life, liberty, or property, without due process of law . . . .” §1. The text and original meaning of those Amendments, as well as this Court’s longstanding precedents, establish that the Free Speech Clause prohibits only governmental abridgment of speech. The Free Speech Clause does not prohibit private abridgment of speech… In accord with the text and structure of the Constitution, this Court’s state-action doctrine distinguishes the government from individuals and private entities. By enforcing that constitutional boundary between the governmental and the private, the state-action doctrine protects a robust sphere of individual liberty. (Emphasis added).

Former Stanford Law dean and First Amendment scholar Kathleen Sullivan has summed up the very different approaches to free speech pursued by conservatives and progressives (insofar as they are represented by the “conservative” and “liberal” blocs on the Supreme Court):

In the first vision…, free speech rights serve an overarching interest in political equality. Free speech as equality embraces first an antidiscrimination principle: in upholding the speech rights of anarchists, syndicalists, communists, civil rights marchers, Maoist flag burners, and other marginal, dissident, or unorthodox speakers, the Court protects members of ideological minorities who are likely to be the target of the majority’s animus or selective indifference…. By invalidating conditions on speakers’ use of public land, facilities, and funds, a long line of speech cases in the free-speech-as-equality tradition ensures public subvention of speech expressing “the poorly financed causes of little people.” On the equality-based view of free speech, it follows that the well-financed causes of big people (or big corporations) do not merit special judicial protection from political regulation. And because, in this view, the value of equality is prior to the value of speech, politically disadvantaged speech prevails over regulation but regulation promoting political equality prevails over speech.

The second vision of free speech, by contrast, sees free speech as serving the interest of political liberty. On this view…, the First Amendment is a negative check on government tyranny, and treats with skepticism all government efforts at speech suppression that might skew the private ordering of ideas. And on this view, members of the public are trusted to make their own individual evaluations of speech, and government is forbidden to intervene for paternalistic or redistributive reasons. Government intervention might be warranted to correct certain allocative inefficiencies in the way that speech transactions take place, but otherwise, ideas are best left to a freely competitive ideological market.

The outcome of Citizens United is best explained as representing a triumph of the libertarian over the egalitarian vision of free speech. Justice Kennedy’s opinion for the Court, joined by Chief Justice Roberts and Justices Scalia, Thomas, and Alito, articulates a robust vision of free speech as serving political liberty; the dissenting opinion by Justice Stevens, joined by Justices Ginsburg, Breyer, and Sotomayor, sets forth in depth the countervailing egalitarian view. (Emphasis added).

President Trump’s views on the regulation of private speech are alarmingly consistent with those embraced by the Court’s progressives to “protect[] members of ideological minorities who are likely to be the target of the majority’s animus or selective indifference” — exactly the sort of conservative “victimhood” that Trump and his online supporters have somehow concocted to describe themselves. 

Trump’s views are also consistent with those of progressives who, ever since the Reagan FCC abolished the fairness doctrine in 1987, have consistently angled for its resurrection in some form, along with other policies inconsistent with the “free-speech-as-liberty” view. Thus, Democratic FCC Commissioner Jessica Rosenworcel takes a far more interventionist approach to private speech:

The First Amendment does more than protect the interests of corporations. As courts have long recognized, it is a force to support individual interest in self-expression and the right of the public to receive information and ideas. As Justice Black so eloquently put it, “the widest possible dissemination of information from diverse and antagonistic sources is essential to the welfare of the public.” Our leased access rules provide opportunity for civic participation. They enhance the marketplace of ideas by increasing the number of speakers and the variety of viewpoints. They help preserve the possibility of a diverse, pluralistic medium—just as Congress called for the Cable Communications Policy Act… The proper inquiry then, is not simply whether corporations providing channel capacity have First Amendment rights, but whether this law abridges expression that the First Amendment was meant to protect. Here, our leased access rules are not content-based and their purpose and effect is to promote free speech. Moreover, they accomplish this in a narrowly-tailored way that does not substantially burden more speech than is necessary to further important interests. In other words, they are not at odds with the First Amendment, but instead help effectuate its purpose for all of us. (Emphasis added).

Consistent with the progressive approach, this leaves discretion in the hands of “experts” (like Rosenworcel) to determine what needs to be done in order to protect the underlying value of free speech in the First Amendment through government regulation, even if that means compelling speech from private actors.

Trump’s view of what the First Amendment’s free speech protections entail when it comes to social media companies is inconsistent with the conception of the Constitution-as-guarantor-of-negative-liberty that conservatives have long embraced. 

Of course, this is not merely a “conservative” position; it is fundamental to the longstanding bipartisan approach to free speech generally and to the regulation of online platforms specifically. As a diverse group of 75 scholars and civil society groups (including ICLE) wrote yesterday in their “Principles for Lawmakers on Liability for User-Generated Content Online”:

Principle #2: Any new intermediary liability law must not target constitutionally protected speech.

The government shouldn’t require—or coerce—intermediaries to remove constitutionally protected speech that the government cannot prohibit directly. Such demands violate the First Amendment. Also, imposing broad liability for user speech incentivizes services to err on the side of taking down speech, resulting in overbroad censorship—or even avoid offering speech forums altogether.

As those principles suggest, the sort of platform regulation that Trump, et al. advocate — essentially a “fairness doctrine” for the Internet — is the opposite of free speech:

Principle #4: Section 230 does not, and should not, require “neutrality.”

Publishing third-party content online never can be “neutral.” Indeed, every publication decision will necessarily prioritize some content at the expense of other content. Even an “objective” approach, such as presenting content in reverse chronological order, isn’t neutral because it prioritizes recency over other values. By protecting the prioritization, de-prioritization, and removal of content, Section 230 provides Internet services with the legal certainty they need to do the socially beneficial work of minimizing harmful content.

The idea that social media should be subject to a nondiscrimination requirement — for which President Trump and others like Senator Josh Hawley have been arguing lately — is flatly contrary to Section 230, as well as to the First Amendment.

Conservatives upset about “social media discrimination” need to think hard about whether they really want to adopt this sort of position out of convenience, when the tradition with which they align rejects it — rightly — in nearly all other venues. Even if you believe that Facebook, Google, and Twitter are trying to make it harder for conservative voices to be heard (despite all evidence to the contrary), it is imprudent to reject constitutional first principles for a temporary policy victory. In fact, there’s nothing at all “conservative” about an abdication of the traditional principle linking freedom to property for the sake of political expediency.

Neither side in the debate over Section 230 is blameless for the current state of affairs. Reform/repeal proponents have tended to offer ill-considered, irrelevant, or often simply incorrect justifications for amending or tossing Section 230. Meanwhile, many supporters of the law in its current form are reflexively resistant to any change and too quick to dismiss the more reasonable concerns that have been voiced.

Most of all, the urge to politicize this issue — on all sides — stands squarely in the way of any sensible discussion and thus of any sensible reform.
