Archives for Section 230

Twitter’s decision to begin fact-checking the President’s tweets caused a long-simmering distrust between conservatives and online platforms to boil over late last month. This has led some conservatives to ask whether Section 230, the ‘safe harbour’ law that protects online platforms from certain liability stemming from content posted on their websites by users, is allowing online platforms to unfairly target conservative speech. 

In response to Twitter’s decision, and following an Executive Order from the President attacking Section 230, Senator Josh Hawley (R – MO) introduced a new bill targeting online platforms, the “Limiting Section 230 Immunity to Good Samaritans Act”. It would require online platforms to engage in “good faith” moderation according to clearly stated terms of service – in effect, restricting Section 230’s protections to platforms deemed to have done enough to moderate content ‘fairly’.

While this may seem a sensible standard, if enacted it would violate the First Amendment as an unconstitutional condition on a government benefit, undermining long-standing conservative principles and the ability of conservatives to be treated fairly online.

There is established legal precedent that Congress may not grant benefits on conditions that violate constitutionally protected rights. In Rumsfeld v. FAIR, the Supreme Court stated that a law withholding funds from universities that barred military recruiters from campus would be unconstitutional if it constrained those universities’ First Amendment right to free speech. Since the First Amendment protects the right to editorial discretion, including the right of online platforms to make their own moderation decisions, Congress may not condition Section 230 immunity on platforms adopting an editorial stance it has dictated.

Aware of this precedent, the bill attempts to circumvent the obstacle by removing Section 230 immunity over issues unrelated to anti-conservative bias in moderation. Specifically, Senator Hawley’s bill conditions platforms’ immunity on their having terms of service for content moderation, and exposes them to lawsuits if they do not act in “good faith” in enforcing those terms.

It’s not even clear that the bill would do what Senator Hawley wants it to. The “good faith” standard appears to apply only to the enforcement of an online platform’s terms of service; under the First Amendment, the bill cannot actually dictate what those terms say. An online platform could therefore, in theory, state explicitly in its terms of service that it considers some forms of conservative speech “hate speech” that it will not allow.

Mandating terms of service on content moderation is arguably akin to a disclosure requirement, like product labelling, because it makes clear to platforms’ customers what they’re getting. There are, however, some limits under the commercial speech doctrine on what government can require. Under National Institute of Family & Life Advocates v. Becerra, a requirement that platforms publish terms of service outlining their content moderation policies would be upheld unless “unjustified or unduly burdensome.” A disclosure mandate alone, then, would likely be constitutional.

But it is clear from the bill’s statutory definition of “good faith” that Senator Hawley intends to expose online platforms to a flood of lawsuits alleging that they have enforced their rules selectively and therefore not in “good faith”.

These “selective enforcement” lawsuits would make it practically impossible for platforms to moderate content at all, because they would open platforms up to suit over any moderation decision, including decisions completely unrelated to any purported anti-conservative bias. Any time a YouTuber was aggrieved that a video had been pulled down as too sexually explicit, for example, they could file suit and demand that YouTube release information on whether all similarly situated users were treated the same way. Any time a post was flagged on Facebook, whether for online bullying or for spreading false information, the same could follow.

This would end up requiring courts to act as the arbiter of decency and truth in order to even determine whether online platforms are “selectively enforcing” their terms of service.

Threatening liability for all third-party content is designed to force online platforms to give up moderating content on a perceived political basis. The result would be far less content moderation across a whole range of other areas. This is precisely the scenario Section 230 was designed to prevent: it encourages platforms to moderate things like pornography that would otherwise proliferate on their sites, without exposing themselves to endless legal challenge.

It is likely that this would be unconstitutional as well. Forcing online platforms to choose between exercising their First Amendment rights to editorial discretion and retaining the benefits of Section 230 is exactly what the “unconstitutional conditions” jurisprudence is about. 

This is why conservatives have long argued the government has no business compelling speech. They opposed the “fairness doctrine”, which required broadcasters to provide “balanced discussion” and in practice allowed courts and federal agencies to police content, until the FCC under President Reagan repealed it. Later, President Bush appointee and then-FTC Chairman Tim Muris rejected a complaint against Fox News over its “Fair and Balanced” slogan, stating:

I am not aware of any instance in which the Federal Trade Commission has investigated the slogan of a news organization. There is no way to evaluate this petition without evaluating the content of the news at issue. That is a task the First Amendment leaves to the American people, not a government agency.

And more recently, conservatives argued that businesses like Masterpiece Cakeshop should not be compelled to engage in expression against their will. All of these cases demonstrate that once the state starts trying to stipulate which views private organisations can and cannot broadcast, conservatives will be the ones who suffer.

Senator Hawley’s bill fails to acknowledge this. Worse, it fails to live up to the Constitution, and would trample the freedom of speech the Constitution protects. Conservatives should reject it.

As the initial shock of the COVID quarantine wanes, the Techlash waxes again, bringing with it a raft of renewed legislative proposals to take on Big Tech. Prominent among these is the EARN IT Act (the Act), a bipartisan proposal to create a new national commission responsible for proposing best practices designed to mitigate the proliferation of child sexual abuse material (CSAM) online. The Act’s proposal is seemingly simple, but its fallout would be anything but.

Section 230 of the Communications Decency Act currently provides online services like Facebook and Google with a robust protection from liability that could arise as a result of the behavior of their users. Under the Act, this liability immunity would be conditioned on compliance with “best practices” that are produced by the new commission and adopted by Congress.  

Supporters of the Act believe that the best practices are necessary to ensure that platform companies effectively police CSAM, while critics assert that the Act is merely a backdoor for law enforcement to achieve its long-sought goal of defeating strong encryption.

The truth of EARN IT—and how best to police CSAM—is more complicated. Ultimately, Congress needs to be very careful not to exceed its institutional capabilities by allowing the new commission to venture into areas beyond its (and Congress’s) expertise.

More can be done about illegal conduct online

On its face, conditioning Section 230’s liability protections on certain platform conduct is not necessarily objectionable. There is undoubtedly some abuse of services online, and it is entirely possible that platforms’ incentives to find and police CSAM are outweighed by the conflicting incentives they face. Preventing crime is, of course, first the responsibility of the government, but it is also consistent with past practice to expect private actors to assist such policing when feasible.

By the same token, an immunity shield is necessary in some form to facilitate user-generated communications and content at scale. Certainly in 1996 (when Section 230 was enacted), firms facing conflicting liability standards required some degree of immunity in order to launch their services. Today, the control of runaway liability remains important as billions of user interactions take place on platforms daily. Relatedly, the liability shield also operates to promote Good Samaritan self-policing—a measure that surely helps avoid actual censorship by governments, as opposed to the spurious claims made by those like Senator Hawley.

In this context, the Act is ambiguous. It creates a commission composed of a fairly wide cross-section of interested parties—from law enforcement, to victims, to platforms, to legal and technical experts—to recommend best practices. That hardly seems a bad thing, as more minds considering how to design a uniform approach to controlling CSAM would be beneficial—at least theoretically.

In practice, however, there are real pitfalls to imbuing any group of such thinkers—especially ones selected by political actors—with an actual or de facto final say over such practices. Much of this domain will continue to be mercurial, the rules necessary for one type of platform may not translate well into general principles, and it is possible that a public board will make recommendations that quickly tax Congress’s institutional limits. To the extent possible, Congress should be looking at ways to encourage private firms to work together to develop best practices in light of their unique knowledge about their products and their businesses. 

In fact, Facebook has already begun experimenting with an analogous idea in its recently announced Oversight Board. There, Facebook is developing a governance structure by giving the Oversight Board the ability to review content moderation decisions on the Facebook platform. 

Insofar as the commission created by the Act works toward best practices that align firms’ incentives with the removal of CSAM, it has a lot to offer. Yet a better solution would be for Congress to establish policy that works with the private processes already in development.

Short of that more ideal solution, however, it is critical that the Act clearly establish the boundaries of the commission’s remit and keep it from venturing into technical areas outside its expertise.

The complicated problem of encryption (and technology)

The Act has a major problem insofar as the commission has a fairly open-ended remit to recommend best practices, and this latitude could ultimately produce dangerous unintended consequences.

The Act calls for only two of the commission’s nineteen members to have some form of computer science background. A panel of non-technical experts should not design any technology—encryption or otherwise.

To be sure, there are some interesting proposals to facilitate access to encrypted materials (notably, multi-key escrow systems and self-escrow). But such recommendations are beyond the scope of what the commission can responsibly proffer.
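To see why, consider what even the simplest such design involves. The following is a minimal, purely illustrative sketch of the multi-key escrow idea in Python, using the open-source `cryptography` library; the scheme and every name in it are hypothetical, not drawn from any actual proposal.

```python
# Purely illustrative: a toy "multi-key escrow" scheme. A single symmetric
# content key is wrapped (encrypted) twice: once for the user and once for
# a hypothetical escrow agent, so the agent can decrypt without the user's
# private key. Requires the open-source `cryptography` package.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

OAEP = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Each party holds its own RSA key pair.
user_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
escrow_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# One symmetric key encrypts the content; it is then wrapped for each party.
content_key = Fernet.generate_key()
ciphertext = Fernet(content_key).encrypt(b"user content")
wrapped_for_user = user_key.public_key().encrypt(content_key, OAEP)
wrapped_for_escrow = escrow_key.public_key().encrypt(content_key, OAEP)

# The escrow path: the agent recovers the content key on its own,
# which is exactly the "exceptional access" property at issue.
recovered = escrow_key.decrypt(wrapped_for_escrow, OAEP)
assert Fernet(recovered).decrypt(ciphertext) == b"user content"
```

Even this toy version immediately raises the hard questions (who holds the escrow key, how it is secured, and what legal process unlocks it) that a commission with only two technically trained members is ill-equipped to answer.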

If Congress proceeds with the Act, it should put an explicit prohibition in the law preventing the new commission from recommending rules that would interfere with the design of complex technology, such as by recommending that encryption be weakened to provide access to law enforcement, mandating particular network architectures, or modifying the technical details of data storage.

Congress is right to consider whether there is better policy to be had for aligning platforms’ incentives with the deterrence of CSAM—including possible conditional access to Section 230’s liability shield. But just because there is a policy balance to be struck between policing CSAM and platform liability protection doesn’t mean that the new commission is suited to vetting, adopting and updating technical standards – it clearly isn’t. To the extent that encryption and similarly complex technologies are to be subject to broad policy change, it should be through an explicit and considered democratic process, not as a by-product of the Act.