Archives For Immunity

As the initial shock of the COVID quarantine wanes, the Techlash waxes again, bringing with it a raft of renewed legislative proposals to take on Big Tech. Prominent among these is the EARN IT Act (the Act), a bipartisan proposal to create a new national commission responsible for proposing best practices designed to mitigate the proliferation of child sexual abuse material (CSAM) online. The Act's proposal is seemingly simple, but its fallout would be anything but.

Section 230 of the Communications Decency Act currently provides online services like Facebook and Google with robust protection from liability that could arise from the behavior of their users. Under the Act, this immunity would be conditioned on compliance with "best practices" produced by the new commission and adopted by Congress.

Supporters of the Act believe that the best practices are necessary to ensure that platform companies effectively police CSAM. Critics, meanwhile, assert that the Act is merely a backdoor for law enforcement to achieve its long-sought goal of defeating strong encryption.

The truth of EARN IT—and how best to police CSAM—is more complicated. Ultimately, Congress needs to be very careful not to exceed its institutional capabilities by allowing the new commission to venture into areas beyond its (and Congress’s) expertise.

More can be done about illegal conduct online

On its face, conditioning Section 230's liability protections on certain platform conduct is not necessarily objectionable. There is undoubtedly some abuse of services online, and it is entirely possible that platforms' incentives to find and police CSAM are imperfectly aligned with, and sometimes in conflict with, the other incentives these private actors face. It is, of course, first the responsibility of the government to prevent crime, but it is also consistent with past practice to expect private actors to assist such policing when feasible.

By the same token, an immunity shield is necessary in some form to facilitate user-generated communications and content at scale. Certainly in 1996 (when Section 230 was enacted), firms facing conflicting liability standards required some degree of immunity in order to launch their services. Today, controlling runaway liability remains important as billions of user interactions take place on platforms daily. Relatedly, the liability shield also operates as a way to promote Good Samaritan self-policing—a measure that surely helps avoid actual censorship by governments, as opposed to the spurious claims made by those like Senator Hawley.

In this context, the Act's merits are ambiguous. It creates a commission composed of a fairly wide cross-section of interested parties—from law enforcement, to victims, to platforms, to legal and technical experts—to recommend best practices. That hardly seems a bad thing, as more minds considering how to design a uniform approach to controlling CSAM would be beneficial—at least theoretically.

In practice, however, there are real pitfalls to imbuing any group of such thinkers—especially ones selected by political actors—with an actual or de facto final say over such practices. Much of this domain will continue to be mercurial, the rules necessary for one type of platform may not translate well into general principles, and it is possible that a public board will make recommendations that quickly tax Congress’s institutional limits. To the extent possible, Congress should be looking at ways to encourage private firms to work together to develop best practices in light of their unique knowledge about their products and their businesses. 

In fact, Facebook has already begun experimenting with an analogous idea in its recently announced Oversight Board: a governance structure in which an independent board has the ability to review content moderation decisions made on the Facebook platform.

Insofar as the commission created by the Act works to develop best practices that align the incentives of firms with the removal of CSAM, it has a lot to offer. Yet a better solution than the Act would be for Congress to establish policy that works with the private processes already in development.

Short of that more ideal solution, however, it is critical that the Act establish the boundaries of the commission's remit very clearly and keep it from venturing into technical areas outside of its expertise.

The complicated problem of encryption (and technology)

The Act has a major problem insofar as the commission has a fairly open-ended remit to recommend best practices, and that latitude could ultimately result in dangerous unintended consequences.

The Act calls for only two of the commission's nineteen members to have some form of computer science background. A panel of non-technical experts should not design any technology—encryption or otherwise.

To be sure, there are some interesting proposals to facilitate access to encrypted materials (notably, multi-key escrow systems and self-escrow). But such recommendations are beyond the scope of what the commission can responsibly proffer.

If Congress proceeds with the Act, it should put an explicit prohibition in the law preventing the new commission from recommending rules that would interfere with the design of complex technology, such as by recommending that encryption be weakened to provide access to law enforcement, mandating particular network architectures, or modifying the technical details of data storage.

Congress is right to consider whether better policy is to be had for aligning the incentives of the platforms with the deterrence of CSAM—including possible conditional access to Section 230's liability shield. But just because there is a policy balance to be struck between policing CSAM and platform liability protection doesn't mean that the new commission is suited to vetting, adopting, and updating technical standards; it clearly isn't. And to the extent that encryption and similarly complex technologies are to be subject to broad policy change, that change should come through an explicit and considered democratic process, not as a by-product of the Act.

For those who follow these things (and for those who don’t but should!), Eric Goldman just posted an excellent short essay on Section 230 immunity and account terminations.

Here’s the abstract:

An online provider’s termination of a user’s online account can be a major—and potentially even life-changing—event for the user. Account termination exiles the user from a virtual place the user wanted to be; termination disrupts any social network relationship ties in that venue, and prevents the user from sending or receiving messages there; and the user loses any virtual assets in the account, which could be anything from archived emails to accumulated game assets. The effects of account termination are especially acute in virtual worlds, where dedicated users may be spending a majority of their waking hours or have aggregated substantial in-game wealth. However, the problem arises in all online environments (including email, social networking and web hosting) where account termination disrupts investments made by users.

Because of the potentially significant consequences from online user account termination, user-rights advocates, especially in the virtual world context, have sought legal restrictions on online providers’ discretion to terminate users. However, these efforts are largely misdirected because of 47 U.S.C. §230(c)(2) (“Section 230(c)(2)”), a federal statutory immunity. This essay, written in conjunction with an April 2011 symposium at UC Irvine entitled “Governing the Magic Circle: Regulation of Virtual Worlds,” explains Section 230(c)(2)’s role in immunizing online providers’ decisions to terminate user accounts. It also explains why this immunity is sound policy.

But the meat of the essay (at least its normative part) is this:

Online user communities inevitably require at least some provider intervention. At times, users need “protection” from other users. The provider can give users self-help tools to reduce their reliance on the online provider’s intervention, but technological tools cannot ameliorate all community-damaging conduct by determined users. Eventually, the online provider needs to curb a rogue user’s behavior to protect the rest of the community. Alternatively, a provider may need to respond to users who are jeopardizing the site’s security or technical infrastructure. . . .  Section 230(c)(2) provides substantial legal certainty to online providers who police their premises and ensure the community’s stability when intervention is necessary.

* * *

Thus, marketplace incentives work unexpectedly well to discipline online providers from capriciously wielding their termination power. This is true even if many users face substantial nonrecoupable or switching costs, both financially and in terms of their social networks. Some users, both existing and prospective, can be swayed by the online provider’s capriciousness—and by the provider’s willingness to oust problem users who are disrupting the community. The online provider’s desire to keep these swayable users often can provide enough financial incentives for the online provider to make good choices.

Thus, broadly conceived, § 230(c)(2) removes legal regulation of an online provider’s account termination, making the marketplace the main governance mechanism over an online provider’s choices. Fortunately, the marketplace is effective enough to discipline those choices.

Eric doesn’t talk explicitly here about property rights and transaction costs, but that’s what he’s talking about. Well worth reading as a short, clear, informative introduction to this extremely important topic.