
Section 230 principles for lawmakers and a note of caution as Trump convenes his “social media summit”

This morning a diverse group of more than 75 academics, scholars, and civil society organizations — including ICLE and several of its academic affiliates — published a set of seven “Principles for Lawmakers” on liability for user-generated content online, aimed at guiding discussions around potential amendments to Section 230 of the Communications Decency Act of 1996.

I have reproduced the principles below, and they are available online (along with the list of signatories) here.

Section 230 holds those who create content online responsible for that content and, controversially today, protects online intermediaries from liability for content generated by third parties except in specific circumstances. Advocates on both the political right and left have recently begun to argue more ardently for the repeal or at least reform of Section 230.

There is always good reason to consider whether decades-old laws, especially those aimed at rapidly evolving industries, should be updated to reflect both changed circumstances and new learning. But discussions over whether and how to reform (or repeal) Section 230 have, thus far, offered far more heat than light.

Indeed, later today President Trump will hold a “social media summit” at the White House to which he has apparently invited a number of right-wing political firebrands — but no Internet policy experts or scholars and no representatives of social media firms. Nothing about the event suggests it will produce — or even aim at — meaningful analysis of the issues. Trump himself has already concluded about social media platforms that “[w]hat they are doing is wrong and possibly illegal.” On the basis of that (legally baseless) conclusion, “a lot of things are being looked at right now.” This is not how good policy decisions are made. 

The principles we published today are intended to foster an environment in which discussion over these contentious questions may actually be fruitfully pursued. But they also sound a cautionary note. As we write in the preamble to the principles:

[W]e value the balance between freely exchanging ideas, fostering innovation, and limiting harmful speech. Because this is an exceptionally delicate balance, Section 230 reform poses a substantial risk of failing to address policymakers’ concerns and harming the Internet overall.

Neither side in the debate over Section 230 is blameless for the current state of affairs. Reform/repeal proponents have tended to offer ill-considered, irrelevant, or often simply incorrect justifications for amending or tossing Section 230. Meanwhile, many supporters of the law in its current form are reflexively resistant to any change and too quick to dismiss the more reasonable concerns that have been voiced.

Most of all, the urge to politicize this issue — on all sides — stands squarely in the way of any sensible discussion and thus of any sensible reform. 

As the diversity of signatories to these principles demonstrates, there is room for reasoned discussion among Section 230 advocates and skeptics alike. For some, these principles represent a significant move from their initial, hard-line positions, undertaken in the face of the very real risk that if they give an inch others will latch on to their concessions to take a mile. They should be commended for their willingness nevertheless to engage seriously with the issues — and, challenged with de-politicized, sincere, and serious discussion, to change their minds. 

Everyone who thinks seriously about the issues implicated by Section 230 can — or should — agree that it has been instrumental in the birth and growth of the Internet as we know it: both the immense good and the unintended bad. That recognition does not lead inexorably to one conclusion or another regarding the future of Section 230, however. 

Ensuring that what comes next successfully confronts the problems without negating the benefits starts with the recognition that both costs and benefits exist, and that navigating the trade-offs is a fraught endeavor that absolutely will not be accomplished by romanticized assumptions and simplistic solutions. Efforts to update Section 230 should not deny its past successes, ignore reasonable concerns, or co-opt the process to score costly political points.

But that’s just the start. It’s incumbent upon those seeking to reform Section 230 to offer ideas and proposals that reflect the reality and the complexity of the world they seek to regulate. The basic principles we published today offer a set of minimum reasonable guardrails for those efforts. Adherence to these principles would allow plenty of scope for potential reforms while helping to ensure that they don’t throw out the baby with the bathwater. 

Accepting, for example, the reality that a “neutral” presentation of content is impossible (Principle #4) and that platform moderation is complicated and also essential for civil discourse (Principle #3) means admitting that a preferred standard of content moderation is just that — a preference. It may be defensible to impose that preference on all platforms by operation of law, and these principles do not preclude such a position. But they do demand that such an opinion be rigorously defended. It is insufficient simply to call for “neutrality” or any other standard (e.g., “all ideological voices must be equally represented”) without making a valid case for what would be gained and why the inevitable costs and trade-offs would nevertheless be worthwhile.

All of us who drafted and/or signed these principles are willing to come to the table to discuss good-faith efforts to reform Section 230 that recognize and reasonably account for such trade-offs. It remains to be seen whether we can wrest the process from those who would use it to promote their static, unrealistic, and politicized vision at the expense of everyone else.


Liability for User-Generated Content Online:

Principles for Lawmakers

Policymakers have expressed concern about both harmful online speech and the content moderation practices of tech companies. Section 230, enacted as part of the bipartisan Communications Decency Act of 1996, says that Internet services, or “intermediaries,” are not liable for illegal third-party content except with respect to intellectual property, federal criminal prosecutions, communications privacy (ECPA), and sex trafficking (FOSTA). Of course, Internet services remain responsible for content they themselves create. 

As civil society organizations, academics, and other experts who study the regulation of user-generated content, we value the balance between freely exchanging ideas, fostering innovation, and limiting harmful speech. Because this is an exceptionally delicate balance, Section 230 reform poses a substantial risk of failing to address policymakers’ concerns and harming the Internet overall. We hope the following principles help any policymakers considering amendments to Section 230. 

Principle #1: Content creators bear primary responsibility for their speech and actions.

Content creators — including online services themselves — bear primary responsibility for their own content and actions. Section 230 has never interfered with holding content creators liable. Instead, Section 230 restricts only who can be liable for the harmful content created by others.

Law enforcement online is as important as it is offline. If policymakers believe existing law does not adequately deter bad actors online, they should (i) invest more in the enforcement of existing laws, and (ii) identify and remove obstacles to the enforcement of existing laws. Importantly, while anonymity online can certainly constrain the ability to hold users accountable for their content and actions, courts and litigants have tools to pierce anonymity. And in the rare situation where truly egregious online conduct simply isn’t covered by existing criminal law, the law could be expanded. But if policymakers want to avoid chilling American entrepreneurship, it’s crucial to avoid imposing criminal liability on online intermediaries or their executives for unlawful user-generated content.

Principle #2: Any new intermediary liability must not target constitutionally protected speech. 

The government shouldn’t require — or coerce — intermediaries to remove constitutionally protected speech that the government cannot prohibit directly. Such demands violate the First Amendment. Also, imposing broad liability for user speech incentivizes services to err on the side of taking down speech, resulting in overbroad censorship — or even to avoid offering speech forums altogether.

Principle #3: The law shouldn’t discourage Internet services from moderating content. 

To flourish, the Internet requires that site managers have the ability to remove legal but objectionable content — including content that would be protected under the First Amendment from censorship by the government. If Internet services could not prohibit harassment, pornography, racial slurs, and other lawful but offensive or damaging material, they couldn’t facilitate civil discourse. Even when Internet services have the ability to moderate content, their moderation efforts will always be imperfect given the vast scale of even relatively small sites and the speed with which content is posted. Section 230 ensures that Internet services can carry out this socially beneficial but error-prone work without exposing themselves to increased liability; penalizing them for imperfect content moderation or second-guessing their decision-making will only discourage them from trying in the first place. This vital principle should remain intact.

Principle #4: Section 230 does not, and should not, require “neutrality.” 

Publishing third-party content online can never be “neutral.” Indeed, every publication decision will necessarily prioritize some content at the expense of other content. Even an “objective” approach, such as presenting content in reverse chronological order, isn’t neutral because it prioritizes recency over other values. By protecting the prioritization, deprioritization, and removal of content, Section 230 provides Internet services with the legal certainty they need to do the socially beneficial work of minimizing harmful content.

Principle #5: We need a uniform national legal standard. 

Most Internet services cannot publish content on a state-by-state basis, so state-by-state variations in liability would force compliance with the most restrictive legal standard. In its current form, Section 230 prevents this dilemma by setting a consistent national standard — which includes potential liability under the uniform body of federal criminal law. Internet services, especially smaller companies and new entrants, would find it difficult, if not impossible, to manage the costs and legal risks of facing potential liability under state civil law, or of bearing the risk of prosecution under state criminal law. 

Principle #6: We must continue to promote innovation on the Internet.

Section 230 encourages innovation in Internet services, especially by smaller services and start-ups who need the most protection from potentially crushing liability. The law must continue to protect intermediaries not merely from liability, but from having to defend against excessive, often-meritless suits — what one court called “death by ten thousand duck-bites.” Without such protection, compliance, implementation, and litigation costs could strangle smaller companies before they even emerge, while larger, incumbent technology companies would be much better positioned to absorb these costs. Any proposal to reform Section 230 that is calibrated to what might be possible for the Internet giants will necessarily miscalibrate the law for smaller services.

Principle #7: Section 230 should apply equally across a broad spectrum of online services.

Section 230 applies to services that users never interact with directly. The further removed an Internet service — such as a DDOS protection provider or domain name registrar — is from an offending user’s content or actions, the more blunt its tools to combat objectionable content become. Unlike social media companies or other user-facing services, infrastructure providers cannot take measures like removing individual posts or comments. Instead, they can only shutter entire sites or services, thus risking significant collateral damage to inoffensive or harmless content. Requirements drafted with user-facing services in mind will likely not work for these non-user-facing services.
