Archives For torts

In what has become regularly scheduled programming on Capitol Hill, Facebook CEO Mark Zuckerberg, Twitter CEO Jack Dorsey, and Google CEO Sundar Pichai will be subject to yet another round of congressional grilling—this time, about the platforms’ content-moderation policies—during a March 25 joint hearing of two subcommittees of the House Energy and Commerce Committee.

The stated purpose of this latest bit of political theatre is to explore, as made explicit in the hearing’s title, “social media’s role in promoting extremism and misinformation.” Specific topics are expected to include proposed changes to Section 230 of the Communications Decency Act, heightened scrutiny by the Federal Trade Commission, and misinformation about COVID-19—the subject of new legislation introduced by Rep. Jennifer Wexton (D-Va.) and Sen. Mazie Hirono (D-Hawaii).

But while many in the Democratic majority argue that social media companies have not done enough to moderate misinformation or hate speech, it is a problem with no realistic legal fix. Any attempt to mandate removal of speech on grounds that it is misinformation or hate speech, either directly or indirectly, would run afoul of the First Amendment.

Much of the recent focus has been on misinformation spread on social media about the 2020 election and the COVID-19 pandemic. The memorandum for the March 25 hearing sums it up:

Facebook, Google, and Twitter have long come under fire for their role in the dissemination and amplification of misinformation and extremist content. For instance, since the beginning of the coronavirus disease of 2019 (COVID-19) pandemic, all three platforms have spread substantial amounts of misinformation about COVID-19. At the outset of the COVID-19 pandemic, disinformation regarding the severity of the virus and the effectiveness of alleged cures for COVID-19 was widespread. More recently, COVID-19 disinformation has misrepresented the safety and efficacy of COVID-19 vaccines.

Facebook, Google, and Twitter have also been distributors for years of election disinformation that appeared to be intended either to improperly influence or undermine the outcomes of free and fair elections. During the November 2016 election, social media platforms were used by foreign governments to disseminate information to manipulate public opinion. This trend continued during and after the November 2020 election, often fomented by domestic actors, with rampant disinformation about voter fraud, defective voting machines, and premature declarations of victory.

It is true that, despite social media companies’ efforts to label and remove false content and bar some of the biggest purveyors, there remains a considerable volume of false information on social media. But U.S. Supreme Court precedent consistently has limited government regulation of false speech to distinct categories like defamation, perjury, and fraud.

The Case of Stolen Valor

The court’s 2012 decision in United States v. Alvarez struck down as unconstitutional the Stolen Valor Act of 2005, which made it a federal crime to falsely claim to have earned a military medal. A four-justice plurality opinion written by Justice Anthony Kennedy and a two-justice concurrence both agreed that a statement’s being false does not, by itself, exclude it from First Amendment protection.

Kennedy’s opinion noted that while the government may impose penalties for false speech connected with the legal process (perjury or impersonating a government official), with receiving a benefit (fraud), or with harming someone’s reputation (defamation), the First Amendment does not sanction penalties for false speech in and of itself. The plurality exhibited particular skepticism toward the notion that government actors could be entrusted as a “Ministry of Truth,” empowered to determine what categories of false speech should be made illegal:

Permitting the government to decree this speech to be a criminal offense, whether shouted from the rooftops or made in a barely audible whisper, would endorse government authority to compile a list of subjects about which false statements are punishable. That governmental power has no clear limiting principle. Our constitutional tradition stands against the idea that we need Oceania’s Ministry of Truth… Were this law to be sustained, there could be an endless list of subjects the National Government or the States could single out… Were the Court to hold that the interest in truthful discourse alone is sufficient to sustain a ban on speech, absent any evidence that the speech was used to gain a material advantage, it would give government a broad censorial power unprecedented in this Court’s cases or in our constitutional tradition. The mere potential for the exercise of that power casts a chill, a chill the First Amendment cannot permit if free speech, thought, and discourse are to remain a foundation of our freedom. [EMPHASIS ADDED]

As noted in the opinion, declaring false speech illegal constitutes a content-based restriction subject to “exacting scrutiny.” Applying that standard, the court found “the link between the Government’s interest in protecting the integrity of the military honors system and the Act’s restriction on the false claims of liars like respondent has not been shown.” 

While finding that the government “has not shown, and cannot show, why counterspeech would not suffice to achieve its interest,” the plurality suggested a more narrowly tailored solution: the government could simply publish an online database of Medal of Honor recipients. In other words, the government could overcome the problem of false speech by promoting true speech.

In 2013, President Barack Obama signed an updated version of the Stolen Valor Act that limited its penalties to situations where a misrepresentation is shown to result in receipt of some kind of benefit. That places the false speech in the category of fraud, consistent with the Alvarez opinion.

A Social Media Ministry of Truth

Applying the Alvarez standard to social media, the government could (and already does) promote its interest in public health or election integrity by publishing true speech through official channels. But there is little reason to believe the government at any level could lawfully restrict access to misinformation. Anything approaching an outright ban on speech the government deems false would not be the most narrowly tailored means of dealing with such speech, and it would inevitably chill even true speech.

The analysis doesn’t change if the government instead places Big Tech itself in the position of Ministry of Truth. Some propose changes to Section 230, which currently immunizes social media companies from liability for user speech (with limited exceptions), regardless of what moderation policies the platform adopts. A hypothetical change might condition Section 230’s liability shield on platforms agreeing to moderate certain categories of misinformation. But that would still place the government in the position of coercing platforms to take down speech.

Even the “fix” of making social media companies liable for user speech they amplify through promotions on the platform, as proposed by Sen. Mark Warner’s (D-Va.) SAFE TECH Act, runs into First Amendment concerns. The aim of the bill is to treat sponsored content as speech made by the platform itself, thus opening the platform to liability for the underlying misinformation. But any such liability still would be limited to categories of speech that fall outside First Amendment protection, like fraud or defamation. This would not appear to include most of the types of misinformation on COVID-19 or election security that animate the current legislative push.

There is no way for the government to regulate misinformation, in and of itself, consistent with the First Amendment. Big Tech companies are free to develop their own policies against misinformation, but the government may not force them to do so. 

Extremely Limited Room to Regulate Extremism

The Big Tech CEOs are also almost certain to be grilled about the use of social media to spread “hate speech” or “extremist content.” The memorandum for the March 25 hearing sums it up like this:

Facebook executives were repeatedly warned that extremist content was thriving on their platform, and that Facebook’s own algorithms and recommendation tools were responsible for the appeal of extremist groups and divisive content. Similarly, since 2015, videos from extremists have proliferated on YouTube; and YouTube’s algorithm often guides users from more innocuous or alternative content to more fringe channels and videos. Twitter has been criticized for being slow to stop white nationalists from organizing, fundraising, recruiting and spreading propaganda on Twitter.

Social media has often played host to racist, sexist, and other types of vile speech. While social media companies have community standards and other policies that restrict “hate speech” in some circumstances, there is demand from some public officials that they do more. But under a First Amendment analysis, regulating hate speech on social media would fare no better than the regulation of misinformation.

The First Amendment doesn’t allow for the regulation of “hate speech” as its own distinct category. Hate speech is, in fact, as protected as any other type of speech. There are some limited exceptions, as the First Amendment does not protect incitement, true threats of violence, or “fighting words.” Some of these flatly do not apply in the online context. “Fighting words,” for instance, applies only in face-to-face situations to “those personally abusive epithets which, when addressed to the ordinary citizen, are, as a matter of common knowledge, inherently likely to provoke violent reaction.”

One relevant precedent is the court’s 1992 decision in R.A.V. v. St. Paul, which considered a local ordinance in St. Paul, Minnesota, prohibiting public expressions that served to cause “outrage, alarm, or anger with respect to racial, gender or religious intolerance.” A juvenile was charged with violating the ordinance when he created a makeshift cross and lit it on fire in front of a black family’s home. The court unanimously struck down the ordinance as a violation of the First Amendment, finding it an impermissible content-based restraint that was not limited to incitement or true threats.

By contrast, in 2003’s Virginia v. Black, the Supreme Court upheld a Virginia law outlawing cross burnings done with the intent to intimidate. The court’s opinion distinguished R.A.V. on grounds that the Virginia statute didn’t single out speech regarding disfavored topics. Instead, it was aimed at speech that had the intent to intimidate regardless of the victim’s race, gender, religion, or other characteristic. But the court was careful to limit government regulation of hate speech to instances that involve true threats or incitement.

When it comes to incitement, the legal standard was set by the court’s landmark 1969 decision in Brandenburg v. Ohio, which held that:

the constitutional guarantees of free speech and free press do not permit a State to forbid or proscribe advocacy of the use of force or of law violation except where such advocacy is directed to inciting or producing imminent lawless action and is likely to incite or produce such action. [EMPHASIS ADDED]

In other words, while “hate speech” is protected by the First Amendment, specific types of speech that convey true threats or fit under the related doctrine of incitement are not. The government may regulate those types of speech, and it does: social media users can be, and often are, charged with crimes for threats made online. But the government can’t issue a per se ban on hate speech or “extremist content.”

Just as with misinformation, the government also can’t condition Section 230 immunity on platforms removing hate speech. Insofar as speech is protected under the First Amendment, the government can’t specifically condition a government benefit on its removal. Even the SAFE TECH Act’s model for holding platforms accountable for amplifying hate speech or extremist content would have to be limited to speech that amounts to true threats or incitement. This is a far narrower category of hateful speech than the examples that concern legislators. 

Social media companies do remain free under the law to moderate hateful content as they see fit under their terms of service. Section 230 immunity is not dependent on whether companies do or don’t moderate such content, or on how they define hate speech. But government efforts to step in and define hate speech would likely run into First Amendment problems unless they stay focused on unprotected threats and incitement.

What Can the Government Do?

One may fairly ask what governments can do to combat misinformation and hate speech online. One answer may be a law requiring platforms to take down speech, by court order, after it has been declared illegal, as proposed by the PACT Act, sponsored in the last session by Sens. Brian Schatz (D-Hawaii) and John Thune (R-S.D.). Such speech may, in some circumstances, include misinformation or hate speech.

But as outlined above, the misinformation that the government can regulate is limited to situations like fraud or defamation, while the hate speech it can regulate is limited to true threats and incitement. A narrowly tailored law that looked to address those specific categories may or may not be a good idea, but it would likely survive First Amendment scrutiny, and may even prove a productive line of discussion with the tech CEOs.

For those who follow these things (and for those who don’t but should!), Eric Goldman just posted an excellent short essay on Section 230 immunity and account terminations.

Here’s the abstract:

An online provider’s termination of a user’s online account can be a major, and potentially even life-changing, event for the user. Account termination exiles the user from a virtual place the user wanted to be; termination disrupts any social network relationship ties in that venue, and prevents the user from sending or receiving messages there; and the user loses any virtual assets in the account, which could be anything from archived emails to accumulated game assets. The effects of account termination are especially acute in virtual worlds, where dedicated users may be spending a majority of their waking hours or have aggregated substantial in-game wealth. However, the problem arises in all online environments (including email, social networking and web hosting) where account termination disrupts investments made by users.

Because of the potentially significant consequences from online user account termination, user-rights advocates, especially in the virtual world context, have sought legal restrictions on online providers’ discretion to terminate users. However, these efforts are largely misdirected because of 47 U.S.C. §230(c)(2) (“Section 230(c)(2)”), a federal statutory immunity. This essay, written in conjunction with an April 2011 symposium at UC Irvine entitled “Governing the Magic Circle: Regulation of Virtual Worlds,” explains Section 230(c)(2)’s role in immunizing online providers’ decisions to terminate user accounts. It also explains why this immunity is sound policy.

But the meat of the essay (at least the normative part of the essay) is this:

Online user communities inevitably require at least some provider intervention. At times, users need “protection” from other users. The provider can give users self-help tools to reduce their reliance on the online provider’s intervention, but technological tools cannot ameliorate all community-damaging conduct by determined users. Eventually, the online provider needs to curb a rogue user’s behavior to protect the rest of the community. Alternatively, a provider may need to respond to users who are jeopardizing the site’s security or technical infrastructure. . . .  Section 230(c)(2) provides substantial legal certainty to online providers who police their premises and ensure the community’s stability when intervention is necessary.

* * *

Thus, marketplace incentives work unexpectedly well to discipline online providers from capriciously wielding their termination power. This is true even if many users face substantial nonrecoupable or switching costs, both financially and in terms of their social networks. Some users, both existing and prospective, can be swayed by the online provider’s capriciousness—and by the provider’s willingness to oust problem users who are disrupting the community. The online provider’s desire to keep these swayable users often can provide enough financial incentives for the online provider to make good choices.

Thus, broadly conceived, § 230(c)(2) removes legal regulation of an online provider’s account termination, making the marketplace the main governance mechanism over an online provider’s choices. Fortunately, the marketplace is effective enough to discipline those choices.

Eric doesn’t talk explicitly here about property rights and transaction costs, but that’s what he’s talking about. Well worth reading as a short, clear, informative introduction to this extremely important topic.

Late last year, with support from the International Center for Law and Economics, I published a paper that empirically analyzed the Philadelphia civil court system. That study focused on the Philadelphia Complex Litigation Center (PCLC), which handles large mass-tort programs including asbestos cases, hormone replacement therapy cases, various prescription drug-related injuries, and others. The PCLC has recently come under criticism for its use of a number of controversial procedures, including the consolidation of asbestos cases and the use of reverse-bifurcation methods, where a plaintiff’s damages are calculated prior to the establishment of liability. The paper considered publicly available data from the Administrative Office of Pennsylvania Courts to analyze trends in docketed and pending civil cases in Philadelphia compared to other Pennsylvania counties, cases in federal court, and a national sample of state courts.

The study highlighted some unusual trends. Philadelphia’s case dockets are disproportionately large relative to both its population and other state and federal courts. Philadelphia plaintiffs are also relatively more likely to prefer jury trials and less likely to settle than other Pennsylvania plaintiffs. The data appear to support the conclusion that Philadelphia courts demonstrate a meaningful preference for plaintiffs, coaxing “business” from other courts and providing plaintiffs with a unique combination of advantages; indeed, the PCLC’s own stated goals include a desire to “[take] business away from other courts.” While these strategies have no doubt successfully increased litigation in Philadelphia and benefited local Philadelphia attorneys, they also impose substantial costs on Philadelphia businesses and consumers.

I’ve now conducted a preliminary supplemental analysis (available here) designed to test the proposition that the majority of plaintiffs in the PCLC are out-of-state plaintiffs without any apparent or substantive connection to either Philadelphia or the state of Pennsylvania. I considered a sample of about 1,400 of the mass-tort cases in the PCLC to determine whether the plaintiff filing the case had a home address in, or had sustained the complained-of injury in, either Philadelphia or Pennsylvania. Although the findings are preliminary, the results indicate that a substantial fraction of plaintiffs with cases pending at the PCLC have no discernible or relevant connection to Philadelphia or Pennsylvania. This supplement to the original study provides strong evidence that the PCLC has succeeded in attracting a large number of out-of-state cases that comprise a substantial portion of the civil cases in Philadelphia.

The main conclusions of this supplemental analysis are as follows:

  • Of the 1,357 cases in the sample, 913 (67.2%) were brought by plaintiffs who live out-of-state without any apparent connection to Pennsylvania or Philadelphia.
  • Only 180 cases (13.3%) involve plaintiffs who live in, or allege injury in, Philadelphia.
  • The most substantial case types where the plaintiffs were overwhelmingly out-of-state are hormone therapy, denture adhesive cream, and Paxil birth defect cases.
  • Although most or all of the companies involved in these cases do business in Philadelphia, and a few have some sort of administrative offices there, the vast majority of defendants do not have their principal place of business in Philadelphia or even in Pennsylvania. It is unlikely that venue was transferred to the PCLC in most, if any, of the cases.

A chart summarizing the results is available here at Table 1.
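
For readers who want to check the arithmetic, here is a minimal sketch of how the headline shares fall out of the sample counts reported above. The two counts come from the bullet list; the residual category is my inference by subtraction, not a figure reported in the study.

```python
# Minimal sketch: reproducing the headline percentages from the sample
# counts reported above. The residual category is inferred by subtraction,
# not a figure taken from the study itself.

SAMPLE_SIZE = 1_357
counts = {
    "out-of-state, no apparent PA/Philadelphia connection": 913,
    "live in or allege injury in Philadelphia": 180,
}
counts["remaining cases (inferred residual)"] = SAMPLE_SIZE - sum(counts.values())

for label, n in counts.items():
    # Note: 913/1,357 rounds to 67.3%; the reported 67.2% appears to be
    # truncated rather than rounded.
    print(f"{label}: {n} cases ({n / SAMPLE_SIZE:.1%})")
```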


Medical Devices

Paul H. Rubin —  18 April 2011

The GAO has recently issued a report on medical devices. The thrust of the report is that “high-risk” medical devices do not receive enough scrutiny from the FDA and that recalls are not handled well. This report and other evidence indicate that the FDA is likely to require more testing of devices. As of now, most medical devices are approved on a fast track that requires significantly less testing than that required for new drugs. (As I have discussed in a forthcoming Cato Journal article, medical devices are also subject to more immunity from state product liability lawsuits.)

The GAO report is remarkable.  The GAO defines its mission as

“Our Mission is to support the Congress in meeting its constitutional responsibilities and to help improve the performance and ensure the accountability of the federal government for the benefit of the American people. We provide Congress with timely information that is objective, fact-based, nonpartisan, nonideological, fair, and balanced.”

But the report on medical devices is entirely unbalanced. It deals only with procedures for approval and the recall process (both of which are judged inadequate). There is no discussion of either costs or benefits. That is, no evidence is presented that there is any actual harm from the “flawed” approval and recall processes. Even more importantly, there is no evidence presented about the benefits to consumers from easy and rapid approval of medical devices.

As is well known, virtually all economists who have studied the FDA drug approval process have concluded that it causes serious harm by delaying drugs.  The import of the GAO Report is that we should duplicate that harm with medical devices.  This is an odd and perverse way of providing a “benefit” to the American people.

The state of New York is considering a cap on noneconomic damages (“pain and suffering”) for malpractice in order to save money.  The New York Times story asks

“… who benefits from caps — doctors or insurers — and whether the measures inflict unintended negative consequences upon victims of medical errors, including plaintiffs’ inability to find lawyers to take their cases.”

But in fact the evidence is that consumers actually benefit from such a cap. In a paper published in the Journal of Law and Economics in 2007, Joanna Shepherd and I examined the effect of various tort reforms on accidental death rates in the states for the period 1981-2000. We found that, overall, states that had passed tort reforms had lower accidental death rates, probably because of the increased availability of physicians in emergency rooms and other settings. For the particular case of damage caps, we found that these caps led to a total of 5,000 fewer deaths. So we can have our cake and eat it too: caps will save money and also make New Yorkers safer.
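
For readers curious what such an analysis looks like in practice, here is a minimal sketch of a two-way fixed-effects panel regression of the general sort used in this literature. The file name and column names are hypothetical placeholders, and this is not Shepherd's and my actual specification; it only shows the shape of estimating a damages cap's effect on state accidental death rates.

```python
# Minimal sketch of a two-way fixed-effects panel regression in the spirit
# of the tort-reform literature. The data file and column names below are
# hypothetical placeholders, not the actual 2007 data or specification.
import pandas as pd
import statsmodels.formula.api as smf

# Expected columns: state, year, accidental_death_rate (deaths per 100k),
# damage_cap (1 if a noneconomic-damages cap was in force that state-year).
panel = pd.read_csv("state_panel_1981_2000.csv")

# State fixed effects absorb time-invariant differences across states;
# year fixed effects absorb nationwide trends. The coefficient on
# damage_cap is the quantity of interest: a negative estimate would mean
# caps are associated with lower accidental death rates.
model = smf.ols(
    "accidental_death_rate ~ damage_cap + C(state) + C(year)",
    data=panel,
).fit(cov_type="cluster", cov_kwds={"groups": panel["state"]})

print(model.params["damage_cap"], model.bse["damage_cap"])
```

An aggregate figure like the 5,000 fewer deaths mentioned above would then come from combining a rate estimate of this kind with state populations over the sample period.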

Michael Abramowicz over at Concurring Opinions has an interesting post about the ongoing litigation between economists John Lott and Steven Levitt. Lott’s suit alleges that Levitt defamed him in his recent book Freakonomics by suggesting that Lott’s research on the relation between guns and crime could not be “replicated” by other scholars and in a subsequent email to an economist suggesting that Lott had paid $15,000 to the Journal of Law & Economics to publish in a special issue a series of articles supporting Lott’s views on guns. This week, a federal district court in Chicago granted Levitt’s motion to dismiss the claim concerning the statement in Freakonomics but denied his motion to dismiss the claim concerning the email.

Abramowicz suggests in his post that Lott’s “potential damages are almost certainly low” and that this case “though not technically frivolous” is “of a type that our legal system does not handle well” and “a vexatious use of the legal system, because the cost of bringing the claim seems much larger than any plausible reputational damage to Lott.”

Leaving the merits of the dispute aside, my question is this: if the cost of bringing the claim is really much larger than any damages that Lott may recover, then why is Lott pursuing the case? Isn’t Lott’s pursuit of the case strong evidence that he believes he could recover more than his costs?

Moreover, if indeed the claim is worth less than the cost of litigating it, why is Levitt vigorously defending the suit? Why have he and HarperCollins (his publisher) spent so much money disputing liability (e.g., by filing the motion to dismiss) rather than simply relaxing, knowing that damages won’t be very high? Isn’t it just as “vexatious” to dispute a vexatious claim as it is to assert one?

The answer, I think, is that defamation suits implicate subjective nonpecuniary interests that are difficult for courts to value. The formal legal remedies available in such cases are thus usually undercompensatory and pale in comparison to the reputational effects of winning or losing. The financial stakes may be low, but the case is about reputation, not money. Hence the current legal quagmire: even a generous financial settlement is not likely to satisfy Mr. Lott, and by the same token an admission of having made a false statement is not something that Mr. Levitt would likely consider offering.

Here’s my suggestion for the most efficient way to end the dispute: in exchange for Lott’s agreement to dismiss the suit with prejudice, Levitt could agree to issue a statement not admitting to having defamed Lott but rather simply saying that he respects John Lott’s intellect and his work as an economist even though he remains skeptical about Lott’s work on guns and crime.

Hopefully a settlement along these lines is in the works, especially now that HarperCollins is out of the case and Levitt will have to start paying his lawyers out of his own pocket. But then again, settlement of the case would deprive bloggers of an interesting topic about which to comment!

Morrison at ELS Blog

Josh Wright —  18 December 2006

Ed Morrison (Columbia) has a great series of guest posts at the always-worth-reading ELS Blog on a few research questions in bankruptcy and torts, as well as a methodological entry. I am a little bit late with the link (his guest stint ended December 8th), but I really enjoyed the posts. Here are the links:

Why are Small Business Bankruptcies so Rare?;

Propensity Score Matching; and More on Propensity Score Matching;

Do Consumers Want Insurance Coverage for Pain and Suffering? (proposing a diff-in-diff estimation strategy for answering this question based on California’s Prop 213);

and How Inefficient is Tort Law?

Domain Name Hijacking

Keith Sharfman —  6 November 2006

Dan Solove over at Concurring Opinions reports on an insidious practice that unfortunately has become increasingly common: domain name hijacking.

Here’s how it works. The original owner of a popular website fails to renew its domain name prior to the expiration of the owner’s entitlement. An opportunistic “hijacker” then purchases the name and offers to sell it back to the original owner for a tidy sum. The original owner is then left with an unhappy choice: pay the hijacker off, or set up shop under a new domain name with the loss of traffic that such a switch inevitably entails.

The latest victim of such a hijacking scheme is Crescat Sententia, a popular blog that used to be located at http://www.crescatsententia.org/ but now has been forced to move to http://www.crescatsententia.net/.

Dan suggests that domain name hijacking of this sort may well be characterized as copyright infringement. But because the case for copyright protection isn’t clear cut, he wonders if there are other legal protections too.

Here’s my suggestion for another theory of liability: intentional interference with prospective economic advantage. The elements of that tort are (1) an economic relationship between the plaintiff and some third person containing the probability of future economic benefit to the plaintiff; (2) knowledge by the defendant of the existence of the relationship; (3) intentional acts on the part of the defendant designed to disrupt the relationship; (4) actual disruption of the relationship; and (5) damages to the plaintiff proximately caused by the acts of the defendant. All seem to be present here. The original website has an economic relationship with its existing readers or patrons; the hijacker knows about this relationship; the hijacker intentionally acts to disrupt the relationship by acquiring the domain name; the loss of the domain name actually disrupts the relationship by shutting down the old site without indicating where a new site, if any, is located; and the original owner is thereby damaged.

As a matter of policy and economics, there isn’t any positive social value associated with domain name hijacking. Indeed, once transaction and switching costs are considered, the conduct actually entails social losses. One would therefore hope (or perhaps, a la Posner, even dare to predict) that the common law would forbid and deter such conduct. Applying the tort of intentional interference with prospective economic advantage would do just that. And so even if the copyright case against domain name hijacking isn’t airtight, the common law should come to the rescue.

Hijackers beware!