Legislation to secure children’s safety online is all the rage right now, not only on Capitol Hill, but in state legislatures across the country. One of the favored approaches is to impose on platforms a duty of care to protect teen users.
For example, Sens. Richard Blumenthal (D-Conn.) and Marsha Blackburn (R-Tenn.) have reintroduced the Kids Online Safety Act (KOSA), which would require that social-media platforms “prevent or mitigate” a variety of potential harms, including mental-health harms; addiction; online bullying and harassment; sexual exploitation and abuse; promotion of narcotics, tobacco, gambling, or alcohol; and predatory, unfair, or deceptive business practices.
But while bills of this sort would define legal responsibilities that online platforms have to their minor users, this statutory duty of care is more likely to result in the exclusion of teens from online spaces than to promote better care of teens who use them.
Drawing on the previous research that I and my International Center for Law & Economics (ICLE) colleagues have done on the economics of intermediary liability and First Amendment jurisprudence, I will in this post consider the potential costs and benefits of imposing a statutory duty of care similar to that proposed by KOSA.
The Law & Economics of Online Intermediary Liability and the First Amendment (Kids Edition)
Previously (in a law review article, an amicus brief, and a blog post), we at ICLE have argued that there are times when the law rightfully places responsibility on intermediaries to monitor and control what happens on their platforms. From an economic point of view, it makes sense to impose liability on intermediaries when they are the least-cost avoider: i.e., the party that is best positioned to limit harm, even if they aren’t the party committing the harm.
On the other hand, as we have also noted, there are costs to imposing intermediary liability. This is especially true for online platforms with user-generated content. Specifically, there is a risk of “collateral censorship,” wherein online platforms remove more speech than necessary in order to avoid potential liability. Imposing a duty of care to “protect” minors, in particular, could result in online platforms limiting teens’ access.
If the social costs that arise from the imposition of intermediary liability are greater than the benefits accrued, then such an arrangement would be welfare-destroying, on net. While we want to deter harmful (illegal) content, we don’t want to do so if we end up deterring access to too much beneficial (legal) content as a result.
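To put that tradeoff in rough terms (the notation below is purely illustrative and my own; it is not drawn from KOSA or the literature cited above), imposing intermediary liability enhances welfare only when the harm it deters outweighs the compliance costs and the value of lawful speech and access lost to over-removal:

$$
\underbrace{B_{\text{harm deterred}}}_{\text{benefit of added platform care}} \;>\; \underbrace{C_{\text{compliance}} + C_{\text{collateral censorship}}}_{\text{costs, including lawful speech and access foregone}}
$$

If the right-hand side dominates, the liability rule is welfare-destroying on net, however well-intentioned.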
The First Amendment often limits otherwise generally applicable laws, on grounds that they impose burdens on speech. From an economic point of view, this could be seen as an implicit subsidy. That subsidy may be justifiable, because information is a public good that would otherwise be underproduced. As Daniel A. Farber put it in 1991:
[B]ecause information is a public good, it is likely to be undervalued by both the market and the political system. Individuals have an incentive to ‘free ride’ because they can enjoy the benefits of public goods without helping to produce those goods. Consequently, neither market demand nor political incentives fully capture the social value of public goods such as information. Our polity responds to this undervaluation of information by providing special constitutional protection for information-related activities. This simple insight explains a surprising amount of First Amendment doctrine.
In particular, the First Amendment provides important limits on how far the law can go in imposing intermediary liability that would chill speech, including when dealing with potential harms to teenage users. These limitations seek the same balance that the economics of intermediary liability would suggest: how to hold online platforms liable for legally cognizable harms without restricting access to too much beneficial content. Below is a summary of some of those relevant limitations.
Speech vs. Conduct
The First Amendment differentiates between speech and conduct. While the line between the two can be messy (and “expressive conduct” has its own standard under the O’Brien test), governmental regulation of some speech acts is permissible. Thus, harassment, terroristic threats, fighting words, and even incitement to violence can be punished by law. On the other hand, the First Amendment does not generally allow the government to regulate “hate speech” or “bullying.” As the 3rd U.S. Circuit Court of Appeals explained it in the context of a school’s anti-harassment policy:
There is of course no question that non-expressive, physically harassing conduct is entirely outside the ambit of the free speech clause. But there is also no question that the free speech clause protects a wide variety of speech that listeners may consider deeply offensive, including statements that impugn another’s race or national origin or that denigrate religious beliefs… When laws against harassment attempt to regulate oral or written expression on such topics, however detestable the views expressed may be, we cannot turn a blind eye to the First Amendment implications.
In other words, while a duty of care could reach harassing conduct, it is unclear how it could reach pure expression on online platforms without implicating the First Amendment.
Impermissibly Vague
The First Amendment also disallows rules sufficiently vague that they would preclude a person of ordinary intelligence from having fair notice of what is prohibited. For instance, in an order handed down earlier this year in Høeg v. Newsom, the federal district court granted the plaintiffs’ motion to enjoin a California law that would charge medical doctors with sanctionable “unprofessional conduct” if, as part of treatment or advice, they shared with patients “false information that is contradicted by contemporary scientific consensus contrary to the standard of care.”
The court found that “contemporary scientific consensus” was so “ill-defined [that] physician plaintiffs are unable to determine if their intended conduct contradicts [it].” The court asked a series of questions relevant to trying to define the phrase:
[W]ho determines whether a consensus exists to begin with? If a consensus does exist, among whom must the consensus exist (for example practicing physicians, or professional organizations, or medical researchers, or public health officials, or perhaps a combination)? In which geographic area must the consensus exist (California, or the United States, or the world)? What level of agreement constitutes a consensus (perhaps a plurality, or a majority, or a supermajority)? How recently in time must the consensus have been established to be considered “contemporary”? And what source or sources should physicians consult to determine what the consensus is at any given time (perhaps peer-reviewed scientific articles, or clinical guidelines from professional organizations, or public health recommendations)?
Thus, any duty of care to limit access to potentially harmful online content must not be defined in a way that is too vague for a person of ordinary intelligence to know what is prohibited.
Liability for Third-Party Speech
The First Amendment limits intermediary liability when dealing with third-party speech. For the purposes of defamation law, the traditional continuum of liability was from publishers to distributors (or secondary publishers) to conduits. Publishers—such as newspapers, book publishers, and television producers—exercised significant editorial control over content. As a result, they could be held liable for defamatory material, because it was seen as their own speech. Conduits—like the telephone company—were on the other end of the spectrum, and could not be held liable for the speech of those who used their services.
As the Court of Appeals of the State of New York put it in a 1974 opinion:
In order to be deemed to have published a libel a defendant must have had a direct hand in disseminating the material whether authored by another, or not. We would limit [liability] to media of communications involving the editorial or at least participatory function (newspapers, magazines, radio, television and telegraph)… The telephone company is not part of the “media” which puts forth information after processing it in one way or another. The telephone company is a public utility which is bound to make its equipment available to the public for any legal use to which it can be put…
Distributors—which included booksellers and libraries—were in the middle of this continuum. They had to have some notice that content they distributed was defamatory before they could be held liable.
Courts have long explored the tradeoffs between liability and carriage of third-party speech in this context. For instance, in Smith v. California, the U.S. Supreme Court found that an ordinance establishing strict liability for selling obscene materials violated the First Amendment because:
By dispensing with any requirement of knowledge of the contents of the book on the part of the seller, the ordinance tends to impose a severe limitation on the public’s access to constitutionally protected matter. For if the bookseller is criminally liable without knowledge of the contents, and the ordinance fulfills its purpose, he will tend to restrict the books he sells to those he has inspected; and thus the State will have imposed a restriction upon the distribution of constitutionally protected as well as obscene literature. It has been well observed of a statute construed as dispensing with any requirement of scienter that: “Every bookseller would be placed under an obligation to make himself aware of the contents of every book in his shop. It would be altogether unreasonable to demand so near an approach to omniscience.” (internal citations omitted)
It’s also worth noting that traditional publisher liability was limited in the case of republication, such as when newspapers republished stories from wire services like the Associated Press. Courts observed the economic costs that would attend imposing a strict-liability standard in such cases:
No newspaper could afford to warrant the absolute authenticity of every item of its news, nor assume in advance the burden of specially verifying every item of news reported to it by established news gathering agencies, and continue to discharge with efficiency and promptness the demands of modern necessity for prompt publication, if publication is to be had at all.
Over time, the rule was extended, either by common law or statute, from newspapers to radio and television broadcasts, with the treatment of republication of third-party speech eventually resembling conduit liability even more than distributor liability. See Brent Skorup and Jennifer Huddleston’s “The Erosion of Publisher Liability in American Law, Section 230, and the Future of Online Curation” for a more thoroughgoing treatment of the topic.
Implicit economic reasoning is what pushed the law toward conduit liability for entities that carried third-party speech. For example, in 1959’s Farmers Educational & Cooperative Union v. WDAY, Inc., the Supreme Court held that a broadcaster could not be found liable for defamatory statements made by a political candidate on the air, arguing that:
The decision a broadcasting station would have to make in censoring libelous discussion by a candidate is far from easy. Whether a statement is defamatory is rarely clear. Whether such a statement is actionably libelous is an even more complex question, involving as it does, consideration of various legal defenses such as “truth” and the privilege of fair comment. Such issues have always troubled courts… if a station were held responsible for the broadcast of libelous material, all remarks even faintly objectionable would be excluded out of an excess of caution. Moreover, if any censorship were permissible, a station so inclined could intentionally inhibit a candidate’s legitimate presentation under the guise of lawful censorship of libelous matter. Because of the time limitation inherent in a political campaign, erroneous decisions by a station could not be corrected by the courts promptly enough to permit the candidate to bring improperly excluded matter before the public. It follows from all this that allowing censorship, even of the attenuated type advocated here, would almost inevitably force a candidate to avoid controversial issues during political debates over radio and television, and hence restrict the coverage of consideration relevant to intelligent political decision.
It is clear from the foregoing that imposing a duty of care on online platforms to limit speech in ways that would make them strictly liable would be inconsistent with distributor liability. But even a duty of care that more closely resembled a negligence-based standard could implicate speech interests if online platforms are seen as akin to newspapers, or to radio and television broadcasters, when they act as republishers of third-party speech. Such cases would appear to require conduit liability.
The First Amendment Applies to Children
The First Amendment has been found to limit what governments can do in the name of protecting children from encountering potentially harmful speech. For example, California in 2005 passed a law prohibiting the sale or rental of “violent video games” to minors. In Brown v. Entertainment Merchants Ass’n, the Supreme Court held the law unconstitutional, finding that:
No doubt [the government] possesses legitimate power to protect children from harm, but that does not include a free-floating power to restrict the ideas to which children may be exposed. “Speech that is neither obscene as to youths nor subject to some other legitimate proscription cannot be suppressed solely to protect the young from ideas or images that a legislative body thinks unsuitable for them.” (internal citations omitted)
The Court did not find it persuasive that the video games were violent (noting that children’s books often depict violence) or that they were interactive (as some children’s books offer choose-your-own-adventure options). In other words, there was nothing special about violent video games that would subject them to a lower level of constitutional protection, even for minors that wished to play them.
The Court also did not find persuasive California’s appeal that the law aided parents in making decisions about what their children could access, stating:
California claims that the Act is justified in aid of parental authority: By requiring that the purchase of violent video games can be made only by adults, the Act ensures that parents can decide what games are appropriate. At the outset, we note our doubts that punishing third parties for conveying protected speech to children just in case their parents disapprove of that speech is a proper governmental means of aiding parental authority.
Justice Samuel Alito’s concurrence in Brown would have found the California law unconstitutionally vague, arguing that constitutionally protected speech would be chilled as a result of the law’s enforcement. The fact that its intent was to protect minors didn’t change that analysis.
Limiting the availability of speech to minors in the online world is subject to the same analysis as in the offline world. In Reno v. ACLU, the Supreme Court made clear that the First Amendment applies with equal effect online, stating that “our cases provide no basis for qualifying the level of First Amendment scrutiny that should be applied to this medium.” In Packingham v. North Carolina, the Court went so far as to call social-media platforms “the modern public square.”
Restricting minors’ access to online platforms through age-verification requirements already has been found to violate the First Amendment. In Ashcroft v. ACLU (II), the Supreme Court reviewed provisions of the Child Online Protection Act (COPA) that would restrict posting content “harmful to minors” for “commercial purposes.” COPA allowed an affirmative defense if the online platform restricted access by minors through various age-verification devices. The Court found that “[b]locking and filtering software is an alternative that is less restrictive than COPA, and, in addition, likely more effective as a means of restricting children’s access to materials harmful to them” and upheld a preliminary injunction against the law, pending further review of its constitutionality.
On remand, the 3rd Circuit found that “[t]he Supreme Court has disapproved of content-based restrictions that require recipients to identify themselves affirmatively before being granted access to disfavored speech, because such restrictions can have an impermissible chilling effect on those would-be recipients.” The circuit court would eventually uphold the district court’s finding of unconstitutionality and permanently enjoin the statute’s provisions, noting that the age-verification requirements “would deter users from visiting implicated Web sites” and therefore “would chill protected speech.”
A duty of care to protect minors could be unconstitutional if it ends up limiting access to speech that is not illegal for them to access. Age-verification requirements that would likely accompany such a duty could also result in a statute being found unconstitutional.
In sum:
- A duty of care to prevent or mitigate harassment and bullying has First Amendment implications if it regulates pure expression, such as speech on online platforms.
- A duty of care to limit access to potentially harmful online speech can’t be defined so vaguely that a person of ordinary intelligence can’t know what is prohibited.
- A duty of care that establishes a strict-liability standard on online speech platforms would likely be unconstitutional for its chilling effects on legal speech. A duty of care that establishes a negligence standard could similarly lead to “collateral censorship” of third-party speech.
- A duty of care to protect minors could be unconstitutional if it limits access to legal speech. De facto age-verification requirements could also be found unconstitutional.
The Problems with KOSA: The First Amendment and Limiting Kids’ Access to Online Speech
KOSA would establish a duty of care for covered online platforms to “act in the best interests of a user that the platform knows or reasonably should know is a minor by taking reasonable measures in its design and operation of products and services to prevent and mitigate” a variety of potential harms, including:
- Consistent with evidence-informed medical information, the following mental health disorders: anxiety, depression, eating disorders, substance use disorders, and suicidal behaviors.
- Patterns of use that indicate or encourage addiction-like behaviors.
- Physical violence, online bullying, and harassment of the minor.
- Sexual exploitation and abuse.
- Promotion and marketing of narcotic drugs (as defined in section 102 of the Controlled Substances Act (21 U.S.C. 802)), tobacco products, gambling, or alcohol.
- Predatory, unfair, or deceptive marketing practices, or other financial harms.
There are also a variety of tools and notices that must be made available to users under age 17, as well as to their parents.
Reno and Age Verification
KOSA could be found unconstitutional under the Reno and COPA-line of cases for creating a de facto age-verification requirement. The bill’s drafters appear to be aware of the legal problems that an age-verification requirement would entail. KOSA therefore states that:
Nothing in this Act shall be construed to require—(1) the affirmative collection of any personal data with respect to the age of users that a covered platform is not already collecting in the normal course of business; or (2) a covered platform to implement an age gating or age verification functionality.
But this doesn’t change the fact that, in order to effectuate KOSA’s requirements, online platforms would have to know their users’ ages. KOSA’s duty of care incorporates a constructive-knowledge requirement (i.e., “reasonably should know is a minor”). A duty of care combined with the mandated notices and tools that must be made available to minors makes it “reasonable” that platforms would have to verify the age of each user.
If a court were to agree that KOSA doesn’t require age gating or age verification, this would likely render the act ineffective. As it stands, most of the online platforms that would be covered by KOSA ask users their age (or birthdate) only upon creation of a profile, a check that is easily evaded by lying. Those under age 17 (but at least age 13) at the time of the act’s passage who have already created profiles would be implicated, but the act would not appear to require platforms to vet whether users who claimed to be at least 17 when creating new profiles were actually telling the truth.
Vagueness and Protected Speech
Even if KOSA were not found unconstitutional for creating a de facto age-verification scheme, it still likely would lead to kids under 17 being restricted from accessing protected speech. Several of the harms the duty of care covers implicate legal speech, and they are defined so vaguely that the duty likely would chill access to that speech.
For example, pictures of photoshopped models are protected speech. If teenage girls want to see such content on their feeds, it isn’t clear that the law can constitutionally stop them, even if it’s done by creating a duty of care to prevent and mitigate harms associated with “anxiety, depression, or eating disorders.”
Moreover, access to content that kids really like to see or hear is still speech, even if they like it so much that an outside observer may think they are addicted to it. Much as the Court said in Brown, the government does not have “a free-floating power to restrict [speech] to which children may be exposed.”
KOSA’s Sections 3(a)(1) and 3(a)(2) would also run into problems, as they are so vague that a person of ordinary intelligence would not know what they prohibit. As a result, there would likely be chilling effects on legal speech.
Much like in Høeg, the phrase “consistent with evidence-informed medical information” leads to various questions regarding how an online platform could comply with the law. For instance, it isn’t clear what content or design issue would be implicated by this subsection. Would a platform need to hire mental-health professionals to consult with them on every product-design and content-moderation decision?
Even worse is the requirement to prevent and mitigate “patterns of use that indicate or encourage addiction-like behaviors,” which isn’t defined by reference to “evidence-informed medical information” or to anything else.
Even Bullying May Be Protected Speech
Even KOSA’s duty to prevent and mitigate “physical violence, online bullying, and harassment of the minor” in Section 3(a)(3) could implicate the First Amendment. While physical violence would clearly be outside of the First Amendment’s protections (although it’s unclear how an online platform could prevent or mitigate such violence), online bullying and harassing speech are, nonetheless, speech. As a result, this duty of care could receive constitutional scrutiny regarding whether it effectively limits lawful (though awful) speech directed at minors.
Locking Children Out of Online Spaces
KOSA’s duty of care appears to be based on negligence, in that it requires platforms to take “reasonable measures.” This probably makes it more likely to survive First Amendment scrutiny than a strict-liability regime would.
It could, however, still present real (and costly) product-design and moderation challenges for online platforms. As a result, platforms would have significant incentives to exclude altogether those users they know or reasonably believe to be under age 17.
While this is not a First Amendment problem per se, it nonetheless illustrates how laws intended to “protect” children’s safety online can instead lead to their being shut out of online speech platforms altogether.
Conclusion
Despite its being christened the “Kids Online Safety Act,” KOSA would result in real harm for kids if enacted into law. Its likely result would be considerable “collateral censorship,” as online platforms restrict teens’ access in order to avoid liability.
The bill’s duty of care would also either require likely unconstitutional age verification or be rendered ineffective, as teen users lie about their age in order to access desired content.
Congress shall make no law abridging the freedom of speech, even if it is done in the name of children.