Truth on the Market

Does NetChoice v. Bonta Mean Curtains for KOSA?

To butcher a Winston Churchill quote, it’s not yet clear if this is the beginning of the end, or just the end of the beginning, for children’s online-safety bills. 

Such legislation has been all the rage in recent years, earning bipartisan support at both the federal and state levels. A version of the Kids Online Safety Act (KOSA)—with legislative text that was merged with the somewhat-related Children and Teens’ Online Privacy Protection Act (COPPA 2.0)—passed the U.S. Senate earlier this summer, although it now appears that it will face legislative roadblocks in the U.S. House.

But as I’ve previously argued, KOSA also faces possible constitutional problems. This has been confirmed yet again by the 9th U.S. Circuit Court of Appeals’ opinion earlier this month upholding in part the U.S. District Court for the Northern District of California’s preliminary injunction against California’s Age-Appropriate Design Code (AADC). 

NetChoice v. Bonta Did, in Fact, Make It Harder for the Government

The district court in NetChoice v. Bonta had previously granted a preliminary injunction against the AADC, while applying a lower commercial-speech standard (intermediate scrutiny) from the U.S. Supreme Court’s 1980 Central Hudson decision. The 9th Circuit’s opinion has only made it harder for the government to defend such laws.

In its analysis of the California law’s data-protection impact-assessment (DPIA) provisions, the appeals panel found that the requirement goes far beyond regulating commercial speech:

The DPIA report requirement—in requiring covered businesses to opine on and mitigate the risk that children are exposed to harmful content online—regulates far more than mere commercial speech… the DPIA requirement goes further, because it not only requires businesses to identify harmful or potentially harmful content but also requires businesses to take steps to protect children from such content. (pp. 32-33)

Thus, the 9th Circuit concluded that the trial court erred in applying only an intermediate-scrutiny standard to these provisions (even though the court rightly granted the preliminary injunction anyway):

Considering the above, the district court in its preliminary injunction analysis should have subjected the DPIA report requirement to strict scrutiny, as opposed to mere intermediate commercial scrutiny. Strict scrutiny is warranted because the DPIA report requirement… deputizes private actors into censoring speech based on its content, see United States v. Playboy Ent. Grp., Inc., 529 U.S. 803, 806, 813 (2000). While it is true that “a State possesses legitimate power to protect children from harm, [] that does not include a free-floating power to restrict the ideas to which children may be exposed.” Brown v. Ent. Merchants Ass’n, 564 U.S. 786, 794 (2011) (citations omitted). (p. 34)

The 9th Circuit didn’t stop there, as the opinion went on to argue that the DPIA provisions likely fail strict scrutiny, as well. The court reasoned that, even if one assumes that protecting children is a compelling state interest, the reporting and mitigation requirements are not the least-restrictive means to achieve that end. Instead:

The State could have easily employed less restrictive means to accomplish its protective goals, such as by (1) incentivizing companies to offer voluntary content filters or application blockers, (2) educating children and parents on the importance of using such tools, and (3) relying on existing criminal laws that prohibit related unlawful conduct. (p. 35)

In other words, as I’ve argued elsewhere, parents and minors are often the lowest-cost avoiders of online harms. Encouraging the use (and creation) of online tools to avoid such harms is less burdensome than creating a duty for online platforms to undertake mitigation in advance. And if particular online content already violates criminal law (presumably, laws that can be enforced consistent with the First Amendment), then the government should be doing more to go after those who cause such harms.

While California attempted to defend the law as merely a disclosure regime, the 9th Circuit was unconvinced. The court correctly saw that the law’s provisions were an attempt to encourage online platforms to censor lawful content that may be harmful in some other way:

[T]he State attempts to indirectly censor the material available to children online, by delegating the controversial question of what content may “harm to children” to the companies themselves… (pp. 36-37)

Moreover, the 9th Circuit’s opinion noted that the statute’s language for determining what constitutes a harmful design feature is extremely vague: “the relevant provisions are worded at such a high level of generality that they provide little help to businesses in identifying which of those practices or designs may actually harm children.”

In sum, the appeals court upheld the lower court’s issuance of the preliminary injunction as to the DPIA provisions, as well as the parts of the law that weren’t severable from them, while allowing the law’s remaining provisions to go into effect, insofar as they weren’t yet ripe for a facial challenge.

What This Means for KOSA’s Duty of Care

The version of KOSA’s duty of care that passed the U.S. Senate states:

(a) PREVENTION OF HARM TO MINORS. — A covered platform shall exercise reasonable care in the creation and implementation of any design feature to prevent and mitigate the following harms to minors: 

(1) Consistent with evidence-informed medical information, the following mental health disorders: anxiety, depression, eating disorders, substance use disorders, and suicidal behaviors. 

(2) Patterns of use that indicate or encourage addiction-like behaviors by minors. 

(3) Physical violence, online bullying, and harassment of the minor. 

(4) Sexual exploitation and abuse of minors. 

(5) Promotion and marketing of narcotic drugs (as defined in section 102 of the Controlled Substances Act (21 U.S.C. 802)), tobacco products, gambling, or alcohol. 

(6) Predatory, unfair, or deceptive marketing practices, or other financial harms. 

(b) LIMITATION. — Nothing in subsection (a) shall be construed to require a covered platform to prevent or preclude any minor from— 

(1) deliberately and independently searching for, or specifically requesting, content; or 

(2) accessing resources and information regarding the prevention or mitigation of the harms described in subsection (a).

In a previous blog post discussing the impact of the district court’s opinion on KOSA, I argued:

[T]he duty of care in KOSA is to act in the best interest of a minor by protecting him or her against certain product designs, but many of these are, in fact, protected speech or communication decisions by platforms—either all or some of the time. Even with unprotected speech or conduct, the duty could result in collateral censorship due to its vagueness or overbreadth. 

This is consistent with the Bonta opinion, it turns out, which found that, even under lax commercial-speech standards, such a duty of care will not survive if it leads to beneficial speech being overly burdened. To be fair, KOSA’s limitation in subsection (b) does a better job than the AADC did in heading off the problem of restricting minors’ ability to seek out beneficial content. It therefore may be possible to save parts of (a)(1) and (a)(2), if they are judged under a lax commercial-speech standard.

But there will still likely be vagueness concerns with (a)(1) in light of cases like Høeg v. Newsom. Moreover, it seems impossible that (a)(1) and (a)(3) are not content-based, as online bullying and harassment are clearly speech under the Supreme Court’s Counterman decision. As a result, a duty of care, as written, to mitigate or prevent such content will likely be unconstitutional due to the lack of a proper mens rea requirement. Finally, (a)(2) is arguably protected by the “right to editorial control” over how to present information laid out in Tornillo, which would then subject that provision to strict scrutiny.

This analysis largely holds up in light of the 9th Circuit’s ruling, with the added wrinkle that the appeals court’s opinion now makes clear that a duty to mitigate harms from content is subject to strict scrutiny. Moreover, the court’s reasoning suggests that it would be impossible for an online platform to determine whether a particular design feature increases the risk of mental-health disorders for minor users, or leads to online bullying or harassment, without considering the content itself.

This problem is amplified by the vagueness of KOSA’s directives. For instance, in taking reasonable care to prevent or mitigate mental-health harms, covered platforms would have an incentive to restrict minors’ access to all but the most benign material, out of concern that anything more could trigger anxiety. And in the case of online bullying or harassment, covered platforms would have an incentive to restrict the ability to make certain jokes or satirical posts, use coarse language, or even engage in good-natured teasing.

Context, after all, matters a great deal. It isn’t clear that algorithms can be trained to understand an inside joke. In other words, KOSA encourages covered platforms to take down protected speech in the name of protecting kids.

In sum, NetChoice v. Bonta appears to mean not only that the AADC’s DPIA provisions do not survive a First Amendment challenge, but also that KOSA’s duty of care would not, either.

Conclusion

As the U.S. House prepares to take up KOSA and other children’s online-safety bills, lawmakers must acknowledge the reality that KOSA’s duty of care is likely unconstitutional. We will soon discover whether this is the end of the beginning for KOSA, or just the end.