With yet another win for NetChoice in the U.S. District Court for the Northern District of California—this time a preliminary injunction granted against California’s Age-Appropriate Design Code (AADC)—it is worth asking what the decision means for the federally proposed Kids Online Safety Act (KOSA) and for similar laws under consideration in several states. I also think it worthwhile to contrast those bills with the duty-of-care proposal we at the International Center for Law & Economics (ICLE) have put forward for how best to protect children from harms associated with social media and other online platforms.
In this post, I will first consider the Bonta case, its analysis, and what it means for KOSA going forward. Next, I will explain how our duty-of-care proposal differs from KOSA and the AADC, and why it would, in select circumstances, open online platforms to intermediary liability where they are best placed to monitor and control harms to minors, by making it possible to bring product-liability suits. I will also outline a framework for considering how the First Amendment and the threat of collateral censorship interact with such suits.
Bonta, the First Amendment, and KOSA
It’s Only Gonna Get Harder for the Government
In Bonta, much as in NetChoice v. Griffin before the Arkansas federal district court, the court was skeptical that the law at issue was truly content-neutral and therefore subject to something less than strict scrutiny. It nonetheless proceeded as if a lower level of scrutiny applied. In Bonta, that meant applying the commercial-speech standard from Central Hudson and its progeny. As the court put it:
Accordingly, the Court will assume for the purposes of the present motion that only the lesser standard of intermediate scrutiny for commercial speech applies because, as shown below, the outcome of the analysis here is not affected by the Act’s evaluation under the lower standard of commercial speech scrutiny. (Slip op. at 18).
Then, applying Central Hudson, the court first found that protecting children easily qualifies as a substantial state interest; indeed, it would likely even qualify as a compelling one. The court thus moved on to the more important means-end inquiry, applied to each of the AADC’s main provisions.
The first provision considered was the data protection impact assessment (DPIA) requirement, which obliges covered platforms to report on how they will protect children from potentially harmful product designs. The court found it likely to violate the First Amendment even under the commercial-speech standard, because it does little to advance the interest in protecting children from harmful product designs:
Accepting the State’s statement of the harm it seeks to cure, the Court concludes that the State has not met its burden to demonstrate that the DPIA provisions in fact address the identified harm. For example, the Act does not require covered businesses to assess the potential harm of product designs—which Dr. Radesky asserts cause the harm at issue—but rather of “the risks of material detriment to children that arise from the data management practices of the business.” CAADCA § 31(a)(1)(B) (emphasis added). And more importantly, although the CAADCA requires businesses to “create a timed plan to mitigate or eliminate the risk before the online service, product, or feature is accessed by children,” id. § 31(a)(2), there is no actual requirement to adhere to such a plan. See generally id. § 31(a)(1)-(4); see also Tr. 26:9–10 (“As long as you write the plan, there is no way to be in violation.”), ECF 66. (Slip Op. at 21).
The court then moved to the provision requiring covered platforms to estimate users’ ages and either provide heightened data and privacy protections to minors or extend those same protections to all users. The court first analyzed the age-estimation requirement and found:
Based on the materials before the Court, the CAADCA’s age estimation provision appears not only unlikely to materially alleviate the harm of insufficient data and privacy protections for children, but actually likely to exacerbate the problem by inducing covered businesses to require consumers, including children, to divulge additional personal information. (Slip Op. at 22).
The court then considered the provision’s alternative, under which the heightened privacy and data protections would apply to all users, and found it would chill more lawful speech than necessary for both children and adults (i.e., that it would result in collateral censorship):
The Court is indeed concerned with the potentially vast chilling effect of the CAADCA generally, and the age estimation provision specifically. The State argues that the CAADCA does not prevent any specific content from being displayed to a consumer, even if the consumer is a minor; it only prohibits a business from profiling a minor and using that information to provide targeted content. See, e.g., Opp’n 16. Yet the State does not deny that the end goal of the CAADCA is to reduce the amount of harmful content displayed to children…
Putting aside for the moment the issue of whether the government may shield children from such content—and the Court does not question that the content is in fact harmful—the Court here focuses on the logical conclusion that data and privacy protections intended to shield children from harmful content, if applied to adults, will also shield adults from that same content. That is, if a business chooses not to estimate age but instead to apply broad privacy and data protections to all consumers, it appears that the inevitable effect will be to impermissibly “reduce the adult population … to reading only what is fit for children.” Butler v. Michigan, 352 U.S. 380, 381, 383 (1957). And because such an effect would likely be, at the very least, a “substantially excessive” means of achieving greater data and privacy protections for children, see Hunt, 638 F.3d at 717 (citation omitted), NetChoice is likely to succeed in showing that the provision’s clause applying the same process to all users fails commercial speech scrutiny.
For these reasons, even accepting the increasing of children’s data and privacy protections as a substantial governmental interest, the Court finds that the State has failed to satisfy its burden to justify the age estimation provision as directly advancing the State’s substantial interest in protecting the physical, mental, and emotional health and well-being of minors, so that NetChoice is likely to succeed in arguing that the provision fails commercial speech scrutiny. (Slip Op. at 23, 24).
Moreover, even the requirement that children’s default privacy settings be set to a high level fails, chiefly because it is unclear how the provision applies at all:
The instant provision, however, does not make clear whether it applies only to privacy settings on accounts created by children—which is the harm discussed in the State’s materials, see, e.g., Radesky Decl. ¶ 59—or if it applies, for example, to any child visitor of an online website run by a covered business. (Slip Op. at 25).
The court also rejected the transparency provision requiring that privacy policies, terms of service, and community standards be written in clear language suited to children:
Nothing in the State’s materials indicates that the policy language provision would materially alleviate a harm to minors caused by current privacy policy language, let alone by the terms of service and community standards that the provision also encompasses. (Slip Op. at 26).
And the requirement that platforms enforce those published policies fares no better:
The lack of any attempt at tailoring the proposed solution to a specific harm suggests that the State here seeks to force covered businesses to exercise their editorial judgment in permitting or prohibiting content that may, for instance, violate a company’s published community standards… Lastly, the Court is not persuaded by the State’s argument that the provision is necessary because there is currently “no law holding online businesses accountable for enforcing their own policies,” Def.’s Suppl. Br. 5, as the State itself cites to a Ninth Circuit case permitting a lawsuit to proceed where the plaintiff brought a breach of contract suit against an online platform for failure to adhere to its terms. (Slip Op. at 27).
The court also found that the bar on using “the personal information of any child in a way that the business knows, or has reason to know, is materially detrimental to the physical health, mental health, or well-being of a child” is both vaguely defined and likely to have the consequence of locking children out of protected speech:
The CAADCA does not define what uses of information may be considered “materially detrimental” to a child’s well-being, and it defines a “child” as a consumer under 18 years of age. See CAADCA § 30. Although there may be some uses of personal information that are objectively detrimental to children of any age, the CAADCA appears generally to contemplate a sliding scale of potential harms to children as they age… NetChoice has provided evidence that covered businesses might well bar all children from accessing their online services rather than undergo the burden of determining exactly what can be done with the personal information of each consumer under the age of 18. See, e.g., NYT Am. Br. 5–6 (asserting CAADCA requirements that covered businesses consider various potential harms to children would make it “almost certain that news organizations and others will take steps to prevent those under the age of 18 from accessing online news content, features, or services”). (Slip Op. at 27, 28).
The court then considered the provision that turns off profiling of children under age 18 by default, allowing it only in certain circumstances. Accepting, for the sake of argument, that there is a “concrete harm” in profiling that surfaces harmful content (and thus setting aside, to some degree, whether that content is protected), the court nonetheless found the provision insufficiently tailored to survive even this low level of scrutiny:
NetChoice has provided evidence indicating that profiling and subsequent targeted content can be beneficial to minors, particularly those in vulnerable populations. For example, LGBTQ+ youth—especially those in more hostile environments who turn to the internet for community and information—may have a more difficult time finding resources regarding their personal health, gender identity, and sexual orientation. See Amicus Curiae Br. of Chamber of Progress, IP Justice, & LGBT Tech Inst. (“LGBT Tech Am. Br.”), ECF 42-1, at 12–13. Pregnant teenagers are another group of children who may benefit greatly from access to reproductive health information. Id. at 14–15. Even aside from these more vulnerable groups, the internet may provide children— like any other consumer—with information that may lead to fulfilling new interests that the consumer may not have otherwise thought to search out. The provision at issue appears likely to discard these beneficial aspects of targeted information along with harmful content such as smoking, gambling, alcohol, or extreme weight loss. (Slip Op. at 29-30).
Thus, it burdens more speech (in this case, beneficial speech) than necessary. The court gave the same reason for rejecting the provision banning the default collection or sharing of data with advertisers or other content providers. Even the ban on “unauthorized use” of personal information fails, as it would result in less beneficial or neutral content being provided to children, which, as the court put it, “throws the baby out with the bathwater.” (Slip Op. at 31). (Called it).
Finally, the court considered “dark patterns,” i.e., “design features that ‘nudge’ individuals into making certain decisions, such as spending more time on an application.” (Slip Op. at 32). The court was troubled by the provision’s vagueness, as well as by its likelihood of burdening more beneficial speech than necessary:
The Court is troubled by the “has reason to know” language in the Act, given the lack of objective standard regarding what content is materially detrimental to a child’s well-being… And some content that might be considered harmful to one child may be neutral at worst to another. NetChoice has provided evidence that in the face of such uncertainties about the statute’s requirements, the statute may cause covered businesses to deny children access to their platforms or content. (Slip Op. at 34).
In sum, even under the relatively lax commercial-speech standard of review, the AADC fails. If the case reaches the merits and a higher standard of review (such as strict scrutiny) applies, it will be all but impossible for the AADC to survive. It’s only going to get harder for the government.
KOSA’s Provisions Would Fail for the Same Reasons
A comparison of the AADC with KOSA is instructive. KOSA’s central duty-of-care provision states:
(a) Prevention of harm to minors
A covered platform shall act in the best interests of a user that the platform knows or reasonably should know is a minor by taking reasonable measures in its design and operation of products and services to prevent and mitigate the following:
(1) Consistent with evidence-informed medical information, the following mental health disorders: anxiety, depression, eating disorders, substance use disorders, and suicidal behaviors.
(2) Patterns of use that indicate or encourage addiction-like behaviors.
(3) Physical violence, online bullying, and harassment of the minor.
(4) Sexual exploitation and abuse.
(5) Promotion and marketing of narcotic drugs (as defined in section 102 of the Controlled Substances Act (21 U.S.C. 802)), tobacco products, gambling, or alcohol.
(6) Predatory, unfair, or deceptive marketing practices, or other financial harms.
(b) Limitation
Nothing in subsection (a) shall be construed to require a covered platform to prevent or preclude—
(1) any minor from deliberately and independently searching for, or specifically requesting, content; or
(2) the covered platform or individuals on the platform from providing resources for the prevention or mitigation of suicidal behaviors, substance use, and other harms, including evidence-informed information and clinical resources.
While the most recent text of KOSA does attempt to address some of the First Amendment concerns raised about the original version, clear problems remain once Bonta’s analysis is applied.
As I’ve noted previously on several occasions, KOSA’s duty of care requires platforms to act in the best interests of minors by protecting them against certain product designs, but many of those designs are, in fact, protected speech or protected communication decisions by platforms, either all or some of the time. And even where the targeted speech or conduct is unprotected, the duty could produce collateral censorship because of its vagueness or overbreadth.
This is consistent with Bonta, it turns out, which found that, even under the lax commercial-speech standard, such a duty of care will not survive if it unduly burdens beneficial speech. To be fair, the limitation in subsection (b) does a better job than the AADC of preserving minors’ ability to seek out beneficial content. It may therefore be possible to save parts of (a)(1) and (a)(2), if judged under a lax commercial-speech standard.
But (a)(1) will still likely raise vagueness concerns in light of cases like Høeg v. Newsom. Moreover, it seems impossible that (a)(1) and (a)(3) are not content-based, as online bullying and harassment are clearly speech under the Supreme Court’s Counterman decision. As a result, a duty of care, as written, to mitigate or prevent such content will likely be unconstitutional for lack of a proper mens rea requirement. Finally, (a)(2) is arguably protected by the “right to editorial control” over how to present information, as laid out in Tornillo, which would subject that provision to strict scrutiny.
Bonta also highlights many parts of KOSA that I had never even considered before as ripe for First Amendment challenge.
For instance, Section 4(b)(4) of KOSA requires default privacy and data protections set at the highest level, a provision almost identical to the AADC’s. Other provisions with direct analogues that failed commercial-speech scrutiny in Bonta include (b)(3) and (c), Section 5(a), Section 6’s notice and reporting requirements, and even the “dark patterns” prohibition in (e)(2). Each could easily be viewed by courts as regulating protected speech, or protected editorial discretion in the presentation of speech, and would then surely fail under strict scrutiny.
Finally, Section 13(b)’s disclaimer of any age-verification requirement not only could undermine the bill’s effectiveness, but the requirement that remains comes awfully close to the age-estimation provision rejected in Bonta.
In sum, KOSA faces much the same problem I identified when reviewing age-verification and verifiable-parental-consent laws: “that these laws will not survive any First Amendment scrutiny at all.”
ICLE’s Duty of Care, Products Liability, and the First Amendment
During a recent webinar organized by the Federalist Society’s Regulatory Transparency Project, I answered a question about market failure by noting that one might be associated with the negative externalities that social-media use imposes on children and parents. But as the late Nobel laureate Ronald Coase and the field of public-choice economics teach us, that is not the end of the analysis.
First, understanding transaction costs and the institutional environments that define the rules of the game is crucial to determining who should bear the costs of avoiding social-media harms in particular cases. It may, in fact, sometimes be the social-media companies that should bear those costs under some theory of intermediary liability. But we also need to be mindful of the threat of government failure in implementing such a duty of care, as well as the resulting threat of collateral censorship.
This brings me to our ICLE proposal, drawn from our paper “Who Moderates the Moderators?: A Law & Economics Approach to Holding Online Platforms Accountable Without Destroying the Internet.” We propose a duty of care that differs substantially from those in the AADC or KOSA. Our proposal focuses exclusively on imposing intermediary liability when online platforms are the least-cost avoiders of harm, which would arise from their unique ability to monitor and control harmful conduct or unprotected speech.
Specifically, we propose:
- Section 230(c)(1) should not preclude intermediary liability when an online service provider fails to take reasonable care to prevent non-speech-related tortious or illegal conduct by its users.
- As an exception to the general reasonableness rule above, Section 230(c)(1) should preclude intermediary liability for communication torts arising out of user-generated content unless an online service provider fails to remove content it knew or should have known was defamatory.
- Section 230(c)(2) should provide a safe harbor from liability when an online service provider does take reasonable steps to moderate unlawful conduct. In this way, an online service provider would not be held liable simply for having let harmful content slip through, despite its reasonable efforts.
- The act of moderation should not give rise to a presumption of knowledge. Taking down content may indicate an online service provider knows it is unlawful, but it does not establish that the online service provider should necessarily be liable for a failure to remove it anywhere the same or similar content arises.
- But Section 230 should contemplate “red-flag” knowledge, such that a failure to remove content will not be deemed reasonable if an online service provider knows or should have known that it is illegal or tortious. Because the Internet creates exceptional opportunities for the rapid spread of harmful content, a reasonableness obligation that applies only ex ante may be insufficient. Rather, it may be necessary to impose certain ex post requirements for harmful content that was reasonably permitted in the first instance, but that should nevertheless be removed given sufficient notice.
Our duty-of-care proposal would not create a new federal cause of action against online platforms. Instead, it would allow the regular rules of intermediary liability to apply. A plaintiff could, for instance, bring a product-liability claim, much like those in a relatively recent complaint filed against the major social-media companies in California over product-design choices related to how information is presented and how those choices allegedly contributed to minor users’ mental-health problems.
Now, there could still be a valid concern about collateral censorship, which could support a First Amendment challenge if the underlying cause of action burdens more speech than necessary. But there are reasons to think that a pure product-liability claim targeting how content is presented could, in theory, survive such a challenge, even in light of the right to editorial discretion.
I do think it will be a delicate balance for the courts to strike. Ultimately, however, balancing product liability against First Amendment interests is likely a better approach than settling the issue through Section 230 immunity alone.
Snapchat, Content Neutrality, Product Liability, and the Right to Editorial Discretion
Under the 9th U.S. Circuit Court of Appeals’ Snapchat decision (Lemmon v. Snap), such a product-liability suit was allowed to proceed notwithstanding Section 230 immunity. That means that, at least in that circuit’s courts, our proposal is unnecessary. But there remains important First Amendment analysis to be done.
A particularly thorny issue is whether the right to editorial discretion under Tornillo and related cases means that a product-liability suit against communications platforms can get off the ground at all. If determinations about how information is presented receive the strongest possible First Amendment protection, even an otherwise content-neutral product-liability complaint could be subject to strict scrutiny. If, on the other hand, a weaker version of editorial discretion applies, a product-liability suit would likely be considered under intermediate scrutiny. The answer will probably turn on the tricky distinction between speech and conduct in First Amendment law, which I outline further below.
At a certain level of abstraction, because a product-liability suit would be grounded in how information is presented, not in what is being said, it would be content-neutral, and thus not subject to strict scrutiny. Instead, it would be reviewed under intermediate scrutiny, under which a regulation is constitutional if (1) it furthers an important or substantial government interest; (2) that interest is unrelated to the suppression of free expression; and (3) the incidental restriction on speech is no greater than necessary to further that interest.
Even under that standard, it is unclear how the analysis would come out. It is possible that, just as in Bonta and NetChoice v. Griffin, a court would find that the product-liability suit burdens more speech than necessary, even under the lower standard.
First, a product-liability suit would further a substantial interest because it is designed to force an entity to internalize a negative externality—in this case, product-design features that are harmful to children. It would, at least on its face, be unrelated to the suppression of free expression, although it could be argued that the underlying speech is the problem. The best argument would be that this concerns the social-media platforms’ conduct, not their speech. But that would be the initial matter a court would have to decide.
Second, whether a product-liability suit would burden more speech than necessary would likely depend on the specific product design in question. For instance, an allegation that allowing direct messaging results in online bullying may be more speech-related than a platform’s algorithmic recommendations. Even within algorithmic recommendations, important distinctions may be made between “neutral” algorithms whose recommendations are based on a user’s stated interests and algorithms that offer harmful content that was completely undesired. What specific allegations are made could make all the difference in deciding whether an incidental restriction on speech or expressive conduct goes too far.
This will not be an easy call, but general guidelines from prior cases and background law would apply. The seminal case may be Winter v. G.P. Putnam’s Sons, a pre-internet decision in which the plaintiffs became seriously ill after eating mushrooms in reliance on an encyclopedia entry. There, the 9th Circuit stated:
A book containing Shakespeare’s sonnets consists of two parts, the material and print therein, and the ideas and expression thereof. The first may be a product, but the second is not. The latter, were Shakespeare alive, would be governed by copyright laws; the laws of libel, to the extent consistent with the First Amendment; and the laws of misrepresentation, negligent misrepresentation, negligence, and mistake. These doctrines applicable to the second part are aimed at the delicate issues that arise with respect to intangibles such as ideas and expression. Products liability law is geared to the tangible world.
The language of products liability law reflects its focus on tangible items. In describing the scope of products liability law, the Restatement (Second) of Torts lists examples of items that are covered. All of these are tangible items, such as tires, automobiles, and insecticides. The American Law Institute clearly was concerned with including all physical items but gave no indication that the doctrine should be expanded beyond that area.
The purposes served by products liability law also are focused on the tangible world and do not take into consideration the unique characteristics of ideas and expression. Under products liability law, strict liability is imposed on the theory that “[t]he costs of damaging events due to defectively dangerous products can best be borne by the enterprisers who make and sell these products.” Prosser & Keeton on The Law of Torts, Sec. 98, at 692-93 (W. Keeton ed. 5th ed. 1984). Strict liability principles have been adopted to further the “cause of accident prevention … [by] the elimination of the necessity of proving negligence.” Id. at 693. Additionally, because of the difficulty of establishing fault or negligence in products liability cases, strict liability is the appropriate legal theory to hold manufacturers liable for defective products. Id. Thus, the seller is subject to liability “even though he has exercised all possible care in the preparation and sale of the product.” Restatement Sec. 402A comment a. It is not a question of fault but simply a determination of how society wishes to assess certain costs that arise from the creation and distribution of products in a complex technological society in which the consumer thereof is unable to protect himself against certain product defects.
Although there is always some appeal to the involuntary spreading of costs of injuries in any area, the costs in any comprehensive cost/benefit analysis would be quite different were strict liability concepts applied to words and ideas. We place a high priority on the unfettered exchange of ideas. We accept the risk that words and ideas have wings we cannot clip and which carry them we know not where. The threat of liability without fault (financial responsibility for our words and ideas in the absence of fault or a special undertaking or responsibility) could seriously inhibit those who wish to share thoughts and theories. As a New York court commented, with the specter of strict liability, “[w]ould any author wish to be exposed … for writing on a topic which might result in physical injury? e.g. How to cut trees; How to keep bees?” Walter v. Bauer, 109 Misc.2d 189, 191, 439 N.Y.S.2d 821, 823 (Sup.Ct.1981) (student injured doing science project described in textbook; court held that the book was not a product for purposes of products liability law), aff’d in part & rev’d in part on other grounds, 88 A.D.2d 787, 451 N.Y.S.2d 533 (1982). One might add: “Would anyone undertake to guide by ideas expressed in words either a discrete group, a nation, or humanity in general?”
Strict liability principles even when applied to products are not without their costs. Innovation may be inhibited. We tolerate these losses. They are much less disturbing than the prospect that we might be deprived of the latest ideas and theories…
Plaintiffs’ argument is stronger when they assert that The Encyclopedia of Mushrooms should be analogized to aeronautical charts. Several jurisdictions have held that charts which graphically depict geographic features or instrument approach information for airplanes are “products” for the purpose of products liability law… Plaintiffs suggest that The Encyclopedia of Mushrooms can be compared to aeronautical charts because both items contain representations of natural features and both are intended to be used while engaging in a hazardous activity. We are not persuaded.
Aeronautical charts are highly technical tools. They are graphic depictions of technical, mechanical data. The best analogy to an aeronautical chart is a compass. Both may be used to guide an individual who is engaged in an activity requiring certain knowledge of natural features. Computer software that fails to yield the result for which it was designed may be another. In contrast, The Encyclopedia of Mushrooms is like a book on how to use a compass or an aeronautical chart. The chart itself is like a physical “product” while the “How to Use” book is pure thought and expression.
Given these considerations, we decline to expand products liability law to embrace the ideas and expression in a book. We know of no court that has chosen the path to which the plaintiffs point… Guided by the First Amendment and the values embodied therein, we decline to extend liability under this theory to the ideas and expression contained in a book.
As in Winter, courts will need to decide whether an algorithm is more like a tangible product or like expression protected by the First Amendment. The question may turn on whether the algorithm simply points the way, like a compass or an aeronautical chart, or instead tries to influence users to discover new information they would not otherwise have sought out.
But a further difficulty is that the First Amendment may apply more strongly the more customized the algorithm is. As I argued in response to Tim Wu’s “Machine Speech” article nearly a decade ago:
Of course, Wu admits the actual application of his test online can be difficult. In his law review article he deals with some easy cases, like the obvious application of the First Amendment to blog posts, tweets, and video games, and non-application to Google Maps. Of course, harder cases are the main target of his article: search engines, automated concierges, and other algorithm-based services. At the very end of his law review article, Wu finally states how to differentiate between protected speech and non-speech in such cases:
The rule of thumb is this: the more the concierge merely tells the user about himself, the more like a tool and less like protected speech the program is. The more the programmer puts in place his opinion, and tries to influence the user, the more likely there will be First Amendment coverage. These are the kinds of considerations that ultimately should drive every algorithmic output case that courts could encounter.
Unfortunately for Wu, this test would lead to results counterproductive to his goals.
Applying this rationale to Google, for instance, would lead to the perverse conclusion that the more the allegations against the company about tinkering with its algorithm to disadvantage competitors are true, the more likely Google would receive First Amendment protection. And if Net Neutrality advocates are right that ISPs are restricting consumer access to content, then the analogy to the newspaper in Tornillo becomes a good one–ISPs have a right to exercise editorial discretion and mandating speech would be unconstitutional. The application of Wu’s test to search engines and ISPs effectively puts them in a “use it or lose it” position with their First Amendment rights that courts have rejected. The idea that antitrust and FCC regulations can apply without First Amendment scrutiny only if search engines and ISPs are not doing anything requiring antitrust or FCC scrutiny is counterproductive to sound public policy–and presumably, the regulatory goals Wu holds.
Wu has moved on from Lochner-baiting to comparing Bonta to Hammer v. Dagenhart, but his fundamental problem with First Amendment law, as it has actually developed, hasn’t changed.
As in Tornillo and other compelled-speech cases, the issue turns to some degree on how closely the newspaper, parade, public-access channel, or university is identified with the speech at issue. Here, social-media companies probably are not trying to associate themselves with harmful (even if technically legal) speech; we know this from their assertion of Section 230 immunity in such cases. It will thus be very interesting to see what happens in the California social-media case when the First Amendment defense is inevitably raised.
In sum, a targeted Section 230 reform like the one we at ICLE have proposed would clarify that the statute does not prevent a Snapchat-type lawsuit from proceeding, and would simply allow the First Amendment analysis of a product-liability suit to run its course. The balance between accountability and avoiding collateral censorship will ultimately need to be hashed out by the courts.
Conclusion
Bonta likely means that KOSA is as doomed as the AADC under the First Amendment. But a targeted duty of care like our ICLE proposal could allow courts, through product-liability suits, to strike the proper balance between protecting speech and protecting children online.