Kids and Online Safety: An International End-of-Year Review

As the year comes to a close, it is worth reviewing how governments around the world—including at both the state and federal levels in the United States—have approached online regulation to protect minors. I will review several of the major legislative and regulatory initiatives, with brief commentary on the tradeoffs involved in each approach.

Age Gating 

Australia recently became the first country to pass a law that would ban all users under the age of 16 from social-media platforms. The Online Safety Amendment requires that “[a] provider of an age-restricted social media platform must take reasonable steps to prevent age-restricted users having accounts with the age-restricted social media platform.” 

The legislation’s text leaves it somewhat unclear what this requires and to whom it would apply. Communications Minister Michelle Rowland suggested that Snapchat, TikTok, X, Instagram, Reddit, and Facebook are likely to be part of the ban, but that YouTube will not be included because of its “significant” educational purpose. 

As to what is required, the “reasonable steps” provision will entail some form of age-verification technology that effectively allows social-media platforms to restrict users under 16. But the legislation bans the collection of government-issued identification for purposes of age verification. Further regulations will hopefully clarify this before the law comes into effect.

Aside from its lack of clarity, Australia’s approach will likely impose considerable costs on social-media users, both adult and minor. Social-media platforms will have to find some way to restrict users under age 16, which necessarily means collecting data on the ages of all users. For privacy-sensitive users age 16 and over who don’t want such data collected, this could be a dealbreaker for using social media.

But the costs for those under 16 are even clearer: they will no longer be able to access many social-media platforms at all. While there are clearly costs to using social media, it seems likely that, for many users under the age of 16, there are also corresponding benefits that outweigh those costs (at least, from their point of view), or they wouldn’t be using those platforms.

Note that this law would apply even where parents consent to, or don’t care about, their under-16 children’s use of social media. And it forestalls parents’ use of other means and technologies to monitor or restrict their young teenagers as they see fit, without imposing an outright ban on their social-media usage.

Age Verification/Parental Consent

Many countries around the world use a combination of age verification and parental consent to attempt to reduce harms associated with social-media usage. 

In the EU, for instance, Germany requires parental consent for minors between the ages of 13 and 16 to create accounts, while France requires parental consent for minors under 15. Norway has proposed a similar law that would require parental consent for users under 15. Italy requires parental consent for children under 14, and Belgium requires children to be at least 13 to create a social-media account without parental permission.

Here in the United States, there is no federal rule setting the minimum age to create a social-media profile, but the de facto standard is age 13. This is due to the Children’s Online Privacy Protection Act (COPPA), which requires parental consent for the collection of personal information from users under that age. There have, however, been many state-level attempts to create laws that would require parental consent for minor users, although many of them have been held up by First Amendment challenges. At the federal level, Rep. John James (R-Mich.) has proposed a bill (H.R. 10364) that would require age verification and parental consent at the app-store level.

As I’ve written extensively, the combination of age-verification and parental-consent mandates will likely lead to the exclusion of minors from online spaces. Because of the transaction costs of obtaining verifiable parental consent, this is a more roundabout route to the same effect as an outright ban. Studies on the effects of COPPA, for instance, have noted that:

Because obtaining verifiable parental consent for free online services is difficult and rarely cost justified, COPPA acts as a de facto ban on the collection of personal information—and hence personalized advertising—by providers of free child-directed content.

For social media, this would result in collateral censorship, as minors would no longer have access to speech protected under the First Amendment (i.e., material that is not obscene as to children). As I’ve argued before, this is just as true if age verification and parental consent are required at the app-store level.

A less-restrictive alternative to age verification and parental consent is to promote technical and practical means of avoiding online harms. As noted by one federal court that considered Arkansas’ parental-consent law for social-media users under age 18:

[P]arents may rightly decide to regulate their children’s use of social media—including restricting the amount of time they spend on it, the content they may access, or even those they chat with. And many tools exist to help parents with this.

These tools include options available at the ISP, router, device, and browser levels, as well as those provided by social-media platforms themselves. In light of the costs associated with age verification and parental consent, parents and teens appear to be the lower-cost avoiders.

Duty of Care

Another approach is to apply a duty of care to online platforms requiring them to take steps to prevent or mitigate harms to their minor users. 

The UK passed the Online Safety Act 2023, which applies a special duty of care requiring regulated platforms to protect minors not only against illegal content, but also against “harmful or age-inappropriate” content and behavior. This is somewhat mirrored in the United States by provisions of California’s Age-Appropriate Design Code (AADC) and, at the federal level, by the recently proposed Kids Online Safety and Privacy Act (KOSPA). These all attempt—to varying degrees of specificity—to apply a duty of care requiring covered online platforms to create and enforce plans to prevent and mitigate harms to minor users.

Aside from the privacy and data-collection concerns associated with age-verification processes, these laws also greatly increase the cost to online platforms of hosting minors. As the costs of hosting minors increase, there will likely come a point where they exceed the expected revenue from serving those users. The likely result is the exclusion of minors altogether. As an amicus brief from the New York Times and the Student Press Law Center in NetChoice v. Bonta put it:

…in the face of [the AADC’s] requirements, it is almost certain that news organizations and others will take steps to prevent those under the age of 18 from accessing online news content, features, or services.

This is harmful both to online platforms and to the minor users who lose access to protected online speech. As a result, one district court and the 9th U.S. Circuit Court of Appeals have found that the applicable provisions of California’s AADC likely violate the First Amendment. As I argued previously, this means KOSA (now KOSPA) would likely fail under the First Amendment, as well.

The differences between KOSPA’s and KOSA’s duty of care are minimal. While KOSPA’s language is tighter, it would still require covered platforms to guess what the act’s enforcers believe constitutes “reasonable” care to prevent and mitigate specific reasonably foreseeable harms (some of which remain a bit vague). Despite attempts to limit government officials’ ability to enforce the law in a viewpoint-discriminatory manner, KOSPA arguably gives both federal and state officials more room to jawbone online platforms into restricting speech that is potentially harmful to children. The threat of enforcement means covered online platforms will almost certainly err on the side of restricting minors’ access.

Restrictions on Targeted Advertising

Another way that many jurisdictions around the world, as well as some U.S. policymakers, have sought to protect minors online is through restrictions or bans on targeted advertising.

Under the General Data Protection Regulation (GDPR), the EU requires parental consent for processing the personal data of children under age 16, though the bloc’s 27 member states can lower that limit to 13. In the United States, COPPA has likewise been interpreted to require verifiable parental consent before persistent identifiers can be used to serve targeted advertisements to users under age 13.

KOSPA would require covered platforms to obtain verifiable consent for teens over age 13 and under age 17 before collecting data like persistent identifiers for “individual-specific advertising.” Regulatory regimes like California’s AADC also restrict online platforms’ ability to collect and use minors’ data for curation or targeted advertising.

Most online platforms are multisided: they curate speech for users on one side of the platform, while businesses on the other side pay to serve targeted advertising to those users. Data collection is vital to both practices. Targeted advertising allows online platforms to offer users access to speech at a lower price (usually free). Restrictions on targeted advertising to minors thus often end up reducing the amount of speech available to them. One study of COPPA, for example, found that significantly less children’s content was created after the Federal Trade Commission’s (FTC) enforcement action against YouTube.

As I argued recently, reducing platforms’ ability to monetize the presence of minors will lead to those users being excluded altogether. Courts could find that targeted advertising is “inextricably intertwined” with free access to speech platforms. If they do, such restrictions should be found to violate the First Amendment.

Conclusion

It’s important to remember that regulations intended to protect minors online can actually lead to their being kicked off online platforms altogether. Sometimes this is the goal, as with bans on use below a certain age. But other times, it is simply the inevitable consequence of raising the costs of serving minors and/or restricting how their presence can be monetized.

While there are clearly costs to minors that arise from their use of the internet (including social media), there are also benefits that likely outweigh those costs for the vast majority of teens. Parents are better positioned than government actors to help their kids avoid harms online. And in the United States, the First Amendment requires the least-restrictive alternative when access to lawful speech platforms is at issue.