Age-Verification Mandates: Constitutional Concerns and Policy Pitfalls

In a recent post, my International Center for Law & Economics (ICLE) colleague Ben Sperry explored the First Amendment implications of Rep. John James’ (R-Mich.) proposal to mandate that app stores verify users’ ages and obtain parental consent for users who are minors. While that analysis provided a solid foundation for understanding the constitutional issues at play, it’s crucial to delve deeper into why such laws are almost certainly not the least-restrictive means to protect children online. Further, these constitutional infirmities demonstrate why the proposal is likely to be counterproductive public policy, even if it could survive judicial scrutiny.

Age-verification and parental-consent mandates not only raise significant First Amendment concerns, but also present a host of practical challenges that could undermine their effectiveness and potentially create new risks for users of all ages.

These Efforts Are Almost Certainly Unconstitutional

It’s worth reiterating Ben’s points on the constitutional infirmity of Rep. James’ proposal, as the First Amendment implications are significant. As Ben noted, laws that restrict access to protected speech—even if those restrictions apply only to minors—must pass strict scrutiny under the First Amendment. This requires that a law be narrowly tailored to serve a compelling government interest and that it use the least-restrictive means to achieve that interest.

While protecting minors from online harms is undoubtedly a compelling government interest, Rep. James’ proposed amendment to H.R. 7890, the Children and Teens’ Online Privacy Protection Act—which he withdrew before the bill’s Sept. 18 passage by the U.S. House Energy and Commerce Committee—would likely fail to meet the least-restrictive-means test.

First, and most obviously, the existence of robust, widely available parental-control tools undermines the proposal’s necessity. These tools, already implemented by major platforms and device manufacturers, allow parents to monitor and control their children’s online activities without imposing blanket restrictions on protected speech.

Moreover, the bill’s approach of mandating age verification and parental consent at the app-store level is overly broad. As the U.S. Supreme Court articulated in its 2011 Brown v. Entertainment Merchants Association ruling: “Such laws do not enforce parental authority over children’s speech and religion; they impose governmental authority, subject only to a parental veto.”

But as Ben noted, this veto would need to operate against some set of defaults that are meant to insulate providers from legal liability—a reality that is more likely to annoy parents than aid them, as judged against the status quo. This kind of proposal is flawed at a fundamental level, insofar as it is rooted in the idea of comparing app providers to bars and taverns, a comparison that courts have explicitly rejected. Just as it would be unconstitutional to ban minors from entering a shopping mall on the grounds that one of the mall’s stores sells alcohol, it is similarly problematic to restrict access to entire digital platforms or app stores due to the presence of some potentially inappropriate content.

The First Amendment concerns raised by proposals like the James amendment are also not limited to app stores. As several courts have already found, they extend to similar mandates imposed on individual websites and services, as well. The core constitutional issues remain the same: such mandates risk being overly broad, imposing government restrictions on speech with (at best) only a parental veto, rather than enhancing parental choice. After all, the Supreme Court’s 1997 Reno v. ACLU decision, which struck down most of the Communications Decency Act of 1996, already stands for skepticism of exactly these sorts of restrictions. 

In short, it is highly likely that a court applying First Amendment scrutiny would find the James amendment to be far from the least-restrictive means to protect minors online. The existence of more targeted, flexible, and parent-controlled alternatives renders the proposed sweeping restrictions on app stores constitutionally suspect.

Beyond the Law: Policy Problems with Age-Verification Mandates

Beyond the legal analysis, these sorts of mandates, whether imposed on app stores or other online services, present a whole host of other problems. Indeed, it’s these problems that best illustrate the intuition shared by the courts that have found such provisions are not the least-restrictive means to achieve the goal of protecting children online.

Technically speaking, it’s not even clear that providers actually have the data about their users’ ages that policymakers appear to imagine they do. Thus, without a clearer understanding of the current state of technological feasibility, it may not be possible for platforms to comply with such laws. Indeed, one of the most significant concerns about age-verification mandates is their potential to abet privacy breaches and magnify data-security risks. To implement effective age verification, platforms and app stores would need to collect and store a trove of users’ sensitive personal information. This creates new vectors for data breaches and other potential misuses of that information.

For example, if an app store were required to verify the age of every user, it might need to collect government-issued IDs or other forms of identification. This sensitive data, if compromised, could lead to identity theft or fraud. Similarly, if individual websites had to verify ages, users would be forced to share personal information with numerous entities, further increasing the risk of data breaches. Even a well-meaning, well-resourced provider would face significant risk of exposure; after all, no security measure is perfect. It’s not at all clear that whatever marginal benefit comes from heightening the obligations of providers, who already offer a large suite of tools that enable parents to monitor their children’s online activity, justifies expanding the attack surface for leaks of personal information.

Age-verification mandates also don’t affect only minors; they have significant implications for adult users, as well. These sorts of mandates could effectively force all adults to prove their age to access content, even if that content isn’t age-restricted. Consider a professional who uses their smartphone primarily for work-related apps and communication. Under a strict age-verification regime for app stores, this user might need to demonstrate that he or she is not a minor and furnish unnecessary personal information just to download productivity apps or access basic services. This would create undue burdens on adult users and potentially discourage the use of beneficial online services.

Further, in the real world, there is little to be gained from imposing this sort of broad legal liability on providers. Indeed, age-verification systems are often easily circumvented, especially by tech-savvy youth. For instance, if age verification is implemented at the app-store level, a minor could simply access the same services through a web browser, bypassing the restrictions entirely. Similarly, at the website or service level, determined minors could use virtual private networks (VPNs); create accounts with false birth dates; or borrow login credentials from older friends or siblings, rendering the age-verification measures largely ineffective. 

Moreover, in households with shared devices, age verification becomes particularly problematic. A tablet used by both parents and children, for example, would either need to restrict access for everyone based on the age of the youngest user, or require cumbersome switching among user profiles for each app launch.

Even the more advanced age-verification technologies are far from perfect. Facial-analysis software, for example, has been shown to have varying accuracy rates, particularly for certain demographic groups. Implementing these imperfect technologies on a broad scale could lead to false positives (incorrectly identifying adults as minors) and false negatives (failing to identify minors), neither of which serves the intended purpose of protecting children online.

Misplaced Responsibility and Vast Legal Liability

Age-verification mandates typically shift responsibility away from content creators and onto platforms or app stores. But given the incredible volume and variety of content available online, which platforms can realistically assess only at a fairly crude level of resolution, imposing per-se legal obligations on app stores or platforms for the speech of third parties makes little sense. Section 230 exists for a reason; it was very clear from the beginning that legal liability for the speech of third parties could become destructive to the internet ecosystem.

To be sure, there is room for marginal changes to this regime, but any such changes need to be subject to careful consideration and gradual, stepwise implementation, lest we lose all the benefits that the internet has fostered. Rep. James’ proposal and others like it stand to do more harm than good, relative to the status quo.

Paradoxically, strict age-verification mandates could actually reduce parental control and flexibility. Many parents prefer to have granular control over their children’s online activities, allowing more access as their children mature. But under the James amendment, for instance, app stores would need to obtain verifiable parental consent for every app download and purchase. This one-size-fits-all approach doesn’t account for varying parental preferences or the varying maturity levels of children of the same age.

The proposed penalties for noncompliance are also often severe and disproportionate. James’ proposal, for example, would assess specific civil penalties of $500 per user for knowing misstatements of content, $250 per user for negligent misstatements of content, and $1,000 per user for failure to verify parental consent, up to a total cap of $4 billion per violation. It would also empower the Federal Trade Commission (FTC) to seek general civil penalties of up to $2 billion per violation. This is orders of magnitude larger than current FTC penalties for intentional violations of children’s privacy laws.
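To get a sense of how quickly those per-user figures compound, consider a rough, purely illustrative sketch. The user count below is a hypothetical assumption chosen only for the sake of arithmetic, not a figure drawn from the bill, the FTC, or any actual platform.

```python
# Purely illustrative back-of-the-envelope calculation of potential exposure
# under the per-user penalty figures described above. The number of affected
# users is a hypothetical assumption, not data from the bill or any platform.

PENALTY_PER_UNVERIFIED_CONSENT = 1_000        # $1,000 per user (failure to verify parental consent)
SPECIFIC_PENALTY_CAP = 4_000_000_000          # $4 billion cap per violation

assumed_affected_minor_accounts = 10_000_000  # hypothetical: 10 million minor accounts

uncapped_exposure = assumed_affected_minor_accounts * PENALTY_PER_UNVERIFIED_CONSENT
capped_exposure = min(uncapped_exposure, SPECIFIC_PENALTY_CAP)

print(f"Uncapped exposure: ${uncapped_exposure:,}")  # $10,000,000,000
print(f"Capped exposure:   ${capped_exposure:,}")    # $4,000,000,000
```

Even with these made-up numbers, a single class of compliance failure touching a modest slice of a large app store’s user base would hit the statutory cap, which is what motivates the “orders of magnitude” comparison above.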

Such extreme penalties could lead to overly cautious behavior by platforms that results in unacceptable levels of collateral censorship—i.e., limiting access to beneficial services and content out of fear of accidental noncompliance.

Conclusion

In light of the numerous legal and practical issues surrounding age-verification mandates, it’s clear that such proposals are both constitutionally suspect and ill-advised as public policy. The crux of the matter lies in the abundance of existing tools already available to parents. From device-level controls to home network-management solutions, parents have access to a wide array of flexible, customizable options to monitor and guide their children’s online activities. These tools, which don’t infringe on First Amendment rights or create new privacy risks, represent the kinds of “less-restrictive means” that courts look for when applying strict scrutiny to content-based restrictions.

The existence of these plentiful and effective alternatives underscores why age-verification mandates are unlikely to survive constitutional scrutiny. Moreover, the potential for unintended consequences—including privacy breaches, technological limitations, and reduced parental flexibility—demonstrates why these proposals are fundamentally flawed as public policy. Rather than imposing broad, one-size-fits-all restrictions that burden both providers and users, policymakers should focus on educating parents about existing tools and empowering them to make informed decisions about their children’s online experiences.

In the end, targeted, flexible solutions that respect constitutional rights and acknowledge the complex realities of the digital age are far more likely to achieve the goal of protecting children online without compromising the open nature of the internet.