Right to Anonymous Speech, Part 3: Anonymous Speech and Age-Verification Laws

An issue that came up during a terrific panel that I participated in last Thursday—organized by the Federalist Society’s Regulatory Transparency Project—was whether age-verification laws for social-media use infringed on a First Amendment right of either adults or minors to receive speech anonymously.

My co-panelist Clare Morell of the Ethics and Public Policy Center put together an excellent tweet thread summarizing some of her thoughts, including on the anonymous-speech angle. Another co-panelist—Shoshana Weissmann of the R Street Institute—also has a terrific series of blog posts on this particular issue.

Continuing this ongoing Truth on the Market series on anonymous speech, I wanted to respond to some of these ideas, and to argue that the primary First Amendment and public-policy concerns with age-verification laws really aren’t about anonymous speech. Instead, they are about whether such laws place the burden of avoiding harms on the least-cost avoider. Or, in the language of First Amendment jurisprudence, whether they are the least-restrictive means to achieve a particular policy end.

Below, I elaborate first on how transaction costs doom the aims of age-verification and verifiable parental-consent laws, and then consider the state of First Amendment precedent for anonymous speech as it relates to age-verification laws. While there may be something to the argument for a right to receive speech anonymously, I predict that it will not be the primary issue on which court decisions related to age-verification laws turn. It’s really going to be all about transaction costs (though the courts will call it “least-restrictive means” or “burdening more speech than necessary”).

It’s All About Transaction Costs

As I argued at the panel and will elaborate further in a forthcoming International Center for Law & Economics (ICLE) paper, the problem with age-verification and parental-consent laws is that they incorrectly place the cost of avoiding the negative externalities of potentially harmful but nonetheless protected speech on social-media platforms, instead of on parents working with their teens to make marginal decisions about social-media usage—decisions that take into account each teen’s maturity and particular needs.

As the Nobel laureate Ronald Coase taught us, in a world without transaction costs (or one where such costs were sufficiently low), age-verification and verifiable parental-consent requirements wouldn’t matter: the parties would simply negotiate among themselves. But because high transaction costs prevent such bargains from being easily struck, setting the default such that teens cannot join social media without verifiable parental consent could actually end up excluding them altogether from the great benefits that social media can generate.
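
To make the Coasean logic concrete, here is a minimal sketch in Python. The numbers are purely hypothetical and the model is deliberately crude; the point is only that consent frictions can swallow the surplus from otherwise beneficial social-media use:

```python
# A minimal sketch (hypothetical numbers, not from the post) of the Coasean
# point: a mutually beneficial outcome is reached only when the surplus from
# a teen's social-media use exceeds the transaction costs of arranging it.

def bargain_occurs(benefit_to_teen: float,
                   expected_harm: float,
                   transaction_cost: float) -> bool:
    """In a zero/low transaction-cost world, parents and teens weigh the
    benefits of social-media use against its expected harms directly.
    High transaction costs (e.g., verifiable-parental-consent hurdles)
    can swallow the surplus and block otherwise beneficial use."""
    surplus = benefit_to_teen - expected_harm
    return surplus > transaction_cost

# Zero transaction costs: any positive surplus means the teen joins.
print(bargain_occurs(benefit_to_teen=100, expected_harm=40, transaction_cost=0))   # True

# Same surplus, but consent frictions now exceed it: the teen is excluded
# even though joining would have been net-beneficial.
print(bargain_occurs(benefit_to_teen=100, expected_harm=40, transaction_cost=75))  # False
```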

There is considerable evidence that, while the internet and digital technologies have brought down transaction costs considerably on a wide range of fronts, they remain high when it comes to age verification and verifiable parental consent. One data point on this is the experience of social-media platforms under the Children’s Online Privacy Protection Act (COPPA). In their excellent working paper “COPPAcalypse? The YouTube Settlement’s Impact on Kids Content,” Garrett Johnson, Tesary Lin, James C. Cooper, & Liang Zhong summarized the issue as follows:

The Children’s Online Privacy Protection Act (COPPA), and its implementing regulations, broadly prohibit operators of online services directed at children under 13 from collecting personal information without providing notice of its data collection and use practices and obtaining verifiable parental consent. Because obtaining verifiable parental consent for free online services is difficult and rarely cost justified, COPPA essentially acts as a de facto ban on the collection of personal information by providers of free child-directed content. In 2013, the FTC amended the COPPA rules to include in the definition of personal information “persistent identifier that can be used to recognize a user over time and across different Web sites or online services,” such as a “customer number held in a cookie . . . or unique device identifier.” This regulatory change meant that, as a practical matter, online operators who provide child-directed content could no longer engage in personalized advertising. 

On September 4, 2019, the FTC entered into a consent agreement with YouTube to settle charges that it had violated COPPA. The FTC’s allegations focused on YouTube’s practice of serving personalized advertising on child-directed content at children without obtaining verifiable parental consent. Although YouTube maintains it is a general audience website and users must be at least 13 years old to obtain a Google ID (which makes personalized advertising possible), the FTC complaint alleges that YouTube knew that many of its channels were popular with children under 13, citing YouTube’s own claims to advertisers. The settlement required YouTube to identify child-directed channels and videos and to stop collecting personal information from visitors to these channels. In response, YouTube required channel owners producing [“made-for-kids”] MFK content to designate either their entire channels or specific videos as MFK, beginning on January 1, 2020. YouTube supplemented these self-designations with an automated classifier designed to identify content that was likely directed at children younger than 13. In so doing, YouTube effectively shifted liability under COPPA to the channel owners, who could face up to $42,530 in fines per video if they fail to self-designate and are not detected by YouTube’s classifier. (emphasis added).

By requiring verifiable parental consent, the rule change and settlement actually increased the transaction costs imposed on social-media platforms. YouTube’s economically rational response was to restrict content creators’ ability to benefit from the considerably more lucrative personalized advertising. The end result was less content created for children, with competitive effects to boot:

Consistent with a loss in personalized ad revenue, we find that child-directed content creators produce 13% less content and pivot towards producing non-child-directed content. On the demand side, views of child-directed channels fall by 22%. Consistent with the platform’s degraded capacity to match viewers to content, we find that content creation and content views become more concentrated among top child-directed YouTube channels.
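
The creator’s side of this calculus is easy to sketch. In the hypothetical below, the $42,530-per-video fine comes from the passage quoted above, while the revenue figures and detection probability are invented purely for illustration:

```python
# A hedged sketch of a creator's calculus after the YouTube settlement.
# The $42,530-per-video fine is cited in the quoted working paper; the
# revenue and detection-probability figures below are purely hypothetical.

FINE_PER_VIDEO = 42_530  # maximum fine per video, per the quoted study

def expected_payoff(personalized_ad_revenue: float,
                    contextual_ad_revenue: float,
                    detection_probability: float) -> dict:
    """Compare self-designating a video as made-for-kids (keeping only
    less-lucrative contextual ads) against not designating and risking
    the COPPA fine."""
    designate = contextual_ad_revenue
    gamble = personalized_ad_revenue - detection_probability * FINE_PER_VIDEO
    return {"designate": designate, "gamble": gamble}

# Even a small chance of detection makes the fine dominate plausible
# per-video ad revenue, so a rational creator self-designates -- and, at
# the margin, stops producing child-directed content at all.
print(expected_payoff(personalized_ad_revenue=500,
                      contextual_ad_revenue=250,
                      detection_probability=0.05))
# {'designate': 250, 'gamble': -1626.5}
```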

This isn’t the only data point on COPPA’s tendency to reduce the production of content for children. Morgan Reed—president of the APP Association, a global trade association for small and medium-sized technology companies—presented extensively on the subject at the Federal Trade Commission’s (FTC) COPPA workshop (which is worth reading in full). His testimony describes how obtaining verifiable parental consent does little to enhance parental control, while doing much to reduce the quality and quantity of content directed to children.

What I want to highlight is Reed’s use of terms like “friction,” “restriction,” and “cost” to describe how COPPA affects the behavior of parents, children, and social-media platforms. He contrasts general-audience content (“unfettered, meaning that you don’t feel restricted by what you can get to, how you do it. It’s easy, it’s low friction. Widely available. I can get it on any platform, in any case, in any context and I can get to it rapidly”) with COPPA-regulated apps and content, which are all about:

Friction, restriction, and cost. Every layer of friction you add alters parent behavior significantly. We jokingly refer to it as the over the shoulder factor. If a parent wants access to something and they have to pass it from the back seat to the front seat of the car more than one time, the parent moves on to the next thing. So the more friction you add to an application directed at children the less likely it is that the parent is going to take the steps necessary to get through it because the competition, of course, is as I said, free, unfettered, widely available. Restriction. Kids balk against some of the restrictions. I can’t get to this, I can’t do that. And they say that to the parent. And from the parent’s perspective, fine, I’ll just put in a different age date. They’re participating, they’re parenting but they’re not using the regulatory construction that we all understand.

The COPPA side, expensive, onerous or friction full. We have to find some way around that. Restrictive, fewer features, fewer capabilities, less known or available, and it’s entertaining-ish.

Is COPPA the barrier? I thought this quote really summed it up. “Seamlessness is expected. But with COPPA, seamlessness is impossible.” And that has been one of the single largest areas of concern. Our folks are looking to provide a COPPA compliant environment. And they’re finding doing VPC is really hard. We want to make it this way, we just walked away. And why do they want to do it? We wanted to create a hub for kids to promote creativity. So these are not folks who are looking to take data and provide interest based advertising. They’re trying to figure out how to do it so they can build an engaging product. Parental consent makes the whole process very complicated. And this is the depressing part.

We say that VPC is intentional friction. It’s clear from everything we’ve heard in the last two panels that the authors of COPPA, we don’t really want information collected on kids. So friction is intentional. And this is leading to the destruction of general audience applications basically wiping out COPPA apps off the face of the map.

Reed’s use of the word “friction” is particularly enlightening. Mike Munger of Duke University has often described transaction costs as frictions—explaining that, to consumers, all costs are transaction costs. Thus, when higher transaction costs are imposed on social-media platforms, end users ultimately feel them. In this case, the result is that children and parents receive fewer quality children’s apps and less quality children’s content.

A similar example can be seen in the various battles between traditional-media and social-media companies in Australia, Canada, and the EU, where laws have been passed that require digital platforms to pay for linking to certain news content. Again, because these laws raise the transaction costs of linking to news stories, social-media platforms have predictably responded by restricting access to news links, to the detriment of users and news-media organizations alike. In other words, much as with verifiable parental consent, the intent of these laws is thwarted by the underlying economics.

Even more evidence on this front was revealed in the preliminary injunction the U.S. District Court for the Western District of Texas issued in what is referred to as the “Texas porn-law case.” There, the court noted the high compliance costs of age verification, citing the plaintiff’s complaint, which “includes several commercial verification services, showing that they cost, at minimum, $40,000.00 per 100,000 verifications.” The court also noted:

H.B. 1181 imposes substantial liability for violations, including $10,000.00 per day for each violation, and up to $250,000.00 if a minor is shown to have viewed the adult content.

Moreover, these aren’t even all of the transaction costs, which also include the subjective costs imposed on adults who must verify their age to access pornography. As the court noted: “the law interferes with the Adult Video Companies’ ability to conduct business, and risks deterring adults from visiting the websites.” The court issued a preliminary injunction against the age-verification provision of the law, finding that other means—such as content-filtering technology—are clearly more effective than age verification at protecting children from unwanted content.
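
The court’s figures imply a simple back-of-the-envelope calculation. The sketch below uses only the costs and penalties cited in the opinion; the monthly-visitor figure is assumed purely for illustration:

```python
# Back-of-the-envelope arithmetic from the figures the court cited.
# The $40,000-per-100,000-verifications cost and the penalty amounts come
# from the opinion; the monthly-visitor figure is purely hypothetical.

cost_per_100k = 40_000.00
per_verification = cost_per_100k / 100_000
print(f"${per_verification:.2f} per verification")  # $0.40 per verification

# At hypothetical scale, compliance costs compound quickly:
monthly_visitors = 5_000_000  # assumed for illustration only
print(f"${per_verification * monthly_visitors:,.0f} per month")  # $2,000,000 per month

# Against that baseline sits the statute's liability exposure:
daily_fine = 10_000.00          # per violation, per day
minor_access_fine = 250_000.00  # if a minor is shown to have viewed content
print(f"Exposure: ${daily_fine:,.0f}/day per violation; "
      f"up to ${minor_access_fine:,.0f} if a minor views the content")
```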

In sum, the economics of transaction costs explains why the stated objectives of age verification and verifiable parental consent will not be achieved through those policies. As I argued during the FedSoc webinar, just as with minimum-wage laws and rent control, economics helps to explain the counterintuitive tendency of even well-intentioned laws to generate results precisely opposite to lawmakers’ stated intentions. Here, that means age-verification and verifiable parental-consent laws actually leave parents and teens with less ability to make meaningful, marginal decisions about the costs and benefits of their own social-media use.

The Right to Anonymous Speech: Missing in Action

In her panel comments and follow-up tweet thread, Clare Morell makes a strong argument that “[a]nonymous authentication methods completely transform the First Amendment analysis for age-verification requirements.” This is because commercial age-verification technology has become much more strongly privacy-protective than it once was. While the court in the Texas porn case was very skeptical of this argument, citing continuing issues with data breaches throughout the internet ecosystem, it is certainly true that there have been significant improvements in the technological means of verifying identity. As I will argue here, however, this is not really the issue in the active litigation over age-verification laws, nor is it the argument most likely to win the day with the courts.

While the Texas porn case does deal with the privacy implications of age verification, any discussion of a right to access pornography anonymously is conspicuously missing. Similarly, this argument was not raised in the NetChoice case over Arkansas’ age-verification and parental-consent law. There is a reason for this: neither of the two most-cited precedents—Ashcroft v. ACLU and Brown v. Entertainment Merchants Association—is really about a right to anonymous speech. You will search in vain in either case for the word “anonymous,” or for any discussion of precedents on the right to anonymous speech.

One could, it is true, argue that Ashcroft v. ACLU can be read as an anonymous-speech case. While Jeff Kosseff’s magisterial “The United States of Anonymous” doesn’t spend any time on the case, he has argued in the Daily Beast that the right to anonymous speech means that age-verification laws are unconstitutional. His argument is well-crafted, but not necessarily the most compelling reading of the precedent in Ashcroft. In fact, a federal district court case out of New Mexico that predates the Supreme Court’s decision in Ashcroft is likely the best citation for connecting age-verification laws to the right to access speech anonymously. The best that Ashcroft itself has to offer is the following language:

Filters are less restrictive than COPA. They impose selective restrictions on speech at the receiving end, not universal restrictions at the source. Under a filtering regime, adults without children may gain access to speech they have a right to see without having to identify themselves or provide their credit card information.

This was seized upon by the district court in the Texas porn case, whose slip opinion stated: “[t]he same is true here—adults must affirmatively identify themselves before accessing controversial material, chilling them from accessing that speech.” 

But this is dangerously close to saying that there is an inherent right for adults to view pornography free from age verification altogether, simply because it is online. As David French has asked, is this really what the First Amendment is about? While Ari Cohn of TechFreedom offered an excellent rebuttal to French’s argument, there is still reason to be skeptical that the First Amendment demands a right to view pornography anonymously, especially when relatively strong privacy technology has become available.

The debate is, at least in part, over that technology. As Cohn put it: “there is a world of difference between a quick glance at an ID to check date of birth, and uploading identity documents to the internet that create a record of a user’s access.” But over time, it seems there really won’t be much difference. From the standpoint of a right to anonymous speech, online age verification would then be just as constitutional as a convenience store checking the ID of a customer who wants to buy a pornographic magazine. After all, the internet by its very nature already collects a host of information, including IP addresses and other metadata, that could allow for identification in specific instances.

Thus, it is fair to ask whether the right to anonymous speech will, or should, even be relevant at all to the legal challenges to age-verification and parental-consent laws. While this is an argument that could be developed, it seems that advances in age-verification technology—while still necessarily imperfect from the perspective of data security—are sufficiently strong that the right for either adults or children to enjoy nearly anonymous access to protected speech may not be infringed to any significant degree.

Least-Restrictive Means and Burdening More Speech Than Necessary

Plaintiffs challenging age-verification laws have focused on those laws failing First Amendment scrutiny more generally, rather than on any right to access online content anonymously. One of the big arguments that will need to be hashed out is whether age-verification laws are content neutral. If they are, then they would be subject to intermediate scrutiny, which does not require that they pursue the least-restrictive means.

Even then, courts could still find that a particular age-verification law is more burdensome on speech than necessary. In fact, this is exactly what happened in the NetChoice case in Arkansas. Thus, when my esteemed co-panelist Clare Morell argues that the court failed to consider that this was just a content-neutral contract issue, she is actually not quite right. While the court was skeptical that the law was truly content neutral, it proceeded on that basis anyway, and still issued a preliminary injunction, because the age-verification law burdened more speech than necessary.

It is worth noting, in response to Clare, that Associate Justice Clarence Thomas’ dissent in Brown was not joined by any other member of the Supreme Court. Moreover, Thomas joined the majority in Ashcroft v. ACLU, suggesting he probably still sees age-verification laws as unconstitutional. The late Associate Justice Antonin Scalia—himself an originalist—was the author of the Brown opinion. Even Associate Justice Samuel Alito joined the majority in that case, and it seems unlikely that the newer conservative justices—who are all more speech-protective by nature—would join Justice Thomas in his view that minors lack a right to receive speech without parental consent. Far from being vague, Justice Scalia’s majority opinion clearly stated that:

[M]inors are entitled to a significant measure of First Amendment protection, and only in relatively narrow and well-defined circumstances may government bar public dissemination of protected materials to them… but that does not include a free-floating power to restrict the ideas to which children may be exposed.

Precedent is strong against age-verification and parental-consent laws, and there is no reason to think the personnel changes on the Supreme Court would change the analysis.

If age-verification and parental-consent laws are found to be content based, then they will be subject to strict scrutiny, including the least-restrictive-means test. If so, then promoting the widely available practical and technological means of filtering online content—expertly catalogued in paragraphs 13-21 of the NetChoice complaint against Arkansas’ law—will likely be found to be a less-restrictive means of protecting teens and children from harmful content and features of social-media platforms.

Not to beat a dead horse, but this is, again, all about transaction costs. The least-cost avoiders of the negative externalities of social-media use remain parents and teens themselves, working together to make marginal decisions about how to use these platforms.

Conclusion

While one could argue that the externalities social-media platforms impose on teen users and their parents represent a market failure, this is not the end of the analysis. Transaction costs help explain how the institutional environment we create shapes the rules of the game that platforms, parents, and teens follow. If transaction costs are too high, or are placed incorrectly on social-media platforms, parents’ and teens’ ability to control how they use social media will actually suffer. Those same platforms will invest more in excluding teens from their services altogether, rather than in creating safe and vibrant communities from which teens can benefit. This would be a tragic result.

On the other hand, there may be instances where social-media platforms really are the least-cost avoider of harm, and thus should be subject to intermediary liability. For more on that, see our paper “Who Moderates the Moderators?: A Law & Economics Approach to Holding Online Platforms Accountable Without Destroying the Internet.” 

Regardless, the right to anonymous speech really isn’t the core issue with age-verification laws. The real issue is, instead, that these laws will not survive First Amendment scrutiny of any kind.