I participated yesterday in a webinar panel hosted by the Federalist Society’s Regulatory Transparency Project. The video was livestreamed on YouTube. Below, I offer my opening remarks, with some links.
Thank you for having me. As mentioned, I’m a senior scholar in innovation policy at the International Center for Law & Economics (ICLE). This means I have the institutional responsibility to talk to you today about Ronald Coase and transaction costs. Don’t worry, I’ll define my terms and explain why these things are important. In fact, I think they can help frame our discussion, before I offer my own preliminary thoughts on the debates over a duty of care to protect minors online and over online age-verification and parental-consent laws.
So, one of the insights of the Nobel Prize-winning economist Ronald Coase was that, in the absence of transaction costs, the parties can deal with the problem of externalities through bargaining, regardless of which side holds the initial property right. But in the presence of transaction costs, the initial allocation of rights does matter. In such cases, the burden of avoiding the harm of the externality should be placed on the lowest-cost avoider, taking into consideration the total social costs of the institutional framework. Now, let me define these terms and then explain how they are relevant to our discussion today.
An externality is a side effect of an action: it is what occurs when something we do affects another person. A negative externality occurs when that person doesn’t like the effect. When we say that such an externality is bilateral, we mean that it takes two parties to tango: only when there is a conflict in the use or enjoyment of property is there an externality problem. Transaction costs are the costs of an economic exchange that are not reflected in the price of the good or service itself, such as the costs incurred to find a counterparty and come to an agreement. Transaction costs are very important, because their presence and magnitude can prevent otherwise beneficial agreements from taking place. Institutional frameworks are thus also very important, because they determine the rules of the game, including who bears those transaction costs. To maximize efficiency, the burden of avoiding harm from negative externalities should be placed on the party or parties that can avoid it at the lowest cost.
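To make the least-cost-avoider logic concrete, here is a stylized numerical sketch. The figures are mine, chosen purely for illustration: suppose an externality imposes a harm of $100 on a third party, the injurer could prevent it at a cost of $80, and the victim could avoid it at a cost of $30.

```latex
% Stylized least-cost-avoider example; all figures are hypothetical.
% Harm to the third party, and each side's cost of avoiding it:
\[
H = 100, \qquad C_{\text{injurer}} = 80, \qquad C_{\text{victim}} = 30.
\]
% With zero transaction costs, bargaining reaches the efficient result
% no matter who initially bears the burden: the harm is avoided at
\[
\min\left(C_{\text{injurer}},\, C_{\text{victim}}\right) = 30 \;<\; H = 100.
\]
% Now add a transaction cost T = 60. If the burden starts on the
% injurer, the gains from shifting it to the cheaper avoider are
% smaller than the cost of striking that bargain:
\[
C_{\text{injurer}} - C_{\text{victim}} = 50 \;<\; T = 60,
\]
% so no bargain occurs and society pays 80 rather than 30.
```

In this stylized world, the law’s initial assignment is irrelevant only while bargaining is cheap; once bargaining is costly, whoever bears the burden by default determines the social outcome, which is why efficiency favors assigning it to the least-cost avoider from the start.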
For our debate today, this is all very important, I promise. Here, it means thinking through the issues raised by teenagers’ use of social media. The parties at issue are the social-media platforms, their teenage users, and those users’ parents. At a high level, I think we can all agree that, while social-media platforms create incredible value for their users, they also impose negative externalities on both teens and their parents, stemming both from the content available on those platforms and from how that content is presented.
If we lived in a world without transaction costs, or if those costs were low enough, it wouldn’t matter whether we had parental-consent or age-verification laws, because the compliance process would be seamless. But studies on the Children’s Online Privacy Protection Act (COPPA), litigation in the Texas porn case, and a recent report from the Australian government all show that the transaction costs of age verification and verifiable parental consent are substantial. The question, then, is how to set policy so as to maximize the benefits of social media for society, including teen users, while dealing with those negative externalities. That is best done by placing the burden on the least-cost avoider of those harms.
Now, it is possible that the social-media platforms are the least-cost avoiders in many instances. We at ICLE have proposed in the past that online platforms should be subject to intermediary liability when they are the least-cost avoider. But any such move must also reckon with the threat of collateral censorship: what occurs when online platforms take down more speech than necessary in order to avoid legal liability.
The tricky part, of course, is that the negative externalities at issue consist largely of speech, and the vast majority of it is protected speech, as are the platforms’ decisions about how to communicate it. This means important First Amendment values are at stake. The Supreme Court has found that minors have a right to receive speech, including speech obtained through commercial transactions.
Thus, to set up our discussion: the question of what to do about negative externalities from social-media platforms is whether to place the burden on the platforms themselves, either through a duty of care to protect children (as in KOSA or California’s Age-Appropriate Design Code) or through requirements to age-verify users and obtain verifiable parental consent (as several states have done or proposed, and as COPPA 2.0 would do); or instead to place the burden on teenagers and parents working together, using practical and technological means to avoid those harms.
As for my preliminary thoughts: based on the three major Supreme Court decisions that have considered parental-consent or age-verification regimes, the Court appears to agree that, under First Amendment strict scrutiny, the least-cost avoiders of harms associated with content inappropriate for children are parents, working on behalf of and together with their children. Or, more precisely, to use First Amendment language: the least-restrictive means of regulating speech here would be for the government to promote low-cost practical and technological means to help parents and minors avoid harmful content. Those cases are United States v. Playboy Entertainment Group (2000); Ashcroft v. American Civil Liberties Union (2004); and Brown v. Entertainment Merchants Association (2011). Along with the recent NetChoice case out of Arkansas and the Texas porn case, they all stand for this same proposition.
While KOSA explicitly states that it is not an age-verification law, it presents its own set of problems. Interestingly, we at ICLE have also proposed a duty-of-care standard that would effectively amend Section 230 immunity. But the duty of care proposed in KOSA takes a very different form. KOSA requires covered platforms to “act in the best interests of a user that the platform knows or reasonably should know is a minor by taking reasonable measures in its design and operation of products and services to prevent and mitigate” a variety of potential harms, including:
- Consistent with evidence-informed medical information, the following mental-health disorders: anxiety, depression, eating disorders, substance-use disorders, and suicidal behaviors;
- Patterns of use that indicate or encourage addiction-like behaviors;
- Physical violence, online bullying, and harassment of minors;
- Sexual exploitation and abuse;
- Promotion and marketing of narcotic drugs (as defined in Section 102 of the Controlled Substances Act (21 U.S.C. 802)), tobacco products, gambling, or alcohol; and
- Predatory, unfair, or deceptive marketing practices, or other financial harms.
Some of these categories are clearly unprotected speech or conduct and would raise no constitutional issues. Others are either clearly protected speech, sometimes protected speech, or arguably protected speech. There are also real vagueness and overbreadth (i.e., chilling-effect) concerns. And the bill’s enforcement mechanism would place a great deal of power in the hands of state attorneys general and the Federal Trade Commission to make what are, in effect, decisions about what speech, or what presentation of speech, is acceptable for minors. All of this would invite First Amendment challenges that would likely sink the bill’s aims.
With that, I’ll conclude and reserve the balance of my time for further discussion.