An important lesson of economics is that policies intended to help a targeted group of people often end up harming them in unintended ways. For instance, economists have long argued that policies like rent control and minimum-wage laws actually tend to lead to shortages in housing and jobs, respectively.
Similarly, despite having the stated intention of helping parents to protect their children online, the Kids Online Safety Act (KOSA) and the Children and Teens’ Online Privacy Protection Act (COPPA 2.0)—passed today by the U.S. Senate in a 91-3 vote as amendments to S. 2073—will likely lead online platforms to invest more in excluding minors than in creating safe and vibrant spaces for them to enjoy. The measure now moves to the U.S. House, where it is expected to face more significant political headwinds.
The bills would increase platforms’ costs to serve content to minors while hindering their ability to monetize that content through targeted advertising. For many online platforms, the easiest way to comply may be to verify users’ ages and exclude those who are now too costly to serve. A much better way forward would be to help parents and teens become more aware of the many tools already available to help them avoid the potential harms that can accompany internet use.
KOSA requires a “high impact online company” to exercise “reasonable care” in its design features to “prevent and mitigate” certain harms, not all of which are easily identifiable. These potential harms include certain mental-health disorders and patterns indicating or encouraging compulsive use by minors, as well as physical violence, cyberbullying, and discriminatory harassment. Moreover, KOSA requires all covered platforms to implement default safeguards to limit design features that encourage minors’ use of the platforms, and to control the use of personalized recommendation systems.
COPPA 2.0 would expand the protections granted by the Children’s Online Privacy Protection Act of 1998 to users under age 13 to also cover those between 13 and 17 years of age. Where the current law requires parental consent to collect and use persistent identifiers for “individual-specific advertising” directed to children under age 13, COPPA 2.0 would require the verifiable consent of the teen or a parent to serve such ads to teens.
Obtaining verifiable consent has proven so costly under the existing COPPA rule that almost no covered entities make efforts to obtain it. COPPA has instead largely prevented platforms from monetizing children’s content, which has meant that less of it is created, and that which is created is of lower quality. As a recent study of the impact of the YouTube COPPA settlement from scholars Garrett A. Johnson, Tesary Lin, James C. Cooper, and Liang Zhong found:
[W]e find that child-directed content creators produce 18% less content and pivot towards producing non-child-directed content. Child-directed content creators also invest less in content quality: the proportion of original content falls by 11% and manual captioning falls by 27%, while user content ratings fall by 10%.
Extending COPPA to cover teens would likely lead to similar results. Without the ability to serve targeted ads to these users, platforms will have less incentive to encourage the creation of teen-focused content.
While both KOSA and COPPA 2.0 disclaim establishing any age-verification requirements or requiring the collection of any data not already collected “in the normal course of business,” they both establish a constructive-knowledge standard for determining whether online platforms are knowingly serving minors—i.e., “knowledge fairly implied on the basis of objective circumstances.” As a result, online platforms will likely feel compelled to identify which of their users are minors, in order to comply with the regulations on “personalized recommendation systems” (KOSA) and “individual-specific advertising” (COPPA 2.0). And since serving identified minors would entail significant compliance costs and forgone ad revenue, age verification may become the first step toward restricting access based on age.
The U.S. Supreme Court found online age verification to be likely unconstitutional under the First Amendment in the 2004 case Ashcroft v. ACLU, and courts continue to find that age-verification requirements are not the least-restrictive means to protect children online. As one federal district court recently put it: “parents may rightly decide to regulate their children’s use of social media—including restricting the amount of time they spend on it, the content they may access, or even those they chat with. And many tools exist to help parents with this.” This also points to a better way forward on the policy question.
Parents and teens can work together to avoid online harms by using available technology. Cell-phone carriers and broadband providers allow parents to block certain apps and sites, to limit how long devices can be used, and to control with whom their children can communicate. Online content can be monitored and filtered through wireless routers, devices, and third-party applications. The major online platforms themselves even offer numerous tools that facilitate relatively low-cost monitoring and control. Both parents and teens can use these tools to filter out content and people who are potentially harmful.
As the House takes up consideration of KOSA and COPPA 2.0, members should thoughtfully consider whether these laws are the best way forward. Rather than promoting children’s online safety and parental involvement, these laws would encourage the exclusion of minors from many online platforms altogether.