Over the past few months, we at the International Center for Law & Economics (ICLE) have endeavored to bring the law & economics methodology to the forefront of several major public controversies surrounding online speech. To date, ICLE has engaged these issues by filing two amicus briefs before the U.S. Supreme Court, and another in Ohio state court.
The basic premise we have outlined is that online platforms ought to retain the right to engage in the marketplace of ideas by exercising editorial discretion, free from government interference. A free marketplace of ideas best serves both the users of these platforms and society at large.
In December, we filed an amicus brief with the Supreme Court in the NetChoice v. Paxton and Moody v. NetChoice cases, arguing that social-media companies are best positioned to balance the speech interests of their users, and that the First Amendment protects their right to exercise editorial discretion by enforcing their moderation policies. We also argued that the “common carriage” label is inappropriate for social-media platforms, which require users—even before they have created their profiles—to agree to moderation policies that include restricting speech believed to harm others.
In other words, the online platforms do not hold themselves out to be open to all comers or all speech. Thus, Texas and Florida’s state laws not only violate the First Amendment, but also reduce social-media platforms’ value to users by requiring them to carry “lawful but awful” speech.
Last month, ICLE filed an amicus brief in the Court of Common Pleas of Delaware County, Ohio, in Ohio v. Google, in which we argued that Google Search is not a common carrier and, in fact, has a First Amendment interest in its own search results. Google is not a common carrier because it offers individualized answers to users’ queries, which may be based, in part, on their location, search history, and other factors. In other words, online search is not an undifferentiated product like railroad carriage or telephone service.
In fact, as several federal district courts have found, search results are themselves protected by the First Amendment. They constitute speech, as they amount to search engines giving their opinion as to the best answers to various queries. The ordering of such search results—even if it gives preference to Google’s own products and services—is therefore protected editorial discretion. There is no basis to assume that Google’s users are harmed, particularly when they can easily choose other general or specialized search engines if they don’t like the integration that Google provides.
Most recently, just this past Friday, ICLE filed an amicus brief in Murthy v. Missouri, in which we argue that the balance that social-media companies strike in exercising editorial discretion to benefit their users is upset when government actors intervene in the marketplace of ideas by coercing such companies into censorship.
Under the First Amendment, government actors may not suppress speech (in this case, speech that the government actors deem “misinformation”), even if the suppression is accomplished by pressuring private actors to do so on their behalf. The government may participate in the marketplace of ideas through counterspeech, but it may not coerce social-media companies into removing lawful speech or speakers, even in the name of combating misinformation. This benefits both the supply side of the marketplace of ideas (i.e., speakers) and the demand side (i.e., listeners), and redounds to society’s benefit at large, as it empowers the people to make democratic decisions.
The uniting factor in each of these briefs is a proper understanding of the digital platforms as multi-sided markets that participate in the marketplace of ideas by exercising editorial discretion to their users’ benefit. As I put it in a previous post on our amicus in the NetChoice cases:
[T]he First Amendment’s protection of the “marketplace of ideas” requires allowing private actors—like social-media companies—to set speech policies for their own private property. Social-media companies are best-placed to balance the speech interests of their users, a process that requires considering both the benefits and harms of various kinds of speech. Moreover, the First Amendment protects their ability to do so, free from government intrusion, even if the intrusion is justified by an attempt to identify social media as common carriers.
If social-media companies are to create a useful product for their users, they must be able to strike a delicate balance between what people want to post and what they want to see and hear. As multisided platforms that rely on advertising revenue, they must also make sure to keep high-value users engaged on the platform. Moderation policies are an attempt to create community rules to strike this balance. This may include limits on otherwise legal speech in ways that are not viewpoint neutral. For instance, to keep users and advertisers, social-media platforms may choose to restrict pro-Nazi speech. But in order to enforce these rules, they need the ability to exclude those who refuse to abide by them. This is private ordering: the ability of private actors to create rules for their own property and to enforce them through technological and legal means.
Similarly, in the Ohio v. Google case, the search engine must be able to exercise editorial discretion in its search results in order to provide the best answers to its users, or risk losing users to competitors and thus becoming less valuable to advertisers. This could include integrating its own products and services into search results. As we put it in our amicus:
Google’s mission is to “organize the world’s information and make it universally accessible and useful.” … Google does this at zero price, otherwise known as free, to its users. This generates billions of dollars of consumer surplus per year for U.S. consumers… This incredible deal for users is possible because Google is what economists call a multisided platform… On one side of the platform, Google provides answers to queries of users. On the other side of the platform, advertisers pay for access to Google’s users and, by extension, subsidize the user-side consumption of Google’s free services.
In order to maximize the value of its platform, Google must curate the answers it provides in its search results to the benefit of its users, or it risks losing those users to other search engines. This includes both other general search engines and specialized search engines that focus on one segment of online content (like Yelp or Etsy or Amazon). Losing users would mean the platform becomes less valuable to advertisers.
If users don’t find Google’s answers useful, including answers that may preference other Google products, then they can easily leave and use alternative methods of search. Thus, there are real limitations on how much Google can self-preference before the incentives that allowed it to build a successful platform unravel as users and therefore advertisers leave. In fact, it is highly likely that users of Google search want the integration of direct answers and Google products, and Google provides these results to the benefit of its users.
Whether by imposing common-carriage requirements that force social-media companies (or Google Search) to change how they exercise editorial discretion, or by imposing censorship through pressure on social-media companies to take down alleged misinformation, government actors violate the First Amendment when they seek to intervene in the marketplace of ideas, and ultimately harm users of those platforms.
The best answer for the future of online speech is found in the First Amendment’s protection of the marketplace of ideas from government intervention. Competition in the idea market requires a hands-off approach. Appeals to preventing “bias,” “unfairness,” or “misinformation” are insufficient to justify departing from established constitutional norms.