
A Feb. 20 press release from the Federal Trade Commission (FTC) announces “Federal Trade Commission Launches Inquiry on Tech Censorship.” That is, “a public inquiry to better understand how technology platforms deny or degrade users’ access to services based on the content of their speech or affiliations, and how this conduct may have violated the law.” Specifically, the agency published a “Request for Public Comment Regarding Technology Platform Censorship.”
This sort of request for information (RFI) is not a law-enforcement investigation, and it does not require any party to submit anything in particular—no subpoena power or compulsory process is invoked. Rather, it's a broad sort of informal inquiry that may inform the commission and, potentially (or not), lead to a more formal study or investigation down the road. It fits broadly under the FTC's authority under Section 6(a) of the FTC Act:
To gather and compile information concerning, and to investigate from time to time the organization, business, conduct, practices, and management of any person, partnership, or corporation engaged in or whose business affects commerce….
As such, it is business as usual. Keeping up with market developments by various means—from rigorous empirical studies to issue-spotting workshops and RFIs—is a basic part of the FTC's statutory mission. The FTC's press release does not mention a commission vote authorizing the RFI, but perhaps none was needed.
At the same time, framing the inquiry around "censorship" and how "technology platform" conduct "may have violated the law" appears to tilt the inquiry (and perhaps the legal playing field), even if the RFI doesn't say which provision of the antitrust laws or the FTC Act "may" be violated by, e.g., a platform's denial or degradation of "users' access to services based on the content of their speech or affiliations." Encouraging input from "[t]ech platform users who have been banned, shadow banned, demonetized, or otherwise censored" (but no others) hardly seems a neutral solicitation of public comment on the potential costs and benefits of platform conduct.
Indeed, some of the commission's commentary seems downright ominous. The FTC's press release informs us that "Censorship by technology platforms is not just un-American, it is potentially illegal." And similarly (very similarly), in a post on the technology platform formerly known as Twitter, FTC Chairman Andrew Ferguson says that "Big Tech censorship is not just un-American, it is potentially illegal."
We're engaged in a broad, if informal, inquiry into "un-American" activities? Are we just asking questions? On the same tech platform, Commissioner Melissa Holyoak opines that "Big tech censorship is one of the most consequential issues facing our nation."
This could, of course, be more a matter of rhetorical vigor than a policy signal, but it does raise questions. Just “big tech”? And which firms, at the moment, constitute “big tech”? I’m tempted to ask why other sorts of tech censorship are neither un-American nor even potentially illegal, but I’m stuck on the “censorship” part.
I am not a First Amendment scholar, even if I’ve written a bit on the commercial-speech doctrine. Still, I am confident of the text of the First Amendment. And I am nearly certain that a substantial body of Supreme Court jurisprudence on free speech and censorship considers First Amendment protections against government suppression (or compulsion) of speech, not the selective promotion of certain speech by private actors, whether big or little, high-tech or low.
Here is Justice Brett Kavanaugh writing for the Court in Manhattan Community Access Corp. v. Halleck:
The text and original meaning of those Amendments, as well as this Court’s longstanding precedents, establish that the Free Speech Clause prohibits only governmental abridgment of speech. The Free Speech Clause does not prohibit private abridgement of speech.
And here is the Supreme Court, just last year, on platforms' content-moderation practices in Moody v. NetChoice:
[A] State may not interfere with private actors’ speech to advance its own vision of ideological balance. States (and their citizens) are of course right to want an expressive realm in which the public has access to a wide range of views. That is, indeed, a fundamental aim of the First Amendment. But the way the First Amendment achieves that goal is by preventing the government from “tilt[ing] public debate in a preferred direction.” Sorrell v. IMS Health Inc., 564 U. S. 552, 578–579 (2011). It is not by licensing the government to stop private actors from speaking as they wish and preferring some views over others.
And that is so even when those actors possess “enviable vehicle[s]” for expression. Hurley, 515 U. S., at 577. In a better world, there would be fewer inequities in speech opportunities; and the government can take many steps to bring that world closer. But it cannot prohibit speech to improve or better balance the speech market.
On the spectrum of dangers to free expression, there are few greater than allowing the government to change the speech of private actors in order to achieve its own conception of speech nirvana. That is why we have said in so many contexts that the government may not “restrict the speech of some elements of our society in order to enhance the relative voice of others.” Buckley v. Valeo, 424 U. S. 1, 48–49 (1976) (per curiam). That unadorned interest is not “unrelated to the suppression of free expression,” and the government may not pursue it consistent with the First Amendment.
At issue in Moody were Florida and Texas laws that purported to regulate "large social media companies and other internet platforms." While recognizing that the state laws differed in various respects, the Court noted that both laws would "curtail the platforms' capacity to engage in content moderation—to filter, prioritize, and label the varied third-party messages, videos, and other content their users wish to post." The decision seems very much on point, even if the content moderation at issue may not exhaust the issues of potential concern to the FTC.
The Court’s holding in Moody was not sui generis. It followed a long line of cases on government regulation of speech and, not incidentally, editorial discretion. As my International Center for Law & Economics (ICLE) colleague Ben Sperry (who is a First Amendment scholar) put the point four years before the Moody decision:
With respect to speech, the negative conception of liberty recognizes that individual property owners can control what is said on their property, for example. To force property owners to allow speakers/speech on their property that they don’t desire would actually be a violation of their liberty — what the Supreme Court calls “compelled speech.” The First Amendment, consistent with this view, generally protects speech from government interference (with very few, narrow exceptions), while allowing private regulation of speech (again, with very few, narrow exceptions).
On the potential import for these issues to antitrust, I recommend Ben’s posts (here and here). He argues, among other things, that:
…the First Amendment actually restricts government actions aimed at platforms like Facebook when they engage in editorial discretion by moderating content. If an antitrust plaintiff was to act on the impulse to “break up” Facebook because of alleged political bias in its editorial discretion, the lawsuit would be running headlong into the First Amendment’s protections.
Further:
Before even getting into the analysis of how to incorporate political bias into antitrust analysis … it should be noted that there likely is no viable antitrust remedy [for allegedly anticompetitive content moderation]. … online platforms like Google are First Amendment speakers who have editorial discretion over their sites and apps, much like newspapers. An antitrust remedy compelling these companies to carry speech they disagree with would almost certainly violate the First Amendment. [Internal citations omitted.]
Of course, that is not to say that the First Amendment immunizes acquisitions by tech platforms from antitrust scrutiny. Large tech firms operate complex businesses, and a merger involving a large tech firm could violate Section 7 of the Clayton Act, just as conduct by a tech firm could violate Sections 1 or 2 of the Sherman Antitrust Act.
But such violations would have to be established on traditional theories of antitrust harm. A horizontal acquisition could tend to create a monopoly in a product market, likely leading to higher prices and/or reduced output (in violation of Clayton §7); or horizontal competitors could agree to fix prices or allocate markets (in violation of §1). Et cetera.
As Ben notes, competing platforms could conceivably collude on non-price aspects of their services, but that possibility seems at odds with the observation that various tech platforms maintain very different content-moderation policies. That variety should tend to foster, rather than suppress, interbrand competition.
He also acknowledges that one could, “in theory,” attempt to argue that a given merger would likely lead to a given model of content moderation, to be construed as a reduction in product quality and, hence, an increase in quality-adjusted price. But that would be tough sledding, indeed—for all the reasons that purely qualitative arguments often founder when applied to products with myriad qualitative aspects valued variously by their consumers.
I don’t doubt that there are consumers who dislike certain platforms’ policies, perhaps in the extreme. I am but an n of one, but there are things I don’t like at all. As mom used to say, “Daniel can be difficult.” That’s not obviously an antitrust problem.
When considering hospital mergers, certain qualitative aspects of care may be salient: ceteris paribus, higher rates of post-operative infection are uniformly considered lower-quality outcomes. Is there an analog for platforms' moderation policies? Given the complexity of platform attributes and the number and diversity of consumers, there may be something for everyone to dislike. Still, some consumers might strongly prefer just the type of content moderation others revile.
And it’s not all about content moderation. It is reported that Meta’s Facebook had a 2024 audience of nearly 194 million users in the United States alone. One supposes that those users like something about Facebook’s bundle of attributes at the price Meta charges for them. And it is not the government’s role to say which qualitative bundle is to be preferred, much less to compel a firm to provide it.
As for unilateral conduct, various theories of harm might—under some facts and circumstances—find traction. But firms (including firms with market power) are typically free to set their prices (or quality-adjusted prices) as they will, subject to market (not regulatory) discipline.
The RFI suggests that both consumer protection and antitrust issues may be at play:
Such actions by technology platforms may violate their terms of service or other policies (collectively, “policies”) and flout users’ reasonable expectations based on the technology platforms’ public representations. Such policies and practices, which may affect competition, may have resulted from a lack of competition or may have been the product of anti-competitive conduct.
Certainly, firms are bound by Section 5's prohibition of "unfair or deceptive acts or practices" (UDAP) in commerce, just as they are bound by the antitrust laws. And it's possible that some platform's marketing is materially false or misleading (or harmful to consumers without being offset by countervailing benefits to competition or consumers). A platform might promise me an unlimited right to "say anything," and secure payment from me based on that assurance, only to fail to deliver the benefit of the bargain.
But it’s not clear how the conduct described (in rather broad terms) in the RFI suggests a UDAP action that would hold water. Perhaps there is a case out there, but nothing comes to mind. And compelled speech might not be a constitutionally viable remedy in any case.
As I said up top, as a general matter, there’s nothing out of the ordinary—much less wrong—with requests for public input on diverse issues of possible relevance to the FTC’s very broad statutory jurisdiction. But this one seems, as the kids say, “problematic.” Which is not to say “un-American.”