One of the biggest names in economics, Daron Acemoglu, recently joined the mess that is Twitter. He wasted no time in throwing out big ideas for discussion and immediately getting tons of, let us say, spirited replies.
One of Acemoglu’s threads involved a discussion of F.A. Hayek’s famous essay “The Use of Knowledge in Society,” wherein Hayek questions central planners’ ability to acquire and utilize such knowledge. Echoing many other commentators, Acemoglu asks: can supercomputers and artificial intelligence get around Hayek’s concerns?
Coming back to Hayek’s argument, there was another aspect of it that has always bothered me. What if computational power of central planners improved tremendously? Would Hayek then be happy with central planning?
While there are a few different layers to Hayek’s argument, at least one key aspect does not rest at all on computational power. Hayek argues that markets do not require users to have much information in order to make their decisions.
To use Hayek’s example, when the price of tin increases: “All that the users of tin need to know is that some of the tin they used to consume is now more profitably employed elsewhere.” Knowing whether demand or supply shifted to cause the price increase would be redundant information for the tin user; the price provides all the information about market conditions that the user needs.
To Hayek, this informational role of prices is what makes markets unique (compared to central planning):
The most significant fact about this [market] system is the economy of knowledge with which it operates, or how little the individual participants need to know in order to take the right action.
Good computers, bad computers—it doesn’t matter. Markets just require less information from their individual participants. This was made precise in the 1970s and 1980s in a series of papers on the “informational efficiency” of competitive markets.
This post explains what the formal results say. From there, we can return to debating their relevance for Acemoglu’s argument and the future of central planning with AI.
From Hayek to Hurwicz
First, let’s run through an oversimplified history of economic thought. Hayek developed his argument about information and markets during the socialist-calculation debate, which pitted Hayek and Ludwig von Mises against Oskar Lange and Abba Lerner. Lange and Lerner argued that a planned socialist economy could replicate a market economy. Mises and Hayek argued that it could not, because the socialist planner would not have the relevant information.
In response to the socialist-calculation debate, Leonid Hurwicz—who studied with Hayek at the London School of Economics, overlapped with Mises in Geneva, and would ultimately be awarded the Nobel Memorial Prize in 2007—developed the formal language in the 1960s and 1970s that became what we now call “mechanism design.”
Specifically, Hurwicz developed an abstract way to measure how much information a system needed. What does it mean for a system to require little information? What is the “efficient” (i.e., minimal) amount of information? Two later papers (Mount and Reiter (1974) and Jordan (1982)) used Hurwicz’s framework to prove that competitive markets are informationally efficient.
Understanding the Meaning of Informational Efficiency
How much information do people need to achieve a competitive outcome? This is where Hurwicz’s theory comes in. He gave us a formal way to discuss more and less information: the size of the message space.
To understand the message space’s size, consider an economy with six people: three buyers and three sellers. The buyer of type B3 is willing to pay $3, type B2 is willing to pay $2, and type B1 is willing to pay $1. The seller of type S0 is willing to sell for $0, S1 for $1, and S2 for $2. Each buyer knows their valuation for the good, and each seller knows their cost.
Here’s the weird exercise. Along comes an oracle who knows everything. The oracle decides to figure out a competitive price that will clear the market, so he draws out the supply curve (in orange) and the demand curve (in blue), and picks an equilibrium point where they cross (in red).
So the oracle knows that a price of $1.50 and a quantity of 2 constitute an equilibrium.
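To make the oracle’s calculation concrete, here is a minimal sketch in Python using the hypothetical valuations from the example above. The dictionary names and the function are my own illustration, not part of Hayek’s or Hurwicz’s formalism; the point is simply that at $1.50 the number of willing buyers equals the number of willing sellers.

```python
# Hypothetical valuations from the example: one buyer and one seller of each type.
buyer_values = {"B3": 3.0, "B2": 2.0, "B1": 1.0}   # willingness to pay
seller_costs = {"S0": 0.0, "S1": 1.0, "S2": 2.0}   # willingness to sell

def quantities_at(price):
    """Units demanded and supplied at a given price."""
    demanded = sum(1 for v in buyer_values.values() if v >= price)
    supplied = sum(1 for c in seller_costs.values() if c <= price)
    return demanded, supplied

# At $1.50, two buyers (B3, B2) want to buy and two sellers (S0, S1) want to sell.
print(quantities_at(1.50))  # (2, 2): a price of $1.50 and a quantity of 2 clear the market
```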
Now we, the ignorant outsiders, come along and want to verify that the oracle is telling the truth and that this really is an equilibrium. We shouldn’t simply take the oracle’s word for it.
How can the oracle convince us that this is an equilibrium? We don’t know anyone’s valuation.
The oracle puts forward a game to the six players. The oracle says:
The price is $1.50, meaning that if you buy 1, you pay $1.50; if you sell 1, you receive $1.50.
If you say you’re B3 (which means you value the good at $3), you must buy 1.
If you say you’re B2, you must buy 1.
If you say you’re B1, you must buy 0.
If you say you’re S0, you must sell 1.
If you say you’re S1, you must sell 1.
If you say you’re S2, you must sell 0.
The oracle then asks everyone: do you accept the terms of this mechanism? Everyone says yes, because only the buyers who value it more than $1.50 buy and only the sellers with a cost less than $1.50 sell. By everyone agreeing, we (the ignorant outsiders) can verify that the oracle did, in fact, know people’s valuations.
Now, let’s count how much information the oracle needed to communicate. He needed to send a message that included the price and the trades for each type. Technically, he didn’t need to say S2 sells zero, because it is implied by the fact that the quantity bought must equal the quantity sold. In total, he needed to send six messages.
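Here is a rough sketch of the same accounting in code. The message is represented as the price plus a trade assignment for each announced type (with S2’s zero trade left implicit), and each participant checks whether the assigned trade is worth accepting at that price. The representation is my own illustration of the counting exercise, not the formal message-space construction used by Hurwicz, Mount and Reiter, or Jordan.

```python
# The oracle's message: a price plus a trade assignment for each announced type.
# S2's trade is omitted because purchases must equal sales, which pins it down.
message = {
    "price": 1.50,
    "trades": {"B3": +1, "B2": +1, "B1": 0, "S0": -1, "S1": -1},  # +1 = buy one, -1 = sell one
}
print(1 + len(message["trades"]))  # 6 components: the price plus five trade assignments

buyer_values = {"B3": 3.0, "B2": 2.0, "B1": 1.0}
seller_costs = {"S0": 0.0, "S1": 1.0, "S2": 2.0}

def accepts(agent, trade, price):
    """Does this type gain (weakly) from its assigned trade at the announced price?"""
    if agent in buyer_values:
        gain = (buyer_values[agent] - price) * trade      # a buyer gains value minus price per unit bought
    else:
        gain = (price - seller_costs[agent]) * (-trade)   # a seller gains price minus cost per unit sold
    return gain >= 0

trades = dict(message["trades"], S2=0)  # make S2's implied zero trade explicit
print(all(accepts(a, t, message["price"]) for a, t in trades.items()))  # True: everyone says yes
```

If the oracle had guessed wrong, say by telling B1 to buy at $1.50, that type would refuse and the check would fail, which is exactly what lets us ignorant outsiders catch a lying oracle.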
The formal exercise amounts to counting each message that needs to be sent. With a formally specified way of measuring how much information is required in competitive markets, we can now ask whether this is a lot.
If you don’t care about efficiency, you can always economize on information: say nothing, have no one trade, and use a message space of size 0. Doing nothing requires no information at all.
But in the context of the socialist-calculation debate, the argument was over how much information was needed to achieve “good” outcomes. Lange and Lerner argued that market socialism could be efficient, not that it would result in zero trade, so efficiency is the welfare benchmark we are aiming for.
If you restrict your attention to efficient outcomes, Mount and Reiter (1974) showed that you cannot use less information than competitive markets do. In a later paper, Jordan (1982) showed that no other mechanism can even match the competitive mechanism’s informational requirements: the competitive mechanism is the unique mechanism that achieves efficient outcomes with a message space of that minimal dimension.
Acemoglu reads Hayek as saying “central planning wouldn’t work because it would be impossible to collect and compute the right allocation of resources.” But the Jordan and Mount & Reiter papers don’t claim that computation is impossible for central planners. Take whatever computational abilities exist, from the first computer to the newest AI—competitive markets always require the least information possible. Supercomputers or AI do not, and cannot, change that relative comparison.
Beyond Computational Issues
In terms of information costs, the best a central planner could hope for is to mimic exactly the market mechanism. But then, of what use is the planner? She’s just one more actor who could divert the system toward her own interest. As Acemoglu points out, “if the planner could collect all of that information, she could do lots of bad things with it.”
The incentive problem is a separate problem, which is why Hayek tried to focus solely on information. Think about building a road. There is a concern that markets will not provide roads, because people would be unwilling to pay for them without being coerced through taxes. You cannot simply ask people how much they are willing to pay for the road and charge them that price; people will lie and say they do not care about roads. No amount of computing power fixes incentives. Again, computing power is tangential to the question of markets versus planning.
There’s a lot buried in Hayek and all of those ideas are important and worth considering. They are just further complications with which we should grapple. A handful of theory papers will never solve all of our questions about the nature of markets and central planning. Instead, the formal papers tell us, in a very stylized setting, what it would even mean to quantify the “amount of information.” And once we quantify it, we have an explicit way to ask: do markets use minimal information?
For several decades, we have known that the answer is yes. In recent work, Rafael Guthmann and I show that informational efficiency can extend to big platforms coordinating buyers and sellers—what we call market-makers.
The bigger problem with Acemoglu’s suggestion that computational abilities can solve Hayek’s challenge is that Hayek wasn’t merely thinking about computation and the communication of information. Instead, Hayek was concerned about our ability to even articulate our desires. In the example above, the buyers know exactly how much they are willing to pay and the sellers know exactly how much they are willing to sell for. But in the real world, people have tacit knowledge that they cannot communicate to third parties. This is especially true when we think about a dynamic world of innovation. How do you communicate a new product, one that does not yet exist, to a central planner?
The real issue is that market dynamics require entrepreneurs who imagine new futures with new products like the iPhone. Major innovations can never be fully articulated and communicated to a central planner in advance. All of these readings of Hayek and the market’s ability to communicate information—from formal informational efficiency to tacit knowledge—are independent of computational capabilities.
Legislation to secure children’s safety online is all the rage right now, not only on Capitol Hill, but in state legislatures across the country. One of the favored approaches is to impose on platforms a duty of care to protect teen users.
For example, Sens. Richard Blumenthal (D-Conn.) and Marsha Blackburn (R-Tenn.) have reintroduced the Kids Online Safety Act (KOSA), which would require that social-media platforms “prevent or mitigate” a variety of potential harms, including mental-health harms; addiction; online bullying and harassment; sexual exploitation and abuse; promotion of narcotics, tobacco, gambling, or alcohol; and predatory, unfair, or deceptive business practices.
But while bills of this sort would define legal responsibilities that online platforms have to their minor users, this statutory duty of care is more likely to result in the exclusion of teens from online spaces than to promote better care of teens who use them.
Drawing on the previous research that I and my International Center for Law & Economics (ICLE) colleagues have done on the economics of intermediary liability and First Amendment jurisprudence, I will in this post consider the potential costs and benefits of imposing a statutory duty of care similar to that proposed by KOSA.
The Law & Economics of Online Intermediary Liability and the First Amendment (Kids Edition)
Previously (in a law review article, an amicus brief, and a blog post), we at ICLE have argued that there are times when the law rightfully places responsibility on intermediaries to monitor and control what happens on their platforms. From an economic point of view, it makes sense to impose liability on intermediaries when they are the least-cost avoider: i.e., the party that is best positioned to limit harm, even if they aren’t the party committing the harm.
On the other hand, as we have also noted, there are costs to imposing intermediary liability. This is especially true for online platforms with user-generated content. Specifically, there is a risk of “collateral censorship,” wherein online platforms remove more speech than necessary in order to avoid potential liability. Imposing a duty of care to “protect” minors, in particular, could result in online platforms limiting teens’ access altogether.
If the social costs that arise from the imposition of intermediary liability are greater than the benefits accrued, then such an arrangement would be welfare-destroying, on net. While we want to deter harmful (illegal) content, we don’t want to do so if we end up deterring access to too much beneficial (legal) content as a result.
The First Amendment often limits otherwise generally applicable laws, on grounds that they impose burdens on speech. From an economic point of view, this could be seen as an implicit subsidy. That subsidy may be justifiable, because information is a public good that would otherwise be underproduced. As Daniel A. Farber put it in 1991:
[B]ecause information is a public good, it is likely to be undervalued by both the market and the political system. Individuals have an incentive to ‘free ride’ because they can enjoy the benefits of public goods without helping to produce those goods. Consequently, neither market demand nor political incentives fully capture the social value of public goods such as information. Our polity responds to this undervaluation of information by providing special constitutional protection for information-related activities. This simple insight explains a surprising amount of First Amendment doctrine.
In particular, the First Amendment provides important limits on how far the law can go in imposing intermediary liability that would chill speech, including when dealing with potential harms to teenage users. These limitations seek the same balance that the economics of intermediary liability would suggest: how to hold online platforms liable for legally cognizable harms without restricting access to too much beneficial content. Below is a summary of some of those relevant limitations.
Speech vs. Conduct
The First Amendment differentiates between speech and conduct. While the line between the two can be messy (and “expressive conduct” has its own standard under the O’Brien test), governmental regulation of some speech acts is permissible. Thus, harassment, terroristic threats, fighting words, and even incitement to violence can be punished by law. On the other hand, the First Amendment does not generally allow the government to regulate “hate speech” or “bullying.” As the 3rd U.S. Circuit Court of Appeals explained it in the context of a school’s anti-harassment policy:
There is of course no question that non-expressive, physically harassing conduct is entirely outside the ambit of the free speech clause. But there is also no question that the free speech clause protects a wide variety of speech that listeners may consider deeply offensive, including statements that impugn another’s race or national origin or that denigrate religious beliefs… When laws against harassment attempt to regulate oral or written expression on such topics, however detestable the views expressed may be, we cannot turn a blind eye to the First Amendment implications.
In other words, while a duty of care could reach harassing conduct, it is unclear how it could reach pure expression on online platforms without implicating the First Amendment.
Impermissibly Vague
The First Amendment also disallows rules sufficiently vague that they would preclude a person of ordinary intelligence from having fair notice of what is prohibited. For instance, in an order handed down earlier this year in Høeg v. Newsom, the federal district court granted the plaintiffs’ motion to enjoin a California law that would charge medical doctors with sanctionable “unprofessional conduct” if, as part of treatment or advice, they shared with patients “false information that is contradicted by contemporary scientific consensus contrary to the standard of care.”
The court found that “contemporary scientific consensus” was so “ill-defined [that] physician plaintiffs are unable to determine if their intended conduct contradicts [it].” The court asked a series of questions relevant to trying to define the phrase:
[W]ho determines whether a consensus exists to begin with? If a consensus does exist, among whom must the consensus exist (for example practicing physicians, or professional organizations, or medical researchers, or public health officials, or perhaps a combination)? In which geographic area must the consensus exist (California, or the United States, or the world)? What level of agreement constitutes a consensus (perhaps a plurality, or a majority, or a supermajority)? How recently in time must the consensus have been established to be considered “contemporary”? And what source or sources should physicians consult to determine what the consensus is at any given time (perhaps peer-reviewed scientific articles, or clinical guidelines from professional organizations, or public health recommendations)?
Thus, any duty of care to limit access to potentially harmful online content must not be defined in a way that is too vague for a person of ordinary intelligence to know what is prohibited.
Liability for Third-Party Speech
The First Amendment limits intermediary liability when dealing with third-party speech. For the purposes of defamation law, the traditional continuum of liability was from publishers to distributors (or secondary publishers) to conduits. Publishers—such as newspapers, book publishers, and television producers—exercised significant editorial control over content. As a result, they could be held liable for defamatory material, because it was seen as their own speech. Conduits—like the telephone company—were on the other end of the spectrum, and could not be held liable for the speech of those who used their services.
As the Court of Appeals of the State of New York put it in a 1974 opinion:
In order to be deemed to have published a libel a defendant must have had a direct hand in disseminating the material whether authored by another, or not. We would limit [liability] to media of communications involving the editorial or at least participatory function (newspapers, magazines, radio, television and telegraph)… The telephone company is not part of the “media” which puts forth information after processing it in one way or another. The telephone company is a public utility which is bound to make its equipment available to the public for any legal use to which it can be put…
Distributors—which included booksellers and libraries—were in the middle of this continuum. They had to have some notice that content they distributed was defamatory before they could be held liable.
Courts have long explored the tradeoffs between liability and carriage of third-party speech in this context. For instance, in Smith v. California, the U.S. Supreme Court found that a statute establishing strict liability for selling obscene materials violated the First Amendment because:
By dispensing with any requirement of knowledge of the contents of the book on the part of the seller, the ordinance tends to impose a severe limitation on the public’s access to constitutionally protected matter. For if the bookseller is criminally liable without knowledge of the contents, and the ordinance fulfills its purpose, he will tend to restrict the books he sells to those he has inspected; and thus the State will have imposed a restriction upon the distribution of constitutionally protected as well as obscene literature. It has been well observed of a statute construed as dispensing with any requirement of scienter that: “Every bookseller would be placed under an obligation to make himself aware of the contents of every book in his shop. It would be altogether unreasonable to demand so near an approach to omniscience.” (internal citations omitted)
It’s also worth noting that traditional publisher liability was limited in the case of republication, such as when newspapers republished stories from wire services like the Associated Press. Courts observed the economic costs that would attend imposing a strict-liability standard in such cases:
No newspaper could afford to warrant the absolute authenticity of every item of its news, nor assume in advance the burden of specially verifying every item of news reported to it by established news gathering agencies, and continue to discharge with efficiency and promptness the demands of modern necessity for prompt publication, if publication is to be had at all.
Over time, the rule was extended, either by common law or statute, from newspapers to radio and television broadcasts, with the treatment of republication of third-party speech eventually resembling conduit liability even more than distributor liability. See Brent Skorup and Jennifer Huddleston’s “The Erosion of Publisher Liability in American Law, Section 230, and the Future of Online Curation” for a more thoroughgoing treatment of the topic.
What pushed the law toward conduit liability for entities carrying third-party speech was implicit economic reasoning. For example, in 1959’s Farmers Educational & Cooperative Union v. WDAY, Inc., the Supreme Court held that a broadcaster could not be found liable for defamation made by a political candidate on the air, arguing that:
The decision a broadcasting station would have to make in censoring libelous discussion by a candidate is far from easy. Whether a statement is defamatory is rarely clear. Whether such a statement is actionably libelous is an even more complex question, involving as it does, consideration of various legal defenses such as “truth” and the privilege of fair comment. Such issues have always troubled courts… if a station were held responsible for the broadcast of libelous material, all remarks even faintly objectionable would be excluded out of an excess of caution. Moreover, if any censorship were permissible, a station so inclined could intentionally inhibit a candidate’s legitimate presentation under the guise of lawful censorship of libelous matter. Because of the time limitation inherent in a political campaign, erroneous decisions by a station could not be corrected by the courts promptly enough to permit the candidate to bring improperly excluded matter before the public. It follows from all this that allowing censorship, even of the attenuated type advocated here, would almost inevitably force a candidate to avoid controversial issues during political debates over radio and television, and hence restrict the coverage of consideration relevant to intelligent political decision.
It is clear from the foregoing that imposing a duty of care on online platforms to limit speech in ways that would make them strictly liable would be inconsistent with distributor liability. But even a duty of care that more resembled a negligence-based standard could implicate speech interests if online platforms are seen as akin to newspapers, or to radio and television broadcasters, when they act as republishers of third-party speech. Such cases would appear to require conduit liability.
The First Amendment Applies to Children
The First Amendment has been found to limit what governments can do in the name of protecting children from encountering potentially harmful speech. For example, California in 2005 passed a law prohibiting the sale or rental of “violent video games” to minors. In Brown v. Entertainment Merchants Ass’n, the Supreme Court held the law unconstitutional, finding that:
No doubt [the government] possesses legitimate power to protect children from harm, but that does not include a free-floating power to restrict the ideas to which children may be exposed. “Speech that is neither obscene as to youths nor subject to some other legitimate proscription cannot be suppressed solely to protect the young from ideas or images that a legislative body thinks unsuitable for them.” (internal citations omitted)
The Court did not find it persuasive that the video games were violent (noting that children’s books often depict violence) or that they were interactive (as some children’s books offer choose-your-own-adventure options). In other words, there was nothing special about violent video games that would subject them to a lower level of constitutional protection, even for minors that wished to play them.
The Court also did not find persuasive California’s appeal that the law aided parents in making decisions about what their children could access, stating:
California claims that the Act is justified in aid of parental authority: By requiring that the purchase of violent video games can be made only by adults, the Act ensures that parents can decide what games are appropriate. At the outset, we note our doubts that punishing third parties for conveying protected speech to children just in case their parents disapprove of that speech is a proper governmental means of aiding parental authority.
Justice Samuel Alito’s concurrence in Brown would have found the California law unconstitutionally vague, arguing that constitutional speech would be chilled as a result of the law’s enforcement. The fact that its intent was to protect minors didn’t change that analysis.
Limiting the availability of speech to minors in the online world is subject to the same analysis as in the offline world. In Reno v. ACLU, the Supreme Court made clear that the First Amendment applies with equal effect online, stating that “our cases provide no basis for qualifying the level of First Amendment scrutiny that should be applied to this medium.” In Packingham v. North Carolina, the Court went so far as to call social-media platforms “the modern public square.”
Restricting minors’ access to online platforms through age-verification requirements has already been found to violate the First Amendment. In Ashcroft v. ACLU (II), the Supreme Court reviewed provisions of the Child Online Protection Act (COPA) that would restrict posting content “harmful to minors” for “commercial purposes.” COPA allowed an affirmative defense if the online platform restricted access by minors through various age-verification devices. The Court found that “[b]locking and filtering software is an alternative that is less restrictive than COPA, and, in addition, likely more effective as a means of restricting children’s access to materials harmful to them” and upheld a preliminary injunction against the law, pending further review of its constitutionality.
On remand, the 3rd Circuit found that “[t]he Supreme Court has disapproved of content-based restrictions that require recipients to identify themselves affirmatively before being granted access to disfavored speech, because such restrictions can have an impermissible chilling effect on those would-be recipients.” The circuit court would eventually uphold the district court’s finding of unconstitutionality and permanently enjoin the statute’s provisions, noting that the age-verification requirements “would deter users from visiting implicated Web sites” and therefore “would chill protected speech.”
A duty of care to protect minors could be unconstitutional if it ends up limiting access to speech that is not illegal for them to access. Age-verification requirements that would likely accompany such a duty could also result in a statute being found unconstitutional.
In sum:
A duty of care to prevent or mitigate harassment and bullying has First Amendment implications if it regulates pure expression, such as speech on online platforms.
A duty of care to limit access to potentially harmful online speech can’t be defined so vaguely that a person of ordinary intelligence can’t know what is prohibited.
A duty of care that establishes a strict-liability standard on online speech platforms would likely be unconstitutional for its chilling effects on legal speech. A duty of care that establishes a negligence standard could similarly lead to “collateral censorship” of third-party speech.
A duty of care to protect minors could be unconstitutional if it limits access to legal speech. De facto age-verification requirements could also be found unconstitutional.
The Problems with KOSA: The First Amendment and Limiting Kids’ Access to Online Speech
KOSA would establish a duty of care for covered online platforms to “act in the best interests of a user that the platform knows or reasonably should know is a minor by taking reasonable measures in its design and operation of products and services to prevent and mitigate” a variety of potential harms, including:
Consistent with evidence-informed medical information, the following mental health disorders: anxiety, depression, eating disorders, substance use disorders, and suicidal behaviors.
Patterns of use that indicate or encourage addiction-like behaviors.
Physical violence, online bullying, and harassment of the minor.
Sexual exploitation and abuse.
Promotion and marketing of narcotic drugs (as defined in section 102 of the Controlled Substances Act (21 U.S.C. 802)), tobacco products, gambling, or alcohol.
Predatory, unfair, or deceptive marketing practices, or other financial harms.
There are also a variety of tools and notices that must be made available to users under age 17, as well as to their parents.
Reno and Age Verification
KOSA could be found unconstitutional under Reno and the COPA line of cases for creating a de facto age-verification requirement. The bill’s drafters appear to be aware of the legal problems that an age-verification requirement would entail. KOSA therefore states that:
Nothing in this Act shall be construed to require—(1) the affirmative collection of any personal data with respect to the age of users that a covered platform is not already collecting in the normal course of business; or (2) a covered platform to implement an age gating or age verification functionality.
But this doesn’t change the fact that, in order to effectuate KOSA’s requirements, online platforms would have to know their users’ ages. KOSA’s duty of care incorporates a constructive-knowledge requirement (i.e., “reasonably should know is a minor”). A duty of care combined with the mandated notices and tools that must be made available to minors makes it “reasonable” that platforms would have to verify the age of each user.
If a court were to agree that KOSA doesn’t require age gating or age verification, this would likely render the act ineffective. As it stands, most of the online platforms that would be covered by KOSA only ask users their age (or birthdate) upon creation of a profile, which is easily evaded by simple lying. While those under age 17 (but at least age 13) at the time of the act’s passage who have already created profiles would be implicated, it would appear the act wouldn’t require platforms to vet whether users who said they were at least 17 when they created new profiles were actually telling the truth.
Vagueness and Protected Speech
Even if KOSA were not found unconstitutional for creating an explicit age-verification scheme, it still likely would lead to kids under 17 being restricted from accessing protected speech. Several of the types of speech the duty of care covers could include legal speech. Moreover, the prohibited speech is defined so vaguely that it likely would lead to chilling effects on access to legal speech.
For example, pictures of photoshopped models are protected speech. If teenage girls want to see such content on their feeds, it isn’t clear that the law can constitutionally stop them, even if it’s done by creating a duty of care to prevent and mitigate harms associated with “anxiety, depression, or eating disorders.”
Moreover, access to content that kids really like to see or hear is still speech, even if they like it so much that an outside observer may think they are addicted to it. Much as the Court said in Brown, the government does not have “a free-floating power to restrict [speech] to which children may be exposed.”
KOSA’s Sections 3(a)(1) and 3(a)(2) would also run into problems, as they are so vague that a person of ordinary intelligence would not know what they prohibit. As a result, there would likely be chilling effects on legal speech.
Much like in Høeg, the phrase “consistent with evidence-informed medical information” leads to various questions regarding how an online platform could comply with the law. For instance, it isn’t clear what content or design issue would be implicated by this subsection. Would a platform need to hire mental-health professionals to consult with them on every product-design and content-moderation decision?
Even worse is the requirement to prevent and mitigate “patterns of use that indicate or encourage addiction-like behaviors,” which isn’t defined by reference to “evidence-informed medical information” or to anything else.
Even Bullying May Be Protected Speech
Even KOSA’s duty to prevent and mitigate “physical violence, online bullying, and harassment of the minor” in Section 3(a)(3) could implicate the First Amendment. While physical violence clearly falls outside the First Amendment’s protections (although it’s unclear how an online platform could prevent or mitigate such violence), online bullying and harassing speech are, nonetheless, speech. As a result, this duty of care could receive constitutional scrutiny regarding whether it effectively limits lawful (though awful) speech directed at minors.
Locking Children Out of Online Spaces
KOSA’s duty of care appears to be based on negligence, in that it requires platforms to take “reasonable measures.” This probably makes it more likely to survive First Amendment scrutiny than a strict-liability regime would.
It could, however, still result in real (and costly) product-design and moderation challenges for online platforms. As a result, there would be significant incentives for those platforms to exclude those they know or reasonably believe are under age 17 from the platforms altogether.
While this is not a First Amendment problem per se, it nonetheless illustrates how laws intended to “protect” children’s safety while online can actually lead to their being restricted from using online speech platforms altogether.
Conclusion
Despite being christened the “Kids Online Safety Act,” KOSA will result in real harm for kids if enacted into law. Its likely result would be considerable “collateral censorship,” as online platforms restrict teens’ access in order to avoid liability.
The bill’s duty of care would also either require likely unconstitutional age verification or be rendered ineffective, as teen users lie about their age in order to access desired content.
Congress shall make no law abridging the freedom of speech, even if it is done in the name of children.
As the U.S. House Energy and Commerce Subcommittee on Oversight and Investigations convenes this morning for a hearing on overseeing federal funds for broadband deployment, it bears mention that one of the largest U.S. broadband-subsidy programs is actually likely to run out of money within the next year. Writing in Forbes, Roslyn Layton observes that the Affordable Connectivity Program (ACP) has enrolled more than 14 million households, concluding that it “may be the most effective broadband benefit program to date with its direct to consumer model.”
This may be true, but how should we measure effectiveness? One seemingly simple measure would be the number of households with at-home internet access who would not have it but for the ACP’s subsidies. Those households can be broadly divided into two groups:
Households that signed up for the ACP and got at-home internet access they would not otherwise have had; and
Households that have at-home internet, but wouldn’t if they didn’t receive the ACP subsidies.
Conceptually, evaluating the first group is straightforward. We can survey ACP subscribers and determine whether they had internet access before receiving the ACP subsidies. The second group is much more difficult, if not impossible, to measure with the available information. We can only guess as to how many households would unsubscribe if the subsidies went away.
To give a bit of background on the program we now call the ACP: broadband has been included since 2016 as a supported service under the Federal Communications Commission’s (FCC) Lifeline program. Among the Lifeline program’s goals are to ensure the availability of broadband for low-income households (to close the so-called “digital divide”) and to minimize the Universal Service Fund contribution burden levied on consumers and businesses.
As part of the appropriations act enacted in 2021 in response to the COVID-19 pandemic, Congress created a temporary $3.2 billion Emergency Broadband Benefit (EBB) program within the Lifeline program. EBB provided eligible households with a $50 monthly discount on qualifying broadband service or bundled voice-broadband packages purchased from participating providers, as well as a one-time discount of up to $100 for the purchase of a device (computer or tablet). The EBB program was originally set to expire when the funds were depleted, or six months after the U.S. Department of Health and Human Services (HHS) declared an end to the pandemic.
With passage of the Infrastructure Investment and Jobs Act (IIJA) in November 2021, the EBB’s temporary subsidy was extended indefinitely and renamed the Affordable Connectivity Program, or ACP. The IIJA allocated an additional $14 billion to provide subsidies of $30 a month to eligible households. Without additional appropriations, the ACP is expected to run out of funding by early 2024.
The Case of the Nonadopters
According to the Information Technology & Innovation Foundation (ITIF), 97.6% of the U.S. population has access to a fixed connection of at least 25/3 Mbps through asymmetric digital subscriber line (ADSL), cable, fiber, or fixed wireless. Pew Research reports that 93% of its survey respondents indicated they have a broadband connection at home.
Pew’s results are in line with U.S. Census estimates from the American Community Survey. The figure below, summarizing information from 2021, shows that 92.6% of households had a broadband subscription or had access without having to pay for a subscription. Assuming ITIF’s estimates of broadband availability are accurate, then among households without broadband, approximately two-thirds of them—6.4 million—have access to broadband.
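The arithmetic behind that two-thirds figure is easy to reconstruct. Here is a back-of-the-envelope sketch; the household count of roughly 128 million is my own approximation of the Census total, and it treats ITIF’s population-coverage figure as if it applied to households.

```python
# Back-of-the-envelope reconstruction; the household count is an approximation, not a cited figure.
households = 128_000_000      # rough U.S. household count for 2021 (assumption)
no_subscription = 1 - 0.926   # share of households without a broadband subscription
no_access = 1 - 0.976         # ITIF's coverage gap, treated here as a household share

covered_nonsubscribers = no_subscription - no_access          # non-subscribers who do have access
print(round(covered_nonsubscribers / no_subscription, 2))     # ~0.68, about two-thirds
print(round(covered_nonsubscribers * households / 1e6, 1))    # ~6.4 million households
```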
On the one hand, price is obviously a major factor driving adoption. For example, among the 7.4% of households who do not use the internet at home, Census surveys show about one-third indicate that price is one reason for not having an at-home connection, responding that they “can’t afford it” or that it’s “not worth the cost.” On the other hand, more than half of respondents said they “don’t need it” or are “not interested.”
But George Ford argues that these responses to the Census surveys are unhelpful in evaluating the importance of price relative to other factors. For example, if a consumer says broadband is “not worth the cost,” it’s not clear whether the “worth” is too low or the “cost” is too high. Consumers who are “not interested” in subscribing to an internet service are implicitly saying that they are not interested at current prices. In other words, there may be a price that is sufficiently low that uninterested consumers become interested.
But in some cases, that price may be zero—or even negative.
A 2022 National Telecommunications and Information Administration (NTIA) survey of internet use found that about 75% of offline households said they wanted to pay nothing for internet access. In addition, as shown in the figure above, about a quarter of households without a broadband or smartphone subscription claim that they can access the internet at home without paying for a subscription. Thus, there may be a substantial share of nonadopters who would not adopt even if the service were free to the consumer.
Aside from surveys, another way to evaluate the importance of price in internet-adoption decisions is with empirical estimates of demand elasticity. The price elasticity of demand is the percent change in the quantity demanded for a good, divided by the percent change in price. A demand curve with an elasticity between 0 and –1 is said to be inelastic, meaning the change in the quantity demanded is relatively less responsive to changes in price. An elasticity of less than –1 is said to be elastic, meaning the change in the quantity demanded is relatively more responsive to changes in price.
Michael Williams and Wei Zao’s survey of the research on the price elasticity of demand concludes that demand for internet services has traditionally been inelastic and has “become increasingly so over time.” They report a 2019 elasticity of –0.05 (down from –0.69 in 2008). George Ford’s 2021 study estimates an elasticity ranging from –0.58 to –0.33. These results indicate that a subsidy program that reduced the price of internet services by 10% would increase adoption by anywhere from 0.5% (i.e., one-half of one percent) to 5.8%. In other words, a range from approximately zero to a small but significant increase.
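As a quick check on that range, the arithmetic is just the definition of elasticity applied to a 10% price cut. The sketch below uses the two endpoint estimates cited above; nothing else is assumed.

```python
# Percent change in adoption = elasticity * percent change in price.
price_change = -0.10                  # a subsidy that lowers the consumer price by 10%
for elasticity in (-0.05, -0.58):     # Williams & Zao's 2019 estimate and Ford's upper bound
    print(f"elasticity {elasticity}: adoption rises by {elasticity * price_change:.1%}")
# elasticity -0.05: adoption rises by 0.5%
# elasticity -0.58: adoption rises by 5.8%
```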
It is unsurprising that the demand for internet services is so inelastic, especially among those who do not subscribe to broadband or smartphone service. One reason is the nature of demand curves. Generally speaking, as quantity demanded increases (i.e., moves downward along the demand curve), the demand curve becomes less elastic, as shown in the figure below (which is an illustration of a hypothetical demand curve). With adoption currently at more than 90% of households, the remaining nonadopters are much less likely to adopt at any price.
Thus, there is a possibility that the ACP may be so successful that the program has hit a point of significant diminishing marginal returns. Now that nearly 95% of U.S. households with access to at-home internet actually use it, it may be very difficult and costly to convert the remaining 5% of nonadopters. For example, if Williams & Zao’s estimate of a price elasticity of –0.05 is correct, then even a subsidy that provided “free” internet would convert only half of the 5% of nonadopters.
Keeping the Country Connected
With all of this in mind, it’s important to recognize that success should not be measured solely by adoption rates.
The ACP is not an attempt to create a perfect government program, but rather to address the imperfect realities we face. Some individuals may never adopt internet services, just as some never installed home-telephone services. Even at the peak of landline use in 1998, only 96.2% of households had one.
On the other hand, those who value broadband access may be forced to discontinue service if faced with financial difficulties. Therefore, the program’s objective should encompass both connecting new users and ensuring that economically vulnerable individuals maintain access.
Instead of pursuing an ideal regulatory or subsidy program, we should focus on making the most informed decisions in a context where information is limited. We know there is general demand for internet access and that a significant number of households might discontinue services during economic downturns. And we also know that, in light of these realities, numerous stakeholders advocate for invasive interventions in the broadband market, potentially jeopardizing private investment incentives.
Thus, even if the ACP program is not perfect in itself, it goes a long way toward satisfying the need to make sure the least well-off stay connected, while also allowing private providers to continue their track record of providing high-speed, affordable broadband.
And although we do not have data at the moment demonstrating exactly how many households would discontinue internet service in the absence of subsidies, if Congress does not appropriate additional ACP funds, we may soon have an unfortunate natural experiment that helps us to find out.
In a Feb. 14 column in the Wall Street Journal, Commissioner Christine Wilson announced her intent to resign her position on the Federal Trade Commission (FTC). For those curious to know why, she beat you to the punch in the title and subtitle of her column: “Why I’m Resigning as an FTC Commissioner: Lina Khan’s disregard for the rule of law and due process make it impossible for me to continue serving.”
This is the seventh FTC roundup I’ve posted to Truth on the Market since joining the International Center for Law & Economics (ICLE) last September, having left the FTC at the end of August. Relentlessly astute readers of this column may have observed that I cited (and linked to) Commissioner Wilson’s dissents in five of my six previous efforts—actually, to three of them in my Nov. 4 post alone.
As anyone might guess, I’ve linked to Wilson’s dissents (and concurrences, etc.) for the same reason I’ve linked to other sources: I found them instructive in some significant regard. Priors and particular conclusions of law aside, I generally found Wilson’s statements to be well-grounded in established principles of antitrust law and economics. I cannot say the same about statements from the current majority.
Commission dissents are not merely the bases for blog posts or venues for venting. They can provide a valuable window into agency matters for lawmakers and, especially, for the courts. And I would suggest that they serve an important institutional role at the FTC, whatever one thinks of the merits of any specific matter. There’s really no point to having a five-member commission if all its votes are unanimous and all its opinions uniform. Moreover, establishing the realistic possibility of dissent can lend credence to those commission opinions that are unanimous. And even in these fractious times, there are such opinions.
Wilson did not spring forth fully formed from the forehead of the U.S. Senate. She began her FTC career as a Georgetown student, serving as a law clerk in the Bureau of Competition; she returned some years later to serve as chief of staff to Chairman Tim Muris; and she returned again when confirmed as a commissioner in April 2018 (later sworn in in September 2018). In between stints at the FTC, she gained antitrust experience in private practice, both in law firms and as in-house counsel. I would suggest that her agency experience, combined with her work in the private sector, provided a firm foundation for the judgments required of a commissioner.
Daniel Kaufman, former acting director of the FTC’s Bureau of Consumer Protection, reflected on Wilson’s departure here. Personally, with apologies for the platitude, I would like to thank Commissioner Wilson for her service. And, not incidentally, for her consistent support for agency staff.
Her three Democratic colleagues on the commission also thanked her for her service, if only collectively, and tersely: “While we often disagreed with Commissioner Wilson, we respect her devotion to her beliefs and are grateful for her public service. We wish her well in her next endeavor.” That was that. No doubt heartfelt. Wilson’s departure column was a stern rebuke to the Commission, so there’s that. But then, stern rebukes fly in all directions nowadays.
While I’ve never been a commissioner, I recall a far nicer and more collegial sendoff when I departed from my lowly staff position. Come to think of it, I had a nicer sendoff when I left a large D.C. law firm as a third-year associate bound for a teaching position, way back when.
So, what else is new?
In January, I noted that “the big news at the FTC is all about noncompetes”; that is, about the FTC’s proposed rule to ban the use of noncompetes more-or-less across the board. The rule would cover all occupations and all income levels, with a narrow exception for the sale of the business in which the “employee” has at least a 25% ownership stake (why 25%?), and a brief nod to statutory limits on the commission’s regulatory authority with regard to nonprofits, common carriers, and some other entities.
Colleagues Brian Albrecht (and here), Alden Abbott, Gus Hurwitz, and Corbin K. Barthold also have had things to say about it. I suggested that there were legitimate reasons to be concerned about noncompetes in certain contexts—sometimes on antitrust grounds, and sometimes for other reasons. But certain contexts are far from all contexts, and a mixed and developing body of economic literature, coupled with limited FTC experience in the subject, did not militate in favor of nearly so sweeping a regulatory proposal. This is true even before we ask practical questions about staffing for enforcement or, say, whether the FTC Act conferred the requisite jurisdiction on the agency.
This is the first or second FTC competition rulemaking ever, depending on how one counts, and it is the first this century, in any case. Here’s administrative scholar Thomas Merrill on FTC competition rulemaking. Given the Supreme Court’s recent articulation of the major questions doctrine in West Virginia v. EPA, a more modest and bipartisan proposal might have been far more prudent. A bad turn at the court can lose more than the matter at hand. Comments are due March 20, by the way.
Now comes a missive from the House Judiciary Committee, along with multiple subcommittees, about the noncompete NPRM. The letter opens by stating that “The Proposed Rule exceeds its delegated authority and imposes a top-down one-size-fits-all approach that violates basic American principles of federalism and free markets.” And “[t]he Biden FTC’s proposed rule on non-compete clauses shows the radicalness of the so-called ‘hipster’ antitrust movement that values progressive outcomes over long-held legal and economic principles.”
Ouch. Other than that, Mr. Jordan, how did you like the play?
There are several single-spaced pages on the “FTC’s power grab” before the letter gets to a specific, and substantial, formal document request in the service of congressional oversight. That does not stop the rulemaking process, but it does not bode well either.
Part of why this matters is that there’s still solid, empirically grounded, pro-consumer work that’s at risk. In my first Truth on the Market post, I applauded FTC staff comments urging New York State to reject a certificate of public advantage (COPA) application. As I noted there, COPAs are rent-seeking mechanisms chiefly aimed at insulating anticompetitive mergers (and sometimes conduct) from federal antitrust scrutiny. Commission and staff opposition to COPAs was developed across several administrations on well-established competition principles and a significant body of research regarding hospital consolidation, health care prices, and quality of care.
Office of Policy Planning (OPP) Director Elizabeth Wilkins has now announced that the parties in question have abandoned their proposed merger. Wilkins thanks the staff of OPP, the Bureau of Economics, and the Bureau of Competition for their work on the matter, and rightly so. There’s no new-fangled notion of Section 5 or mergers at play. The work has developed over decades and it’s the sort of work that should continue. Notwithstanding numerous (if not legion) departures, good and experienced staff and established methods remain, and ought not to be repudiated, much less put at risk.
I won’t recapitulate the much-discussed Meta/Within case, but on the somewhat-less-discussed matter of the withdrawal, I’ll consider why the FTC announced that the matter “is withdrawn from adjudication, and that all proceedings before the Administrative Law Judge be and they hereby are stayed.” While the matter was not litigated to its conclusion in federal court, the substantial and workmanlike opinion denying the preliminary injunction made it clear that the FTC had lost on the facts under both of the theories of harm to potential competition that it had advanced.
“Having reviewed and considered the objective evidence of Meta’s capabilities and incentives, the Court is not persuaded that this evidence establishes that it was ‘reasonably probable’ Meta would enter the relevant market.”
An appeal in the 9th U.S. Circuit Court of Appeals likely seemed fruitless. Stopping short of a final judgment, the FTC could have tried for a do-over in its internal administrative Part 3 process, and might have fared well before itself, but that would have demanded considerable additional resources in a case that, in the long run, was bound to be a loser. Bloomberg had previously reported that the commission voted to proceed with the case against the merger contra the staff’s recommendation. Here, the commission noted that “Complaint Counsel [the Commission’s own staff] has not registered any objection” to Meta’s motion to withdraw proceedings from adjudication.
There are novel approaches to antitrust. And there are the courts and the law. And, as noted above, many among the staff are well-versed in that law and experienced at investigations. You can’t always get what you want, but if you try sometimes, you get what you deserve.
Under a recently proposed rule, the Federal Trade Commission (FTC) would ban the use of noncompete terms in employment agreements nationwide. Noncompetes are contracts that workers sign saying they agree to not work for the employer’s competitors for a certain period. The FTC’s rule would be a major policy change, regulating future contracts and retroactively voiding current ones. With limited exceptions, it would cover everyone in the United States.
When I scan academic economists’ public commentary on the ban over the past few weeks (which basically means people on Twitter), I see almost universal support for the FTC’s proposed ban. You see similar support if you expand to general econ commentary, like Timothy Lee at Full Stack Economics. Where you see pushback, it is from people at think tanks (like me) or hushed skepticism, compared to the kind of open disagreement you see on most policy issues.
The proposed rule grew out of an executive order by President Joe Biden in 2021, which I wrote about at the time. My argument was that there is a simple economic rationale for the contract: noncompetes encourage both parties to invest in the employee-employer relationship, just like marriage contracts encourage spouses to invest in each other.
Somehow, reposting my newsletter on the economic rationale for noncompetes has turned me into a “pro-noncompete guy” on Twitter.
The discussions have been disorienting. I feel like I’m taking crazy pills! If you ask me, “what new thing should policymakers do to address labor market power?” I would probably say something about noncompetes! Employers abuse them. The stories about people unable to find a new job because noncompetes bind them are devastating.
Yet, while recognizing the problems with noncompetes, I do not support the complete ban.
That puts me out of step with most vocal economics commentators. Where does this disagreement come from? How do I think about policy generally, and why am I the odd one out?
My Interpretation of the Research
One possibility is that I’m not such a lonely voice, and that the sample of vocal Twitter users is biased toward particular policy views. The University of Chicago Booth School of Business’ Initiative on Global Markets recently conducted a poll of academic economists about noncompetes, which mostly finds differing opinions and levels of certainty about the effects of a ban. For example, 43% were uncertain that a ban would generate a “substantial increase in wages in the affected industries.” However, maybe that is because the word substantial is unclear. That’s a problem with these surveys.
Still, more economists surveyed agreed than disagreed. I would answer “disagree” to that statement, as worded.
Why do I differ? One cynical response would be that I don’t know the recent literature, and my views are outdated. But from the research I’ve done for a paper that I’m writing on labor-market power, I’m fairly well-versed in the noncompete literature. I don’t know it better than the active researchers in the field, but I know it better than the average economist responding to the FTC’s proposal and definitely better than most lawyers. My disagreement also isn’t about me being some free-market fanatic. I’m not, and some other free-market types are skeptical of noncompetes. My priors are more complicated (critics might say “confused”) than that, as I will explain below.
After much soul-searching, I’ve concluded that the disagreement is real and results from my—possibly weird—understanding of how we should go from the science of economics to the art of policy. That’s what I want to explain today and get us to think more about.
Let’s start with the literature and the science of economics. First, we need to know “the facts.” The original papers focused a lot on collecting data and facts about noncompetes. We don’t have amazing data on the prevalence of noncompetes, but we know something, which is more than we could say a decade ago. For example, Evan Starr, J.J. Prescott, & Norman Bishara (2021) conducted a large survey in which they found that “18 percent of labor force participants are bound by noncompetes, with 38 percent having agreed to at least one in the past.”[1] We need to know these things, and we should thank the researchers for collecting the data.
With these facts, we can start running regressions. In addition to the paper above, many papers develop indices of noncompete "enforceability" by state. Then we can regress things like wages on an enforceability index. Many papers—like Starr, Prescott, & Bishara above—run cross-state regressions and find that wages are higher in states with higher noncompete enforceability. They also find more training where noncompetes are more enforceable. But that kind of correlation is littered with selection issues: high-income workers are more likely to sign noncompetes, so the correlation is not causal. The authors carefully explain this, but sometimes correlations are the best we have—e.g., if we want to study the effect of noncompetes on doctors' wages and their poaching of clients.
Some people will simply point to California (which has banned noncompetes for decades) and say, “see, noncompete bans don’t destroy an economy.” Unfortunately, many things make California unique, so while that is evidence, it’s hardly causal.
The most credible results come from recent changes in state policy. These allow us to run simple difference-in-difference types of analysis to uncover causal estimates. These results are reasonably transparent and easy to understand.
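As a rough illustration of the research design, here is a minimal difference-in-differences sketch in Python. The data are simulated, and the state labels, timing, and the 3% "true" effect are placeholders, not numbers from the papers discussed below.

```python
# A minimal diff-in-diff sketch with simulated data. The states, dates, and the
# 3% "true" effect are placeholders, not estimates from any cited paper.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
states = ["treated_state"] * 500 + ["control_state"] * 500
df = pd.DataFrame({
    "state": states * 2,
    "post": [0] * 1000 + [1] * 1000,   # before/after the hypothetical ban
})
df["treated"] = (df["state"] == "treated_state").astype(int)

# Simulated log wages: a common time trend plus a 3% bump for treated workers
# after the policy change.
df["log_wage"] = (
    3.0 + 0.02 * df["post"] + 0.05 * df["treated"]
    + 0.03 * df["treated"] * df["post"] + rng.normal(0, 0.1, len(df))
)

# The interaction coefficient is the diff-in-diff estimate of the wage effect.
model = smf.ols("log_wage ~ treated * post", data=df).fit()
print(model.params["treated:post"])   # should land close to 0.03
```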
Michael Lipsitz & Evan Starr (2021) (are you starting to recognize that Starr name?) study a 2008 Oregon ban on noncompetes for hourly workers. They find the ban increased hourly wages overall by 2 to 3%, which implies that those actually signing noncompetes may have seen wages rise by as much as 14 to 21%. This 3% number is what the FTC assumes will apply to the whole economy when it estimates a $300 billion increase in wages per year under its ban. It's a linear extrapolation.
Similarly, in 2015, Hawaii banned noncompetes for new hires within tech industries. Natarajan Balasubramanian et al. (2022) find that the ban increased new-hire wages by 4%. They also estimate that the ban increased worker mobility by 11%. Labor economists generally think of worker turnover as a good thing. Still, it is tricky here, when the whole point of the agreement is to reduce turnover and encourage a better relationship between workers and firms.
The FTC also points to three studies that find that banning noncompetes increases innovation, according to a few different measures. I won't say anything about these because you can infer my reaction based on what I will say below on wage studies. If anything, I'm more skeptical of innovation studies, simply because I don't think we have a good understanding of what causes innovation generally, let alone how to measure the impact of noncompetes on innovation. You can read what the FTC cites on innovation and make up your own mind.
From Academic Research to an FTC Ban
Now that we understand some of the papers, how do we move to policy?
Let’s assume I read the evidence basically as the FTC does. I don’t, and will explain as much in a future paper, but that’s not the debate for this post. How do we think about the optimal policy response, given the evidence?
There are two main reasons I am not ready to extrapolate from the research to the proposed ban. Every economist knows them: the dreaded pests of external validity and general equilibrium effects.
Let's consider external validity through the Oregon ban paper and the Hawaii tech ban paper. Again, these are not critiques of the papers, but of how the FTC wants to move from them to a national ban.
Notice above that I said the Oregon ban went into effect in 2008, which means it happened as the whole country was entering a major recession and financial crisis. The authors do their best to deal with differential responses to the recession, but every state in their data went through a recession. Did the recession matter for the results? It seems plausible to me.
Another important detail about the Oregon ban is that it only applied to hourly workers, while the FTC rule would apply to all workers. You can't confidently assume that hourly workers are just like salaried workers. Hourly workers who sign noncompetes are less likely to read them, less likely to consult with their family about them, and less likely to negotiate over them. If part of the problem with noncompetes is that people don't understand them until it is too late, you will overstate the harm if you only look at hourly workers, who understand noncompetes even less than salaried workers do. Also, with a partial ban, Lipsitz & Starr recognize that spillovers matter and that firms respond in different ways, such as converting workers to salaried status to keep the noncompete, responses that won't exist with a national ban. It's not the same experiment at a national scale. Which way will the effect change? How confident are we?
The effects of the Hawaii ban are likely not the same as those of the FTC's ban would be. First of all, Hawaii is weird. It has a small population, and tech is a small part of the state's economy. The ban even excluded telecom from within the tech sector. We are talking about a targeted ban. What does the Hawaii experiment tell us about a ban on noncompetes for tech workers in a non-island location like Boston? What does it tell us about a national ban on all noncompetes, like the FTC is proposing? Maybe these things do not matter. To further complicate things, the policy change also included a ban on nonsolicitation clauses. Maybe the nonsolicitation clause was unimportant. But I'd want more research and more policy experimentation to tease out these details.
As you dig into these papers, you find more and more of these issues. That’s not a knock on the papers but an inherent difficulty in moving from research to policy. It’s further compounded by the fact that this empirical literature is still relatively new.
What will happen when we scale these bans up to the national level? That's a huge question for any policy change, especially one as large as a national ban. The FTC seems confident in what will happen, but moving from micro to macro is not trivial. Macroeconomists are starting to get serious about how the micro adds up to the macro, but it takes work.
I want to know more. Which effects are amplified when scaled? Which effects drop off? What's the full National Income and Product Accounts (NIPA) accounting? I don't know. No one does, because we don't have any of that sort of price-theoretic, general equilibrium research. There are lots of margins that firms will adjust on, and there's always another margin we are not capturing. Instead, what the FTC did is a simple linear extrapolation from the state studies to a national ban: studies find a 3% wage effect here, so multiply that by the number of workers.
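To make the extrapolation concrete, here is a minimal back-of-the-envelope sketch. The 3% effect is the Oregon estimate quoted above; the roughly $10 trillion aggregate wage bill is my own illustrative assumption, not a figure from the FTC's analysis.

```python
# Back-of-the-envelope sketch of the linear extrapolation described above.
# The 3% wage effect comes from the Oregon study cited in the text; the
# aggregate wage bill is an illustrative assumption, not an FTC number.

wage_effect = 0.03                # ~3% wage increase from the Oregon ban
aggregate_wage_bill = 10e12       # assumed U.S. annual wage bill, in dollars

implied_gain = wage_effect * aggregate_wage_bill
print(f"Implied annual wage increase: ${implied_gain / 1e9:.0f} billion")
# -> roughly $300 billion, in line with the FTC figure quoted above.
# The extrapolation assumes the state-level effect applies uniformly to all
# workers nationwide, which is exactly the external-validity leap at issue.
```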
When we are doing policy work, we would also like some sort of welfare analysis. It’s not just about measuring workers in isolation. We need a way to think about the costs and benefits and how to trade them off. All the diff-in-diff regressions in the world won’t get at it; we need a model.
Luckily, we have one paper that blends empirics and theory to do welfare analysis.[2] Liyan Shi has a paper forthcoming in Econometrica—which is no joke to publish in—titled “Optimal Regulation of Noncompete Contracts.” In it, she studies a model meant to capture the tradeoff between encouraging a firm’s investment in workers and reducing labor mobility. To bring the theory to data, she scrapes data on U.S. public firms from Securities and Exchange Commission filings and merges those with firm-level data from Compustat, plus some others, to get measures of firm investment in intangibles. She finds that when she brings her model to the data and calibrates it, the optimal policy is roughly a ban on noncompetes.
It's an impressive paper. Again, I'm unsure how much to take from it to extrapolate to a ban on all workers. First, as I've written before, we know publicly traded firms are different from private firms, and that difference has changed over time. Second, it's plausible that CEOs are different from other workers, and the relationship between CEO noncompetes and firm-level intangible investment isn't identical to the relationship between mid-level engineers and investment in those workers.
Beyond particular issues of generalizing Shi's paper, the larger concern is that this is the only paper that does a welfare analysis. That's troubling to me as a basis for a major policy change.
I think an analogy to taxation is helpful here. I’ve published a few papers about optimal taxation, so it’s an area I’ve thought more about. Within optimal taxation, you see this type of paper a lot. Here’s a formal model that captures something that theorists find interesting. Here’s a simple approach that takes the model to the data.
My favorite optimal-taxation papers take this approach. Take this paper that I absolutely love, "Optimal Taxation with Endogenous Insurance Markets" by Mikhail Golosov & Aleh Tsyvinski.[3] It is not a price-theory paper; it is a Theory—with a capital T—paper. I'm talking lemmas-and-theorems type of stuff: a bunch of QEDs, and then they calibrate their model to U.S. data.
How seriously should we take their quantitative exercise? After all, it was in the Quarterly Journal of Economics and my professors were assigning it, so it must be an important paper. But people who know this literature will quickly recognize that it’s not the quantitative result that makes that paper worthy of the QJE.
I was very confused by this early in my career. If we find the best paper, why not take the result completely seriously? My first publication, which was in the Journal of Economic Methodology, grew out of my confusion about how economists were evaluating optimal tax models. Why did professors think some models were good? How were the authors justifying that their paper was good? Sometimes papers are good because they closely match the data. Sometimes papers are good because they quantify an interesting normative issue. Sometimes papers are good because they expose an interesting means-ends analysis. Most of the time, papers do all three blended together, and it's up to the reader to be sufficiently steeped in the literature to understand what the paper is really doing. Maybe I read the Shi paper wrong, but I read it mostly as a theory paper.
One difference between the optimal-taxation literature and the optimal-noncompete policy world is that the Golosov & Tsyvinski paper is situated within 100 years of formal optimal-taxation models. The knowledgeable scholar of public economics can compare and contrast. The paper has a lot of value because it does one particular thing differently than everything else in the literature.
Or think about patent policy, which is what I compared noncompetes to in my original post. There is a tradeoff between encouraging innovation and restricting monopoly. Quantifying that tradeoff takes a model and data. Rafael Guthmann & David Rahman have a new paper on the optimal length of patents that Rafael summarized at Rafael's Commentary. The basic structure is very similar to the Shi or Golosov & Tsyvinski papers: interesting models supplemented with a calibration exercise to put a number on the optimal policy. Guthmann & Rahman find an optimal patent length of four to eight years, instead of the current 20.
Is that true? I don't know. I certainly wouldn't want the FTC to unilaterally put the number at four years because of one paper. But I am certainly glad for the contribution to the literature and to our understanding of the tradeoffs, and glad that I can situate that number within a literature asking similar questions.
I’m sorry to all the people doing great research on noncompetes, but we are just not there yet with them, by my reading. For studying optimal-noncompete policy in a model, we have one paper. It was groundbreaking to tie this theory to novel data, but it is still one welfare analysis.
My Priors: What’s Holding Me Back from the Revolution
In a world where you start without any thoughts about which direction is optimal (a uniform prior) and you observe one paper that says bans are net positive, you should think that bans are net positive. Some information is better than none and now you have some information. Make a choice.
But that’s not the world we live in. We all come to a policy question with prior beliefs that affect how much we update our beliefs.
For me, I have three slightly weird priors that I will argue you should also have but currently place me out of step with most economists.
First, I place more weight on theoretical arguments than most. No one sits back and just absorbs the data without using theory; that's impossible. All data requires theory. Still, I think it is meaningful to say some people place more weight on theory. I'm one of those people.
To be clear, I also care deeply about data. But I write theory papers and a theory-heavy newsletter. And I think these theories matter for how we think about data. The theoretical justification for noncompetes has been around for a long time, as I discussed in my original post, so I won't say more here.
The second way that I differ from most economists is even weirder. I place weight on the benefits of existing agreements or institutions. The longer they have been in place, the more weight I place on the benefits. Josh Hendrickson and I have a paper with Alex Salter that basically formalizes George Stigler's argument that "every long-lasting institution is efficient." When there are feedback mechanisms, such as with markets or democracy, the resulting institutions are the product of an evolutionary process that slowly selects more and more gains from trade. If they were so bad, people would get rid of them eventually. That's not a free-market bias, since it also means I think something like the Medicare system is likely an efficient form of social insurance and intertemporal bargaining for people in the United States.
Back to noncompetes: many companies use noncompetes in many different contexts, and many workers sign them. My prior is that they do so because a noncompete is a mutually beneficial contract that allows them to make trades in a world with transaction costs. As I explained in a recent post, Yoram Barzel taught us that, in a world with transaction costs, people will "erect social institutions to impose and enforce the restraints."
One possible rebuttal is that noncompetes, while they have existed for a long time, have only become common in the past few decades. That is not very long-lasting, so the FTC ban is a natural policy response to a new challenge and to the discovery that these contracts are actually bad. That response would persuade me more if the ban were brought about by a democratic bargain rather than an ideological agenda pushed by the chair of the FTC, which I think is closer to reality. That is Earl Thompson and Charlie Hickson's spin on Stigler's efficient-institutions point: ideology gets in the way.
Finally, relative to most economists, I place more weight on experimentation and feedback mechanisms. Most economists still think of the world through the lens of a benevolent planner doing a cost-benefit analysis. I do that sometimes, too, but I also think we need to take our own informational limitations seriously. That's why we talk about limited information all the time on my newsletter. Again, if we started completely agnostic, this wouldn't point one way or another: we recognize that we don't know much, but a slight signal pushes us either way. Paired with my previous point about evolution, though, it makes me hesitant about a national ban.
It's not a free-market bias, either. I'm not convinced the Jones Act is bad. I'm not convinced it's good, but Josh has convinced me that the question is complicated.
Because I’m not ready to easily say the science is settled, I want to know how we will learn if we are wrong. In a prior Truth on the Market post about the FTC rule, I quoted Thomas Sowell’s Knowledge and Decisions:
In a world where people are preoccupied with arguing about what decision should be made on a sweeping range of issues, this book argues that the most fundamental question is not what decision to make but who is to make it—through what processes and under what incentives and constraints, and with what feedback mechanisms to correct the decision if it proves to be wrong.
A national ban bypasses this and severely cuts off our ability to learn if we are wrong. That worries me.
Maybe this all means that I am too conservative and need to be more open to changing my mind. Maybe I’m inconsistent in how I apply these ideas. After all, “there’s always another margin” also means that the harm of a policy will be smaller than anticipated since people will adjust to avoid the policy. I buy that. There are a lot more questions to sort through on this topic.
Unfortunately, the discussion around noncompetes has been short-circuited by the FTC. Hopefully, this post gave you tools to think about a variety of policies going forward.
[1] The U.S. Bureau of Labor Statistics now collects data on noncompetes. Since 2017, we've had one question on noncompetes in the National Longitudinal Survey of Youth 1997. Donna S. Rothstein and Evan Starr (2021) also find that noncompetes cover around 18% of workers. It is very plausible that this is an understatement, since noncompetes are complex legal documents, and workers may not understand that they have one.
[2] Other papers combine theory and empirics. Kurt Lavetti, Carol Simon, & William D. White (2023) build a model to derive testable implications about holdups. They use data on doctors and find that noncompetes raise returns to tenure and lower turnover.
[3] It’s not exactly the same. The Golosov & Tsyvinski paper doesn’t even take the calibration seriously enough to include the details in the published version. Shi’s paper is a more serious quantitative exercise.
Gus Hurwitz called the bill dead in September. Then it passed the Senate Judiciary Committee. Now, there are some reports that suggest it could be added to the obviously unrelated National Defense Authorization Act (it should be noted that the JCPA was not included in the version of NDAA introduced in the U.S. House).
For an overview of the bill and its flaws, see Dirk Auer and Ben Sperry’s tl;dr. The JCPA would force “covered” online platforms like Facebook and Google to pay for journalism accessed through those platforms. When a user posts a news article on Facebook, which then drives traffic to the news source, Facebook would have to pay. I won’t get paid for links to my banger cat videos, no matter how popular they are, since I’m not a qualifying publication.
I’m going to focus on one aspect of the bill: the use of “final offer arbitration” (FOA) to settle disputes between platforms and news outlets. FOA is sometimes called “baseball arbitration” because it is used for contract disputes in Major League Baseball. This form of arbitration has also been implemented in other jurisdictions to govern similar disputes, notably by the Australian ACCC.
Before getting to the more complicated case, let’s start simple.
Scenario #1: I’m a corn farmer. You’re a granary who buys corn. We’re both invested in this industry, so let’s assume we can’t abandon negotiations in the near term and need to find an agreeable price. In a market, people make offers. Prices vary each year. I decide when to sell my corn based on prevailing market prices and my beliefs about when they will change.
Scenario #2: A government agency comes in (without either of us asking for it) and says the price of corn this year is $6 per bushel. In conventional economics, we call that a price regulation. Unlike a market price, where both sides sign off, regulated prices do not enjoy mutual agreement by the parties to the transaction.
Scenario #3: Instead of a price imposed independently by regulation, one of the parties (say, the corn farmer) may seek a higher price of $6.50 per bushel and petition the government. The government agrees, and the price is set at $6.50. We would still call that price regulation, but the outcome reflects what at least one of the parties wanted, and some may argue that it helps "the little guy." (Let's forget that many modern farms are large operations with bargaining power. In our heads and in this story, the corn farmer is still a struggling mom-and-pop about to lose their house.)
Scenario #4: Instead of listening only to the corn farmer, both the farmer and the granary tell the government their “final offer” and the government picks one of those offers, not somewhere in between. The parties don’t give any reasons—just the offer. This is called “final offer arbitration” (FOA).
As an arbitration mechanism, FOA makes sense, even if it is not always ideal. It avoids some of the issues that can attend “splitting the difference” between the parties.
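A minimal sketch may help fix ideas. It uses the standard textbook assumption, not stated in the post, that the arbitrator picks whichever final offer is closer to its own view of a fair price; the dollar figures are the corn numbers from the scenarios above plus made-up offers.

```python
# Minimal sketch of final-offer arbitration (FOA), assuming the arbitrator
# selects whichever final offer is closer to its own (private) fair value.

def foa(buyer_offer: float, seller_offer: float, arbitrator_value: float) -> float:
    """Return the offer the arbitrator selects; no splitting the difference."""
    if abs(seller_offer - arbitrator_value) < abs(buyer_offer - arbitrator_value):
        return seller_offer
    return buyer_offer

# Corn example: the arbitrator privately thinks $6.10/bushel is fair.
print(foa(buyer_offer=5.80, seller_offer=6.50, arbitrator_value=6.10))  # -> 5.80
print(foa(buyer_offer=5.80, seller_offer=6.30, arbitrator_value=6.10))  # -> 6.30
```

The sketch is purely mechanical: because the arbitrator never splits the difference, each side has an incentive to moderate its ask, but the winning number is still imposed rather than agreed to.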
While it is better than other systems, it is still a price regulation. In the JCPA’s case, it would not be imposed immediately; the two parties can negotiate on their own (in the shadow of the imposed FOA). And the actual arbitration decision wouldn’t technically be made by the government, but by a third party. Fine. But ultimately, after stripping away the veneer, this is all just an elaborate mechanism built atop the threat of the government choosing the price in the market.
I call that price regulation. Unlike in voluntary markets, at least one of the parties does not agree with the final price, and neither party explicitly chose the arbitration mechanism. The losing party neither likes the outcome nor agreed to the process that produced it.
The JCPA’s FOA system is not precisely like the baseball situation. In baseball, there is choice on the front-end. Players and owners agree to the system. In baseball, there is also choice after negotiations start. Players can still strike; owners can enact a lockout. Under the JCPA, the platforms must carry the content. They cannot walk away.
I’m an economist, not a philosopher. The problem with force is not that it is unpleasant. Instead, the issue is that force distorts the knowledge conveyed through market transactions. That distortion prevents resources from moving to their highest valued use.
How do we know the apple is more valuable to Armen than it is to Ben? In a market, “we” don’t need to know. No benevolent outsider needs to pick the “right” price for other people. In most free markets, a seller posts a price. Buyers just need to decide whether they value it more than that price. Armen voluntarily pays Ben for the apple and Ben accepts the transaction. That’s how we know the apple is in the right hands.
Often, transactions are about more than just price. Sometimes there may be haggling and bargaining, especially on bigger purchases. Workers negotiate wages, even when the ad stipulates a specific wage. Home buyers make offers and negotiate.
But this just kicks the information problem up one more level. Negotiating is costly. That is why, in anticipation of costly disputes down the road, the two sides sometimes voluntarily agree to use an arbitration mechanism. MLB players agree to baseball arbitration. That agreement is the two sides revealing that they believe the costs of disputes outweigh the losses from arbitration.
Again, each side conveys its beliefs and values by agreeing to the arbitration mechanism. Each step in the negotiation process allows the parties to convey the relevant information. No outsider needs to know "the right" answer. For a choice to convey information about relative values, it needs to be freely chosen.
At an abstract level, any trade has two parts. First, people agree to the mechanism, which determines who makes what kinds of offers. At the grocery store, the mechanism is “seller picks the price and buyer picks the quantity.” For buying and selling a house, the mechanism is “seller posts price, buyer can offer above or below and request other conditions.” After both parties agree to the terms, the mechanism plays out and both sides make or accept offers within the mechanism.
We need choice over both parts for the price to capture each side's private information.
For example, suppose someone comes up to you with a gun and says “give me your wallet or your watch. Your choice.” When you “choose” your watch, we don’t actually call that a choice, since you didn’t pick the mechanism. We have no way of knowing whether the watch means more to you or to the guy with the gun.
When the JCPA forces Facebook to negotiate with a local news website and Facebook offers to pay a penny per visit, it conveys no information about the relative value that the news website is generating for Facebook. Facebook may just be worried that the website will ask for two pennies and the arbitrator will pick the higher price. It is equally plausible that in a world without transaction costs, the news would pay Facebook, since Facebook sends traffic to them. Is there any chance the arbitrator will pick Facebook’s offer if it asks to be paid? Of course not, so Facebook will never make that offer.
For sure, things are imposed on us all the time. That is the nature of regulation. Energy prices are regulated. I’m not against regulation. But we should defend that use of force on its own terms and be honest that the system is one of price regulation. We gain nothing by a verbal sleight of hand that turns losing your watch into a “choice” and the JCPA’s FOA into a “negotiation” between platforms and news.
In economics, we often ask about market failures. In this case, is there a sufficient market failure in the market for links to justify regulation? Is that failure resolved by this imposition?
A recent viral video captures a prevailing sentiment in certain corners of social media, and among some competition scholars, about how mergers supposedly work in the real world: firms start competing on price, one firm loses out, that firm agrees to sell itself to the other firm and, finally, prices are jacked up. (Warning: Keep the video muted. The voice-over is painful.)
The story ends there. In this narrative, the combination offers no possible cost savings. The owner of the firm that sold doesn't start a new firm and begin competing tomorrow, nor does anyone else. The story ends with customers getting screwed.
And in this telling, it’s not just horizontal mergers that look like the one in the viral egg video. It is becoming a common theory of harm regarding nonhorizontal acquisitions that they are, in fact, horizontal acquisitions in disguise. The acquired party may possibly, potentially, with some probability, in the future, become a horizontal competitor. And of course, the story goes, all horizontal mergers are anticompetitive.
Therefore, we should have the same skepticism toward all mergers, regardless of whether they are horizontal or vertical. Steve Salop has argued that a problem with the Federal Trade Commission’s (FTC) 2020 vertical merger guidelines is that they failed to adopt anticompetitive presumptions.
This perspective is not just a meme on Twitter. The FTC and U.S. Justice Department (DOJ) are currently revising their guidelines for merger enforcement and have issued a request for information (RFI). The working presumption in the RFI (and we can guess this will show up in the final guidelines) is exactly the takeaway from the video: Mergers are bad. Full stop.
The RFI repeatedly requests information that would support the conclusion that the agencies should strengthen merger enforcement, rather than information that might point toward either stronger or weaker enforcement. For example, the RFI asks:
What changes in standards or approaches would appropriately strengthen enforcement against mergers that eliminate a potential competitor?
This framing presupposes that enforcement should be strengthened against mergers that eliminate a potential competitor.
Do Monopoly Profits Always Exceed Joint Duopoly Profits?
Should we assume enforcement, including vertical enforcement, needs to be strengthened? In a world with lots of uncertainty about which products and companies will succeed, why would an incumbent buy out every potential competitor? The basic idea is that, since profits are highest when there is only a single monopolist, that seller will always have an incentive to buy out any competitors.
The punchline for this anti-merger presumption is “monopoly profits exceed duopoly profits.” The argument is laid out most completely by Salop, although the argument is not unique to him. As Salop points out:
I do not think that any of the analysis in the article is new. I expect that all the points have been made elsewhere by others and myself.
Under the model that Salop puts forward, there should, in fact, be a presumption against any acquisition, not just horizontal acquisitions. He argues that:
Acquisitions of potential or nascent competitors by a dominant firm raise inherent anticompetitive concerns. By eliminating the procompetitive impact of the entry, an acquisition can allow the dominant firm to continue to exercise monopoly power and earn monopoly profits. The dominant firm also can neutralize the potential innovation competition that the entrant would provide.
We see a presumption against mergers in the recent FTC challenge of Meta's purchase of Within. While Meta owns Oculus, a virtual-reality headset, and Within owns virtual-reality fitness apps, the FTC challenged the acquisition on grounds that:
The Acquisition would cause anticompetitive effects by eliminating potential competition from Meta in the relevant market for VR dedicated fitness apps.
Given the prevalence of this perspective, it is important to examine the basic model’s assumptions. In particular, is it always true that—since monopoly profits exceed duopoly profits—incumbents have an incentive to eliminate potential competition for anticompetitive reasons?
I will argue no. The notion that monopoly profits exceed joint-duopoly profits rests on two key assumptions that hinder the simple application of the “merge to monopoly” model to antitrust.
First, even in a simple model, it is not always true that monopolists have both the ability and incentive to eliminate any potential entrant, simply because monopoly profits exceed duopoly profits.
For the simplest complication, suppose there are two possible entrants, rather than the common assumption of just one entrant at a time. The monopolist must now pay each of the entrants enough to prevent entry. But how much? If the incumbent has already paid one potential entrant not to enter, the second could then enter the market as a duopolist, rather than as one of three oligopolists. Therefore, the incumbent must pay the second entrant an amount sufficient to compensate a duopolist, not their share of a three-firm oligopoly profit. The same is true for buying the first entrant. To remain a monopolist, the incumbent would have to pay each possible competitor duopoly profits.
Because monopoly profits exceed duopoly profits, it is profitable to pay a single entrant half of the duopoly profit to prevent entry. It is not, however, necessarily profitable for the incumbent to pay both potential entrants half of the duopoly profit to avoid entry by either.
Now go back to the video. Suppose two passersby, who also happen to have chickens at home, notice that they can sell their eggs. The best part? They don’t have to sit around all day; the lady on the right will buy them. The next day, perhaps, two new egg sellers arrive.
For a simple example, consider a Cournot oligopoly model with an industry inverse demand curve of P(Q) = 1 − Q and constant marginal costs normalized to zero. In a market with N symmetric sellers, each seller earns 1/((N+1)^2) in profits. A monopolist makes a profit of 1/4. A duopolist can expect to earn a profit of 1/9. If there are three potential entrants, plus the incumbent, the monopolist must pay each of them the duopoly profit of 1/9, for a total of 3 × (1/9) = 1/3, which exceeds the monopoly profit of 1/4.
In the Nash/Cournot equilibrium, the incumbent will not acquire any of the competitors, since it is too costly to keep them all out. With enough potential entrants, the monopolist in any market will not want to buy any of them out. In that case, the outcome involves no acquisitions.
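To see the arithmetic, here is a minimal Python sketch of the Cournot example above. The demand curve and zero marginal costs come straight from that example, and treating the required payoff to each potential entrant as the duopoly profit follows the logic described above; how many potential entrants a real market has is, of course, the empirical question.

```python
from fractions import Fraction

def cournot_profit(n_firms: int) -> Fraction:
    """Per-firm Cournot profit with inverse demand P(Q) = 1 - Q and zero marginal cost."""
    return Fraction(1, (n_firms + 1) ** 2)

monopoly_profit = cournot_profit(1)   # 1/4
duopoly_profit = cournot_profit(2)    # 1/9

# To stay a monopolist, the incumbent must pay each potential entrant its
# duopoly profit, since refusing the deal leaves that entrant facing the
# incumbent one-on-one.
for n_entrants in range(1, 5):
    buyout_cost = n_entrants * duopoly_profit
    print(f"{n_entrants} entrant(s): total buyout cost = {buyout_cost}, "
          f"worth it for the incumbent: {buyout_cost < monopoly_profit}")

# With one entrant the buyout is profitable (1/9 < 1/4); with three or more it
# is not (3/9 > 1/4), so the pure monopoly-maintenance motive breaks down.
```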
If we observe an acquisition in a market with many potential entrants, which any given market may or may not have, it cannot be that the merger is solely about obtaining monopoly profits, since the model above shows that the incumbent doesn’t have incentives to do that.
If our model captures the dynamics of the market (which it may or may not, depending on a given case’s circumstances) but we observe mergers, there must be another reason for that deal besides maintaining a monopoly. The presence of multiple potential entrants overturns the antitrust implications of the truism that monopoly profits exceed duopoly profits. The question turns instead to empirical analysis of the merger and market in question, as to whether it would be profitable to acquire all potential entrants.
The second simplifying assumption that restricts the applicability of Salop’s baseline model is that the incumbent has the lowest cost of production. He rules out the possibility of lower-cost entrants in Footnote 2:
Monopoly profits are not always higher. The entrant may have much lower costs or a better or highly differentiated product. But higher monopoly profits are more usually the case.
If one allows the possibility that an entrant may have lower costs (even if those lower costs won’t be achieved until the future, when the entrant gets to scale), it does not follow that monopoly profits (under the current higher-cost monopolist) necessarily exceed duopoly profits (with a lower-cost producer involved).
One cannot simply assume that all firms have the same costs or that the incumbent is always the lowest-cost producer. This is not just a modeling choice but has implications for how we think about mergers. As Geoffrey Manne, Sam Bowman, and Dirk Auer have argued:
Although it is convenient in theoretical modeling to assume that similarly situated firms have equivalent capacities to realize profits, in reality firms vary greatly in their capabilities, and their investment and other business decisions are dependent on the firm’s managers’ expectations about their idiosyncratic abilities to recognize profit opportunities and take advantage of them—in short, they rest on the firm managers’ ability to be entrepreneurial.
Given the assumptions that all firms have identical costs and there is only one potential entrant, Salop’s framework would find that all possible mergers are anticompetitive and that there are no possible efficiency gains from any merger. That’s the thrust of the video. We assume that the whole story is two identical-seeming women selling eggs. Since the acquired firm cannot, by assumption, have lower costs of production, it cannot improve on the incumbent’s costs of production.
Many Reasons for Mergers
But whether a merger is efficiency-reducing and bad for competition and consumers needs to be proven, not just assumed.
If we take the basic acquisition model literally, every industry would have just one firm. Every incumbent would acquire every possible competitor, no matter how small. After all, monopoly profits are higher than duopoly profits, and so the incumbent both wants to and can preserve its monopoly profits. The model does not give us a way to disentangle when mergers would stop without antitrust enforcement.
Mergers do not affect the production side of the economy, under this assumption, but exist solely to gain the market power to manipulate prices. Since the model finds no downsides for the incumbent to acquiring a competitor, it would naturally acquire every last potential competitor, no matter how small, unless prevented by law.
Once we allow for the possibility that firms differ in productivity, however, it is no longer true that monopoly profits are greater than industry duopoly profits. We can see this most clearly in situations where there is “competition for the market” and the market is winner-take-all. If the entrant to such a market has lower costs, the profit under entry (when one firm wins the whole market) can be greater than the original monopoly profits. In such cases, monopoly maintenance alone cannot explain an entrant’s decision to sell.
An acquisition could therefore be both procompetitive and increase consumer welfare. For example, the acquisition could allow the lower-cost entrant to get to scale quicker. The acquisition of Instagram by Facebook, for example, brought the photo-editing technology that Instagram had developed to a much larger market of Facebook users and provided a powerful monetization mechanism that was otherwise unavailable to Instagram.
In short, the notion that incumbents can systematically and profitably maintain their market position by acquiring potential competitors rests on assumptions that, in practice, will regularly and consistently fail to materialize. It is thus improper to assume that most of these acquisitions reflect efforts by an incumbent to anticompetitively maintain its market position.
Slow wage growth and rising inequality over the past few decades have pushed economists more and more toward the study of monopsony power—particularly firms’ monopsony power over workers. Antitrust policy has taken notice. For example, when the Federal Trade Commission (FTC) and U.S. Justice Department (DOJ) initiated the process of updating their merger guidelines, their request for information included questions about how they should respond to monopsony concerns, as distinct from monopoly concerns.
From a pure economic-theory perspective, there is no important distinction between monopsony power and monopoly power. If Armen is trading his apples in exchange for Ben's bananas, we can call Armen the seller of apples or the buyer of bananas. The labels (buyer and seller) are essentially arbitrary; as a matter of pure theory, they don't matter. Monopsony and monopoly are just mirror images.
Some infer from this monopoly-monopsony symmetry, however, that extending antitrust to monopsony power will be straightforward. As a practical matter for antitrust enforcement, it becomes less clear. The moment we go slightly less abstract and use the basic models that economists use, monopsony is not simply the mirror image of monopoly. The tools that antitrust economists use to identify market power differ in the two cases.
Monopsony Requires Studying Output
Suppose that the FTC and DOJ are considering a proposed merger. For simplicity, they know that the merger will generate efficiency gains (and they want to allow it) or market power (and they want to stop it) but not both. The challenge is to look at readily available data like prices and quantities to decide which it is. (Let’s ignore the ideal case that involves being able to estimate elasticities of demand and supply.)
In a monopoly case, if there are efficiency gains from a merger, the standard model has a clear prediction: the quantity sold in the output market will increase. An economist at the FTC or DOJ with sufficient data will be able to see (or estimate) the efficiencies directly in the output market. Efficiency gains result in either greater output at lower unit cost or else product-quality improvements that increase consumer demand. Since the merger lowers prices for consumers, the agencies (assume they care about the consumer welfare standard) will let the merger go through, since consumers are better off.
In contrast, if the merger simply enhances monopoly power without efficiency gains, the quantity sold will decrease, either because the merging parties raise prices or because quality declines. Again, the empirical implication of the merger is seen directly in the market in question. Since the merger raises prices for consumers, the agencies (again assuming they care about the consumer welfare standard) will not let the merger go through, since consumers are worse off. In both cases, you judge monopoly power by looking directly at the market that may or may not have monopoly power.
Unfortunately, the monopsony case is more complicated. Ultimately, we can be certain of the effects of monopsony only by looking at the output market, not the input market where the monopsony power is claimed.
To see why, consider again a merger that generates either efficiency gains or market (now monopsony) power. A merger that creates monopsony power will necessarily reduce the prices and quantity purchased of inputs like labor and materials. An overly eager FTC may see a lower quantity of input purchased and jump to the conclusion that the merger increased monopsony power. After all, monopsonies purchase fewer inputs than competitive firms.
Not so fast. Fewer input purchases may be because of efficiency gains. For example, if the efficiency gain arises from the elimination of redundancies in a hospital merger, the hospital will buy fewer inputs, hire fewer technicians, or purchase fewer medical supplies. This may even reduce the wages of technicians or the price of medical supplies, even if the newly merged hospitals are not exercising any market power to suppress wages.
The key point is that monopsony needs to be treated differently than monopoly. The antitrust agencies cannot simply look at the quantity of inputs purchased in the monopsony case as the flip side of the quantity sold in the monopoly case, because the efficiency-enhancing merger can look like the monopsony merger in terms of the level of inputs purchased.
How can the agencies differentiate efficiency-enhancing mergers from monopsony mergers? The easiest way may be for the agencies to look at the output market: an entirely different market than the one with the possibility of market power. Once we look at the output market, as we would do in a monopoly case, we have clear predictions. If the merger is efficiency-enhancing, there will be an increase in the output-market quantity. If the merger increases monopsony power, the firm perceives its marginal cost as higher than before the merger and will reduce output.
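To make the contrast concrete, here is a stylized numerical sketch in Python. The demand curve, labor-supply curve, and productivity numbers are illustrative assumptions of mine, not estimates from any paper discussed here; the only point is that both scenarios reduce input purchases, while only the output market tells them apart.

```python
import numpy as np

# Stylized illustration: a firm turns labor L into output q = A * L and sells
# it against inverse demand P(q) = 2 - q. All numbers are made up.

L_grid = np.linspace(0.001, 1.0, 100_000)

def best(profit: np.ndarray) -> float:
    """Return the labor choice on the grid that maximizes profit."""
    return float(L_grid[int(np.argmax(profit))])

# 1) Pre-merger benchmark: the firm takes the wage as given at w = 1, A = 1.
base_L = best((2 - L_grid) * L_grid - 1.0 * L_grid)

# 2) Efficiency-enhancing merger: productivity rises to A = 1.5, still a wage taker.
eff_L = best((2 - 1.5 * L_grid) * (1.5 * L_grid) - 1.0 * L_grid)

# 3) Monopsony merger: A = 1, but the combined firm is now large enough to face,
#    and internalize, the upward-sloping labor supply w(L) = 1 + L.
mon_L = best((2 - L_grid) * L_grid - (1 + L_grid) * L_grid)

for name, L, A in [("baseline", base_L, 1.0), ("efficiency", eff_L, 1.5), ("monopsony", mon_L, 1.0)]:
    print(f"{name:10s} labor = {L:.2f}, output = {A * L:.2f}")

# Both mergers buy less labor than the baseline (0.44 and 0.25 vs. 0.50), but
# only the monopsony merger also shrinks output (0.25 vs. 0.67 and 0.50).
```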
In short, as we look for how to apply antitrust to monopsony-power cases, the agencies and courts cannot look to the input market to differentiate them from efficiency-enhancing mergers; they must look at the output market. It is impossible to discuss monopsony power coherently without considering the output market.
In real-world cases, mergers will not necessarily be either strictly efficiency-enhancing or strictly monopsony-generating, but a blend of the two. Any rigorous consideration of merger effects must account for both and make some tradeoff between them. The question of how guidelines should address monopsony power is inextricably tied to the consideration of merger efficiencies, particularly given the point above that identifying and evaluating monopsony power will often depend on its effects in downstream markets.
This is just one complication that arises when we move from the purest of pure theory to slightly more applied models of monopoly and monopsony power. Geoffrey Manne, Dirk Auer, Eric Fruits, Lazar Radic, and I go through more of the complications in our comments submitted to the FTC and DOJ on updating the merger guidelines.
What Assumptions Make the Difference Between Monopoly and Monopsony?
Now that we have shown that monopsony and monopoly are different, how do we square this with the initial observation that it was arbitrary whether we say Armen has monopsony power over apples or monopoly power over bananas?
There are two differences between the standard monopoly and monopsony models. First, in a vast majority of models of monopsony power, the agent with the monopsony power is buying goods only to use them in production. They have a “derived demand” for some factors of production. That demand ties their buying decision to an output market. For monopoly power, the firm sells the goods, makes some money, and that’s the end of the story.
The second difference is that the standard monopoly model looks at one output good at a time. The standard factor-demand model uses two inputs, which introduces a tradeoff between, say, capital and labor. We could force monopoly to look like monopsony by assuming the merging parties each produce two different outputs, apples and bananas. An efficiency gain could favor apple production and hurt banana consumers. While this sort of substitution among outputs is often realistic, it is not the standard economic way of modeling an output market.
On March 31, I and several other law and economics scholars filed an amicus brief in Epic Games v. Apple, which is on appeal to the U.S. Court of Appeals for the Ninth Circuit. In this post, I summarize the central arguments of the brief, which was joined by Alden Abbott, Henry Butler, Alan Meese, Aurelien Portuese, and John Yun and prepared with the assistance of Don Falk of Schaerr Jaffe LLP.
First, some background for readers who haven’t followed the case.
Epic, maker of the popular Fortnite video game, brought antitrust challenges against two policies Apple enforces against developers of third-party apps that run on iOS, the mobile operating system for Apple’s popular iPhones and iPads. One policy requires that all iOS apps be distributed through Apple’s own App Store. The other requires that any purchases of digital goods made while using an iOS app utilize Apple’s In App Purchase system (IAP). Apple collects a share of the revenue from sales made through its App Store and using IAP, so these two policies provide a way for it to monetize its innovative app platform.
Epic maintains that Apple’s app policies violate the federal antitrust laws. Following a trial, the district court disagreed, though it condemned another of Apple’s policies under California state law. Epic has appealed the antitrust rulings against it.
My fellow amici and I submitted our brief in support of Apple to draw the Ninth Circuit’s attention to a distinction that is crucial to ensuring that antitrust promotes long-term consumer welfare: the distinction between the mere extraction of surplus through the exercise of market power and the enhancement of market power via the weakening of competitive constraints.
The central claim of our brief is that Epic’s antitrust challenges to Apple’s app store policies should fail because Epic has not shown that the policies enhance Apple’s market power in any market. Moreover, condemnation of the practices would likely induce Apple to use its legitimately obtained market power to extract surplus in a different way that would leave consumers worse off than they are under the status quo.
Mere Surplus Extraction vs. Market Power Extension
As the Supreme Court has observed, "Congress designed the Sherman Act as a 'consumer welfare prescription.'" The Act endeavors to protect consumers from harm resulting from "market power," which is the ability of a firm lacking competitive constraints to enhance its profits by reducing its output—either quantitatively or qualitatively—from the level that would persist if the firm faced vigorous competition. A monopolist, for example, might cut back on the quantity it produces (to drive up market price) or it might skimp on quality (to enhance its per-unit profit margin). A firm facing vigorous competition, by contrast, couldn't raise market price simply by reducing its own production, and it would lose significant sales to rivals if it raised its own price or unilaterally cut back on product quality. Market power thus stems from deficient competition.
As Dennis Carlton and Ken Heyer have observed, two different types of market power-related business behavior may injure consumers and are thus candidates for antitrust prohibition. One is an exercise of market power: an action whereby a firm lacking competitive constraints increases its returns by constricting its output so as to raise price or otherwise earn higher profit margins. When a firm engages in this sort of conduct, it extracts a greater proportion of the wealth, or “surplus,” generated by its transactions with its customers.
Every voluntary transaction between a buyer and seller creates surplus, which is the difference between the subjective value the consumer attaches to an item produced and the cost of producing and distributing it. Price and other contract terms determine how that surplus is allocated between the buyer and the seller. When a firm lacking competitive constraints exercises its market power by, say, raising price, it extracts for itself a greater proportion of the surplus generated by its sale.
The other sort of market power-related business behavior involves an effort by a firm to enhance its market power by weakening competitive constraints. For example, when a firm engages in unreasonably exclusionary conduct that drives its rivals from the market or increases their costs so as to render them less formidable competitors, its market power grows.
U.S. antitrust law treats these two types of market power-related conduct differently. It forbids behavior that enhances market power and injures consumers, but it permits actions that merely exercise legitimately obtained market power without somehow enhancing it. For example, while charging a monopoly price creates immediate consumer harm by extracting for the monopolist a greater share of the surplus created by the transaction, the Supreme Court observed in Trinko that “[t]he mere possession of monopoly power, and the concomitant charging of monopoly prices, is not . . . unlawful.” (See also linkLine: “Simply possessing monopoly power and charging monopoly prices does not violate [Sherman Act] § 2….”)
Courts have similarly refused to condemn mere exercises of market power in cases involving surplus-extractive arrangements more complicated than simple monopoly pricing. For example, in its Independent Ink decision, the U.S. Supreme Court expressly declined to adopt a rule that would have effectively banned “metering” tie-ins.
In a metering tie-in, a seller with market power on some unique product that is used with a competitively supplied complement that is consumed in varying amounts—say, a highly unique printer that uses standard ink—reduces the price of its unique product (the printer), requires buyers to also purchase from it their requirements of the complement (the ink), and then charges a supracompetitive price for the latter product. This allows the seller to charge higher effective prices to high-volume users of its unique tying product (buyers who use lots of ink) and lower prices to lower-volume users.
Assuming buyers’ use of the unique product correlates with the value they ascribe to it, a metering tie-in allows the seller to price discriminate, charging higher prices to buyers who value its unique product more. This allows the seller to extract more of the surplus generated by sales of its product, but it in no way extends the seller’s market power.
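A stylized numerical sketch may help show how metering works as price discrimination. The printer price, ink price, and usage levels below are made-up numbers, not figures from Independent Ink or the brief.

```python
# Stylized metering tie-in: the seller prices the unique product (printer) low
# and the metered complement (ink) above cost, so high-volume (high-value)
# users pay more in total. All numbers are illustrative assumptions.

ink_cost = 1.00          # competitive per-cartridge cost
printer_price = 50.00    # set low to attract both buyer types
ink_price = 3.00         # supracompetitive metered price

buyers = {"light user": 10, "heavy user": 100}   # cartridges used per year

for name, cartridges in buyers.items():
    total = printer_price + ink_price * cartridges
    premium = total - (printer_price + ink_cost * cartridges)
    print(f"{name}: pays {total:.0f} in total, of which {premium:.0f} is metered surplus extraction")

# The heavy user pays far more in total for the same printer, which is the
# price discrimination described above. Nothing in this arithmetic changes the
# seller's market power; it only changes how the surplus is split.
```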
In refusing to adopt a rule that would have condemned most metering tie-ins, the Independent Ink Court observed that “it is generally recognized that [price discrimination] . . . occurs in fully competitive markets” and that tying arrangements involving requirements ties may be “fully consistent with a free, competitive market.” The Court thus reasoned that mere price discrimination and surplus extraction, even when accomplished through some sort of contractual arrangement like a tie-in, are not by themselves anticompetitive harms warranting antitrust’s condemnation.
The Ninth Circuit has similarly recognized that conduct that exercises market power to extract surplus but does not somehow enhance that power does not create antitrust liability. In Qualcomm, the court refused to condemn the chipmaker’s “no license, no chips” policy, which enabled it to enhance its profits by earning royalties on original equipment manufacturers’ sales of their high-priced products.
In reversing the district court’s judgment in favor of the FTC, the Ninth Circuit conceded that Qualcomm’s policies were novel and that they allowed it to enhance its profits by extracting greater surplus. The court refused to condemn the policies, however, because they did not injure competition by weakening competitive constraints:
This is not to say that Qualcomm’s “no license, no chips” policy is not “unique in the industry” (it is), or that the policy is not designed to maximize Qualcomm’s profits (Qualcomm has admitted as much). But profit-seeking behavior alone is insufficient to establish antitrust liability. As the Supreme Court stated in Trinko, the opportunity to charge monopoly prices “is an important element of the free-market system” and “is what attracts ‘business acumen’ in the first place; it induces risk taking that produces innovation and economic growth.”
The Qualcomm court’s reference to Trinko highlights one reason courts should not condemn exercises of market power that merely extract surplus without enhancing market power: allowing such surplus extraction furthers dynamic efficiency—welfare gain that accrues over time from the development of new and improved products and services.
Dynamic efficiency results from innovation, which entails costs and risks. Firms are more willing to incur those costs and risks if their potential payoff is higher, and an innovative firm’s ability to earn supracompetitive profits off its “better mousetrap” enhances its payoff.
Allowing innovators to extract such profits also helps address the fact most of the benefits of product innovation inure to people other than the innovator. Private actors often engage in suboptimal levels of behaviors that produce such benefit spillovers, or “positive externalities,” because they bear all the costs of those behaviors but capture just a fraction of the benefit produced. By enhancing the benefits innovators capture from their innovative efforts, allowing non-power-enhancing surplus extraction helps generate a closer-to-optimal level of innovative activity.
Not only do supracompetitive profits extracted through the exercise of legitimately obtained market power motivate innovation, they also enable it by helping to fund innovative efforts. Whereas businesses that are forced by competition to charge prices near their incremental cost must secure external funding for significant research and development (R&D) efforts, firms collecting supracompetitive returns can finance R&D internally. Indeed, of the top fifteen global spenders on R&D in 2018, eleven were either technology firms accused of possessing monopoly power (#1 Amazon, #2 Alphabet/Google, #5 Intel, #6 Microsoft, #7 Apple, and #14 Facebook) or pharmaceutical companies whose patent protections insulate their products from competition and enable supracompetitive pricing (#8 Roche, #9 Johnson & Johnson, #10 Merck, #12 Novartis, and #15 Pfizer).
In addition to fostering dynamic efficiency by motivating and enabling innovative efforts, a policy acquitting non-power-enhancing exercises of market power allows courts to avoid an intractable question: which instances of mere surplus extraction should be precluded?
Precluding all instances of surplus extraction by firms with market power would conflict with precedents like Trinko and linkLine (which say that legitimate monopolists may legally charge monopoly prices) and would be impracticable given the ubiquity of above-cost pricing in niche and brand-differentiated markets.
A rule precluding surplus extraction when accomplished by a practice more complicated than simple monopoly pricing—say, some practice that allows price discrimination against buyers who highly value a product—would be both arbitrary and backward. The rule would be arbitrary because allowing supracompetitive profits from legitimately obtained market power motivates and enables innovation regardless of the means used to extract surplus. The rule would be backward because, while simple monopoly pricing always reduces overall market output (as output reduction is the very means by which the producer causes price to rise), more complicated methods of extracting surplus, such as metering tie-ins, often enhance market output and overall social welfare.
A third possibility would be to preclude exercising market power to extract more surplus than is necessary to motivate and enable innovation. That position, however, would require courts to determine how much surplus extraction is required to induce innovative efforts. Courts are poorly positioned to perform such a task, and their inevitable mistakes could significantly chill entrepreneurial activity.
Consider, for example, a firm contemplating a $5 million investment that might return up to $50 million. Suppose the managers of the firm weighed expected costs and benefits and decided the risky gamble was just worth taking. If the gamble paid off but a court stepped in and capped the firm’s returns at $20 million—a seemingly generous quadrupling of the firm’s investment—future firms in the same position would not make similar investments. After all, the firm here thought this gamble was just barely worth taking, given the high risk of failure, when available returns were $50 million.
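A quick expected-value sketch makes the point. The $5 million cost and the $50 million and $20 million payoffs come from the example above; the success probability is my own assumption, chosen so the uncapped gamble is just barely worth taking.

```python
# Expected-value sketch of the example above. The success probability is an
# assumption chosen so the uncapped gamble is just barely worth taking.

cost = 5_000_000
payoff_uncapped = 50_000_000
payoff_capped = 20_000_000
p_success = 0.11          # assumed; break-even for the uncapped gamble is 0.10

ev_uncapped = p_success * payoff_uncapped - cost
ev_capped = p_success * payoff_capped - cost

print(f"Expected profit, uncapped returns:        {ev_uncapped:,.0f}")   # +500,000
print(f"Expected profit, returns capped at $20M:  {ev_capped:,.0f}")     # -2,800,000
# A cap that looks "generous" ex post can turn an investment that was barely
# worthwhile ex ante into one no firm would make.
```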
In the end, then, the best policy is to draw the line as both the U.S. Supreme Court and the Ninth Circuit have done: Whereas enhancements of market power are forbidden, merely exercising legitimately obtained market power to extract surplus is permitted.
Apple’s Policies Do Not Enhance Its Market Power
Under the legal approach described above, the two Apple policies Epic has challenged do not give rise to antitrust liability. While the policies may boost Apple’s profits by facilitating its extraction of surplus from app transactions on its mobile devices, they do not enhance Apple’s market power in any conceivable market.
As the creator and custodian of the iOS operating system, Apple has the ability to control which applications will run on its iPhones and iPads. Developers cannot produce operable iOS apps unless Apple grants them access to the Application Programming Interfaces (APIs) required to enable the functionality of the operating system and hardware. In addition, Apple can require developers to obtain digital certificates that will enable their iOS apps to operate. As the district court observed, “no certificate means the code will not run.”
Because Apple controls which apps will work on the operating system it created and maintains, Apple could collect the same proportion of surplus it currently extracts from iOS app sales and in-app purchases on iOS apps even without the policies Epic is challenging. It could simply withhold access to the APIs or digital certificates needed to run iOS apps unless developers promised to pay it 30% of their revenues from app sales and in-app purchases of digital goods.
This means that the challenged policies do not give Apple any power it doesn’t already possess in the putative markets Epic identified: the markets for “iOS app distribution” and “iOS in-app payment processing.”
The district court rejected those market definitions on the ground that Epic had not established cognizable aftermarkets for iOS-specific services. It defined the relevant market instead as “mobile gaming transactions.” But no matter. The challenged policies would not enhance Apple’s market power in that broader market either.
In “mobile gaming transactions” involving non-iOS (e.g., Android) mobile apps, Apple’s policies give it no power at all. Apple doesn’t distribute non-iOS apps or process in-app payments on such apps. Moreover, even if Apple were to begin doing so—say, by distributing Android apps in its App Store or allowing producers of Android apps to include IAP as their in-app payment system—it is implausible that Apple’s policies would allow it to gain new market power. There are giant, formidable competitors in non-iOS app distribution (e.g., Google’s Play Store) and in payment processing for non-iOS in-app purchases (e.g., Google Play Billing). It is inconceivable that Apple’s policies would allow it to usurp so much scale from those rivals that Apple could gain market power over non-iOS mobile gaming transactions.
That leaves only the iOS segment of the mobile gaming transactions market. And, as we have just seen, Apple’s policies give it no new power to extract surplus from those transactions; because it controls access to iOS, it could do so using other means.
Nor do the challenged policies enable Apple to maintain its market power in any conceivable market. This is not a situation like Microsoft, where a firm in a market adjacent to the monopolist’s could somehow pose a challenge to the monopolist, which then nips the potential competition in the bud by reducing the potential rival’s scale. There is no evidence in the record to support the (implausible) notion that rival iOS app stores or in-app payment processing systems could ever evolve in a manner that would pose a challenge to Apple’s position in mobile devices, mobile operating systems, or any other market in which it conceivably has market power.
Epic might retort that but for the challenged policies, rivals could challenge Apple’s market share in iOS app distribution and in-app purchase processing. Rivals could not, however, challenge Apple’s market power in such markets, as that power stems from its control of iOS. The challenged policies therefore do not enable Apple to shore up any existing market power.
Alternative Means of Extracting Surplus Would Likely Reduce Consumer Welfare
Because the policies Epic has challenged are not the source of Apple’s ability to extract surplus from iOS app transactions, judicial condemnation of the policies would likely induce Apple to extract surplus using different means. Changing how it earns profits off iOS app usage, however, would likely leave consumers worse off than they are under the status quo.
Apple could simply charge third-party app developers a flat fee for access to the APIs needed to produce operable iOS apps but then allow them to distribute their apps and process in-app payments however they choose. Such an approach would allow Apple to monetize its innovative app platform while permitting competition among providers of iOS app distribution and in-app payment processing services. Relative to the status quo, though, such a model would likely reduce consumer welfare by:
Reducing the number of free and niche apps, as app developers could no longer avoid a fee to Apple by adopting a free (likely ad-supported) business model, and producers of niche apps may not generate enough revenue to justify Apple’s flat fee;
Raising business risks for app developers, who, if Apple cannot earn incremental revenue off sales and use of their apps, may face a greater likelihood that the functionality of those apps will be incorporated into future versions of iOS;
Reducing Apple’s incentive to improve iOS and its mobile devices, as eliminating Apple’s incremental revenue from app usage reduces its motivation to make costly enhancements that keep users on their iPhones and iPads;
Raising the price of iPhones and iPads and generating deadweight loss, as Apple could no longer charge higher effective prices to people who use apps more heavily and would thus likely hike up its device prices, driving marginal consumers from the market; and
Reducing user privacy and security, as jettisoning a closed app distribution model (App Store only) would impair Apple’s ability to screen iOS apps for features and bugs that create security and privacy risks.
An alternative approach—one that would avoid many of the downsides just stated by allowing Apple to continue earning incremental revenue off iOS app usage—would be for Apple to charge app developers a revenue-based fee for access to the APIs and other amenities needed to produce operable iOS apps. That approach, however, would create other costs that would likely leave consumers worse off than they are under the status quo.
The policies Epic has challenged allow Apple to collect a share of revenues from iOS app transactions immediately at the point of sale. Replacing those policies with a revenue-based API license system would require Apple to incur additional costs of collecting revenues and ensuring that app developers are accurately reporting them. In order to extract the same surplus it currently collects—and to which it is entitled given its legitimately obtained market power—Apple would have to raise its revenue-sharing percentage above its current commission rate to cover its added collection and auditing costs.
The fact that Apple has elected not to adopt this alternative means of collecting the revenues to which it is entitled suggests that the added costs of moving to the alternative approach (extra collection and auditing costs) would exceed any additional consumer benefit such a move would produce. Because Apple can collect the same revenue percentage from app transactions two different ways, it has an incentive to select the approach that maximizes iOS app transaction revenues. That is the approach that creates the greatest value for consumers and also for Apple.
If Apple believed that the benefits to app users of competition in app distribution and in-app payment processing would exceed the extra costs of collection and auditing, it would have every incentive to switch to a revenue-based licensing regime and increase its revenue share enough to cover its added collection and auditing costs. As such an approach would enhance the net value consumers receive when buying apps and making in-app purchases, it would raise overall app revenues, boosting Apple’s bottom line. The fact that Apple has not gone in this direction, then, suggests that it does not believe consumers would receive greater benefit under the alternative system. Apple might be wrong, of course. But it has a strong motivation to make the consumer welfare-enhancing decision here, as doing so maximizes its own profits.
The policies Epic has challenged do not enhance or shore up Apple’s market power, a salutary prerequisite to antitrust liability. Furthermore, condemning the policies would likely lead Apple to monetize its innovative app platform in a manner that would reduce consumer welfare relative to the status quo. The Ninth Circuit should therefore affirm the district court’s rejection of Epic’s antitrust claims.
In Fleites v. MindGeek—currently before the U.S. District Court for the Central District of California, Southern Division—plaintiffs seek to hold MindGeek subsidiary PornHub liable for alleged instances of human trafficking under the Racketeer Influenced and Corrupt Organizations Act (RICO) and the Trafficking Victims Protection Reauthorization Act (TVPRA). Writing for the International Center for Law & Economics (ICLE), we have filed a motion for leave to submit an amicus brief regarding whether it is valid to treat co-defendant Visa Inc. as a proper party under principles of collateral liability.
The proposed brief draws on our previous work on the law & economics of collateral liability, and argues that holding Visa liable as a participant under RICO or TVPRA would amount to stretching collateral liability far beyond what is reasonable. Such a move, we posit, would “generate a massive amount of social cost that would outweigh the potential deterrent or compensatory gains sought.”
Collateral liability can make sense when intermediaries are in a position to effectively monitor and control potential harms. That is, it can be appropriate to apply collateral liability to parties that are what is often referred to as “least-cost avoiders.” As we write:
In some circumstances it is indeed proper to hold third parties liable even though they are not primary actors directly implicated in wrongdoing. Most significantly, such liability may be appropriate when a collateral actor stands in a relationship to the wrongdoing (or wrongdoers or victims) such that the threat of liability can incentivize it to take action (or refrain from taking action) to prevent or mitigate the wrongdoing. That is to say, collateral liability may be appropriate when the third party has a significant enough degree of control over the primary actors such that its actions can cause them to reduce the risk of harm at reasonable cost. Importantly, however, such liability is appropriate only when direct deterrence is insufficient and/or the third party can prevent harm at lower cost or more effectively than direct enforcement… From an economic perspective, liability should be imposed upon the party or parties best positioned to deter the harms in question, such that the costs of enforcement do not exceed the social gains realized.
The common law of negligence and the doctrine of contributory infringement under copyright law both help illustrate this principle. Under the common law, collateral actors have a duty in only limited circumstances, when the harms are “reasonably foreseeable” and the actor has special access to particularized information about the victims or the perpetrators, as well as a special ability to control harmful conditions. Under copyright law, collateral liability is similarly limited to circumstances where collateral actors are best positioned to prevent the harm, and the benefits of holding such actors liable exceed the harms.
Neither of these conditions holds in Fleites v. MindGeek: Visa is not the type of collateral actor that has access to specialized information or the ability to control actual bad actors. Visa, as a card-payment network, simply processes payments. The only tool at Visa’s disposal is a giant sledgehammer: it can foreclose all transactions to particular sites that run over its network. There is no dispute that the vast majority of content hosted on sites like MindGeek’s is lawful, however awful one may believe pornography to be. Holding card networks liable here would create incentives to avoid processing payments for such sites altogether in order to avoid legal consequences.
The potential costs of the theory of liability asserted here stretch far beyond Visa or this particular case. The plaintiffs’ theory would hold liable anyone who provides services that “allow[] the alleged principal actors to continue to do business.” This would mean, for example, that Federal Express would be liable for continuing to deliver packages to MindGeek’s address, or that a waste-management company could be liable for providing custodial services to the building where MindGeek has an office.
According to the plaintiffs, even the mere existence of a newspaper article alleging that a company is doing something illegal is sufficient to find that professionals who have provided services to that company “participate” in a conspiracy. This would have ripple effects for professionals in many other industries—from accountants to bankers to insurers—who would all see significantly increased risk of liability.
This post is the first in a three-part series. The second installment can be found here and the third can be found here.
The interplay among political philosophy, competition, and competition law remains, with some notable exceptions, understudied in the literature. Indeed, while examinations of the intersection between economics and competition law have taught us much, relatively little has been said about the value frameworks within which different visions of competition and competition law operate.
As Ronald Coase reminds us, questions of economics and political philosophy are interrelated, so that “problems of welfare economics must ultimately dissolve into a study of aesthetics and morals.” When we talk about economics, we talk about political philosophy, and vice versa. Every political philosophy reproduces economic prescriptions that reflect its core tenets. And every economic arrangement, in turn, evokes the normative values that undergird it. This is as true for socialism and fascism as it is for liberalism and neoliberalism.
Many economists have understood this. Milton Friedman, for instance, who spent most of his career studying social welfare, not ethics, admitted in Free to Choose that he was ultimately concerned with the preservation of a value: the liberty of the individual. Similarly, the avowed purpose of Friedrich Hayek’s The Constitution of Liberty was to maximize the state of human freedom, with coercion—i.e., the opposite of freedom—described as evil. James Buchanan fought to preserve political philosophy within the economic discipline, particularly worrying that:
Political economy was becoming unmoored from the types of philosophic and institutional analysis which were previously central to the field. In its flight from reality, Buchanan feared economics was in danger of abandoning social-philosophic issues for exclusively technical questions.
— John Kroencke, “Three Essays in the History of Economics”
Against this background, I propose to look at competition and competition law from a perspective that explicitly recognizes this connection. The goal is not to substitute, but rather to complement, our comparatively broad understanding of competition economics with a better grasp of the deeper normative implications of regulating competition in a certain way. If we agree with Robert Bork that antitrust is a subcategory of ideology that reflects and reacts upon deeper tensions in our society, the exercise might also be relevant beyond the relatively narrow confines of antitrust scholarship (which, on the other hand, seem to be getting wider and wider).
The Classical Liberal Revolution and the Unshackling of Competition
Mercantilism
When Adam Smith’s The Wealth of Nations was published in 1776, heavy economic regulation of the market through laws, by-laws, tariffs, and special privileges was the norm. Restrictions on imports were seen as protecting national wealth by preventing money from flowing out of the country—a policy premised on the conflation of money with wealth. A morass of legally backed and enforceable monopoly rights, granted either by royal decree or government-sanctioned by-laws, marred competition. Guilds reigned over tradesmen by restricting entry into the professions and segregating markets along narrow geographic lines. At every turn, economic activity was shot through with rules, restrictions, and regulations.
The Revolution in Political Economy
Classical liberals like Smith departed from the then-dominant mercantilist paradigm by arguing that nations prospered through trade and competition, not protectionism and monopoly privileges. Smith demonstrated that both the seller and the buyer benefited from trade, and theorized the market as an automatic mechanism that allocated resources efficiently through the spontaneous, self-interested interaction of individuals.
Undergirding this position was the notion of the natural order, which Smith carried over from his own Theory of Moral Sentiments and which elaborated on arguments previously espoused by the French physiocrats (from “physiocracy,” a neologism meaning “the rule of nature”), such as Anne Robert Jacques Turgot, François Quesnay, and Jacques Claude Marie Vincent de Gournay. The basic premise was that there existed a harmonious order of things, established and maintained through a subconscious balancing of the egoism of the individual with the greatest welfare for all.
The implications of this modest insight, which clashed directly with established mercantilist orthodoxy, were tremendous. If human freedom maximized social welfare, the justification for detailed government intervention in the economy was untenable. The principles of laissez-faire (a term probably coined by Gournay, who had been Turgot’s mentor) instead prescribed that the government should adopt a “night watchman” role, tending to modest tasks such as internal and external defense, the mediation of disputes, and certain public works that were not deemed profitable for the individual.
Freeing Competition from the Mercantilist Yoke
Smith’s general attitude also carried over to competition. Following the principles described above, classical liberals believed that price and product adjustments following market interactions among tradesmen (i.e., competition) would automatically maximize social utility. As Smith argued:
In general, if any branch of trade, or any division of labor, be advantageous to the public, the freer and more general the competition, it will always be the more so.
This did not mean that competition occurred in a legal void. Rather, Smith’s point was that there was no need to construct a comprehensive system of competition regulation, as markets would oversee themselves so long as a basic legal and institutional framework was in place and government refrained from actively abetting monopolies. Under this view, the only necessary “competition law” would be those individual laws that made competition possible, such as private property rights, contracts, unfair competition laws, and the laws against government and guild restrictions.
Liberal Political Philosophy: Utilitarian and Deontological Perspectives on Liberty and Individuality
Of course, this sort of volte face in political economy needed to be buttressed by a robust philosophical conception of the individual and the social order. Such ontological and moral theories were articulated in, among others, the Theory of Moral Sentiments and John Stuart Mill’s On Liberty. At the heart of the liberal position was the idea that undue restrictions on human freedom and individuality were not only intrinsically despotic, but also socially wasteful, as they precluded men from enjoying the fruits of the exercise of such freedoms. For instance, infringing the freedom to trade and to compete would rob the public of cheaper goods, while restrictions on freedom of expression would arrest the development of thoughts and ideas through open debate.
It is not clear whether the material or the ethical argument for freedom came first: that is, whether classical liberalism constituted an ex post rationalization of a moral preference for individual liberty, or precisely the reverse. The question may be immaterial, as classical liberals generally believed that the deontological and the consequentialist cases for liberty—save in the most peripheral of cases (e.g., violence against others)—largely overlapped.
Conclusion
In sum, classical liberalism offered a holistic, integrated view of societies, markets, morals, and individuals that was revolutionary for the time. The notion of competition as a force to be unshackled—rather than actively constructed and chaperoned—flowed organically from that account and its underlying values and assumptions. These included such values as personal freedom and individualism, along with foundational metaphysical presuppositions, such as the existence of a harmonious natural order that seamlessly guided individual actions for the benefit of the whole.
Where such base values and presumptions are eroded, however, the notion of a largely spontaneous, self-sustaining competitive process loses much of its rational, ethical, and moral legitimacy. Competition thus ceases to be tenable on its “own two feet” and must either be actively engineered and protected, or abandoned altogether as a viable organizing principle. In this sense, the crisis of liberalism the West experienced in the late 19th and early 20th centuries—which attacked the very foundations of classical liberal doctrine—can also be read as a crisis of competition.
In my next post, I’ll discuss the collectivist backlash against liberalism.
There has been a rapid proliferation of proposals in recent years to closely regulate competition among large digital platforms. The European Union’s Digital Markets Act (DMA, which will become effective in 2023) imposes a variety of data-use, interoperability, and non-self-preferencing obligations on digital “gatekeeper” firms. A host of other regulatory schemes are being considered in Australia, France, Germany, and Japan, among other countries (for example, see here). The United Kingdom has established a Digital Markets Unit “to operationalise the future pro-competition regime for digital markets.” Recently introduced U.S. Senate and House Bills—although touted as “antitrust reform” legislation—effectively amount to “regulation in disguise” of disfavored business activities by very large companies, including the major digital platforms (see here and here).
Sorely missing from these regulatory proposals is any sense of the fallibility of regulation. Indeed, proponents of new regulatory proposals seem to implicitly assume that government regulation of platforms will enhance welfare, ignoring real-life regulatory costs and regulatory failures (see here, for example). Without evidence, new regulatory initiatives are put forth as superior to long-established, consumer-based antitrust law enforcement.
The hope that new regulatory tools will somehow “solve” digital market competitive “problems” stems from the untested assumption that established consumer welfare-based antitrust enforcement is “not up to the task.” Untested assumptions, however, are an unsound guide to public policy decisions. Rather, in order to optimize welfare, all proposed government interventions in the economy, including regulation and antitrust, should be subject to decision-theoretic analysis that is designed to minimize the sum of error and decision costs (see here). What might such an analysis reveal?
Wonder no more. In a just-released Mercatus Center Working Paper, Professor Thom Lambert has conducted a decision-theoretic analysis that evaluates the relative merits of U.S. consumer welfare-based antitrust, ex ante regulation, and ongoing agency oversight in addressing the market power of large digital platforms. While explaining that antitrust and its alternatives have their respective costs and benefits, Lambert concludes that antitrust is the welfare-superior approach to dealing with platform competition issues. According to Lambert:
This paper provides a comparative institutional analysis of the leading approaches to addressing the market power of large digital platforms: (1) the traditional US antitrust approach; (2) imposition of ex ante conduct rules such as those in the EU’s Digital Markets Act and several bills recently advanced by the Judiciary Committee of the US House of Representatives; and (3) ongoing agency oversight, exemplified by the UK’s newly established “Digital Markets Unit.” After identifying the advantages and disadvantages of each approach, this paper examines how they might play out in the context of digital platforms. It first examines whether antitrust is too slow and indeterminate to tackle market power concerns arising from digital platforms. It next considers possible error costs resulting from the most prominent proposed conduct rules. It then shows how three features of the agency oversight model—its broad focus, political susceptibility, and perpetual control—render it particularly vulnerable to rent-seeking efforts and agency capture. The paper concludes that antitrust’s downsides (relative indeterminacy and slowness) are likely to be less significant than those of ex ante conduct rules (large error costs resulting from high informational requirements) and ongoing agency oversight (rent-seeking and agency capture).
Lambert’s analysis should be carefully consulted by American legislators and potential rule-makers (including at the Federal Trade Commission) before they institute digital platform regulation. One hopes that enlightened foreign competition officials will also take note of Professor Lambert’s well-reasoned study.