
One of the biggest names in economics, Daron Acemoglu, recently joined the mess that is Twitter. He wasted no time in throwing out big ideas for discussion and immediately getting tons of, let us say, spirited replies. 

One of Acemoglu’s threads involved a discussion of F.A. Hayek’s famous essay “The Use of Knowledge in Society,” wherein Hayek questions central planners’ ability to acquire and utilize the knowledge dispersed throughout society. Echoing many other commentators, Acemoglu asks: can supercomputers and artificial intelligence get around Hayek’s concerns?

As Acemoglu put it:

Coming back to Hayek’s argument, there was another aspect of it that has always bothered me. What if computational power of central planners improved tremendously? Would Hayek then be happy with central planning?

While there are a few different layers to Hayek’s argument, at least one key aspect does not rest at all on computational power. Hayek argues that markets do not require users to have much information in order to make their decisions. 

To use Hayek’s example, when the price of tin increases: “All that the users of tin need to know is that some of the tin they used to consume is now more profitably employed elsewhere.” Knowing whether demand or supply shifted to cause the price increase would be redundant information for the tin user; the price provides all the information about market conditions that the user needs. 

To Hayek, this informational role of prices is what makes markets unique (compared to central planning):

The most significant fact about this [market] system is the economy of knowledge with which it operates, or how little the individual participants need to know in order to take the right action.

Good computers, bad computers—it doesn’t matter. Markets just require less information from their individual participants. This was made precise in the 1970s and 1980s in a series of papers on the “informational efficiency” of competitive markets.

This post will give an explanation of what the formal results say. From there, we can go back to debating the relevance for Acemoglu’s argument and the future of central planning with AI.

From Hayek to Hurwicz

First, let’s run through an oversimplified history of economic thought. Hayek developed his argument about information and markets during the socialist-calculation debate, which pitted Hayek and Ludwig von Mises against Oskar Lange and Abba Lerner. Lange and Lerner argued that a planned socialist economy could replicate a market economy. Mises and Hayek argued that it could not, because the socialist planner would not have the relevant information.

In response to the socialist-calculation debate, Leonid Hurwicz—who studied with Hayek at the London School of Economics, overlapped with Mises in Geneva, and would ultimately be awarded the Nobel Memorial Prize in 2007—developed the formal language in the 1960s and 1970s that became what we now call “mechanism design.”

Specifically, Hurwicz developed an abstract way to measure how much information a system needed. What does it mean for a system to require little information? What is the “efficient” (i.e., minimal) amount of information? Two later papers (Mount and Reiter (1974) and Jordan (1982)) used Hurwicz’s framework to prove that competitive markets are informationally efficient.

Understanding the Meaning of Informational Efficiency

How much information do people need to achieve a competitive outcome? This is where Hurwicz’s theory comes in. He gave us a formal way to discuss more and less information: the size of the message space. 

To understand the message space’s size, consider an economy with six people: three buyers and three sellers. Buyers of type B3 are willing to pay $3, type B2 is willing to pay $2, and type B1 is willing to pay $1. Sellers of type S0 are willing to sell for $0, S1 for $1, and S2 for $2. Each buyer knows their valuation for the good, and each seller knows their cost.

Here’s the weird exercise. Along comes an oracle who knows everything. The oracle decides to figure out a competitive price that will clear the market, so he draws out the supply curve and the demand curve and picks an equilibrium point where they cross.

So the oracle knows a price of $1.50 and a quantity of 2 is an equilibrium.
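
To make the toy example concrete, here is a minimal sketch (in Python, using the valuations assumed above) of checking that $1.50 clears this little market:

```python
# The six types from the example above (values for buyers, costs for sellers).
buyer_values = {"B3": 3.0, "B2": 2.0, "B1": 1.0}
seller_costs = {"S0": 0.0, "S1": 1.0, "S2": 2.0}

def quantities_at(price):
    """Units demanded and supplied at a given price (strict preferences assumed)."""
    demand = sum(1 for v in buyer_values.values() if v > price)
    supply = sum(1 for c in seller_costs.values() if c < price)
    return demand, supply

# Any price strictly between $1 and $2 clears this market at a quantity of 2;
# the oracle in the example picks $1.50.
print(quantities_at(1.50))  # (2, 2)
```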

Now, we, the ignorant outsiders, come along and want to verify that the oracle is telling the truth and that this really is an equilibrium. We shouldn’t just take the oracle’s word for it.

How can the oracle convince us that this is an equilibrium? We don’t know anyone’s valuation.

The oracle puts forward a game to the six players. The oracle says:

  • The price is $1.50, meaning that if you buy 1, you pay $1.50; if you sell 1, you receive $1.50.
  • If you say you’re B3 (which means you value the good at $3), you must buy 1.
  • If you say you’re B2, you must buy 1.
  • If you say you’re B1, you must buy 0.
  • If you say you’re S0, you must sell 1.
  • If you say you’re S1, you must sell 1.
  • If you say you’re S2, you must sell 0.

The oracle then asks everyone: do you accept the terms of this mechanism? Everyone says yes, because only the buyers who value it more than $1.50 buy and only the sellers with a cost less than $1.50 sell. By everyone agreeing, we (the ignorant outsiders) can verify that the oracle did, in fact, know people’s valuations.

Now, let’s count how much information the oracle needed to communicate. He needed to send a message that included the price and the trades for each type. Technically, he didn’t need to say that S2 sells zero, because that is implied by the fact that the quantity bought must equal the quantity sold. In total, he needed to send six messages.
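
Continuing the sketch above, here is what the oracle’s message and our verification step might look like (again, a toy illustration with the assumed valuations, not anything from the formal papers):

```python
# Same assumed types as before.
buyer_values = {"B3": 3.0, "B2": 2.0, "B1": 1.0}
seller_costs = {"S0": 0.0, "S1": 1.0, "S2": 2.0}

# The oracle's message: one price plus an assigned trade for five of the six types
# (+1 = buy one unit, -1 = sell one unit; S2's zero trade is left implicit).
price = 1.50
assigned_trades = {"B3": +1, "B2": +1, "B1": 0, "S0": -1, "S1": -1}

def accepts(agent, trade):
    """Does this type weakly prefer its assigned trade to the alternative at this price?"""
    if agent in buyer_values:
        gain_from_buying = buyer_values[agent] - price
        return gain_from_buying >= 0 if trade == +1 else gain_from_buying <= 0
    gain_from_selling = price - seller_costs[agent]
    return gain_from_selling >= 0 if trade == -1 else gain_from_selling <= 0

all_trades = {**assigned_trades, "S2": 0}  # add S2's implied zero trade
print(all(accepts(agent, trade) for agent, trade in all_trades.items()))  # True: everyone says yes
print(1 + len(assigned_trades))  # 6 messages: the price plus five explicit trades
```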

The formal exercise amounts to counting each message that needs to be sent. With a formally specified way of measuring how much information is required in competitive markets, we can now ask whether this is a lot. 

If you don’t care about efficiency, you can always economize on information: say nothing, have no one trade, and use a message space of size 0. Just do nothing.

But in the context of the socialist-calculation debate, the argument was over how much information was needed to achieve “good” outcomes. Lange and Lerner argued that market socialism could be efficient, not that it would result in zero trade, so efficiency is the welfare benchmark we are aiming for.

If you restrict your attention to efficient outcomes, Mount and Reiter (1974) showed you cannot use less information than competitive markets do. In a later paper, Jordan (1982) showed that no other efficient mechanism can even match the competitive mechanism’s economy of information: it is the unique mechanism that achieves efficiency with a message space of that minimal dimension.

Acemoglu reads Hayek as saying “central planning wouldn’t work because it would be impossible to collect and compute the right allocation of resources.” But the Jordan and Mount & Reiter papers don’t claim that computation is impossible for central planners. Take whatever computational abilities exist, from the first computer to the newest AI—competitive markets always require the least information possible. Supercomputers or AI do not, and cannot, change that relative comparison. 

Beyond Computational Issues

In terms of information costs, the best a central planner could hope for is to mimic exactly the market mechanism. But then, of what use is the planner? She’s just one more actor who could divert the system toward her own interest. As Acemoglu points out, “if the planner could collect all of that information, she could do lots of bad things with it.” 

The incentive problem is a separate problem, which is why Hayek tried to focus solely on information. Think about building a road. There is a concern that markets will not provide roads because people would be unwilling to pay for them without being coerced through taxes. You cannot simply ask people how much they are willing to pay for the road and charge them that price; people will lie and say they do not care about roads. No amount of computing power fixes incentives. Here, too, computational power is tangential to the question of markets versus planning.

There’s a lot buried in Hayek, and all of those ideas are important and worth considering. They are just further complications with which we should grapple. A handful of theory papers will never solve all of our questions about the nature of markets and central planning. Instead, the formal papers tell us, in a very stylized setting, what it would even mean to quantify the “amount of information.” And once we quantify it, we have an explicit way to ask: do markets use minimal information?

For several decades, we have known that the answer is yes. In recent work, Rafael Guthmann and I show that informational efficiency can extend to big platforms coordinating buyers and sellers—what we call market-makers.

The bigger problem with Acemoglu’s suggestion that computational abilities can solve Hayek’s challenge is that Hayek wasn’t merely thinking about computation and the communication of information. Instead, Hayek was concerned about our ability to even articulate our desires. In the example above, the buyers know exactly how much they are willing to pay and the sellers know exactly how much they are willing to sell for. But in the real world, people have tacit knowledge that they cannot communicate to third parties. This is especially true when we think about a dynamic world of innovation. How do you communicate a new product to a central planner?

The real issue is that market dynamics require entrepreneurs who imagine new futures with new products like the iPhone. Major innovations will never be able to be articulated and communicated to a central planner. All of these readings of Hayek and the market’s ability to communicate information—from formal informational efficiency to tacit knowledge—are independent of computational capabilities.

Under a recently proposed rule, the Federal Trade Commission (FTC) would ban the use of noncompete terms in employment agreements nationwide. Noncompetes are contracts that workers sign saying they agree to not work for the employer’s competitors for a certain period. The FTC’s rule would be a major policy change, regulating future contracts and retroactively voiding current ones. With limited exceptions, it would cover everyone in the United States.

When I scan academic economists’ public commentary on the ban over the past few weeks (which basically means people on Twitter), I see almost universal support for the FTC’s proposed ban. You see similar support if you expand to general econ commentary, like Timothy Lee at Full Stack Economics. Where you see pushback, it comes from people at think tanks (like me), or it takes the form of hushed skepticism rather than the kind of open disagreement you see on most policy issues.

The proposed rule grew out of an executive order by President Joe Biden in 2021, which I wrote about at the time. My argument was that there is a simple economic rationale for the contract: noncompetes encourage both parties to invest in the employee-employer relationship, just like marriage contracts encourage spouses to invest in each other.

Somehow, reposting my newsletter on the economic rationale for noncompetes has turned me into a “pro-noncompete guy” on Twitter.

The discussions have been disorienting. I feel like I’m taking crazy pills! If you ask me, “what new thing should policymakers do to address labor market power?” I would probably say something about noncompetes! Employers abuse them. The stories about people unable to find a new job because a noncompete binds them are devastating.

Yet, while recognizing the problems with noncompetes, I do not support the complete ban.

That puts me out of step with most vocal economics commentators. Where does this disagreement come from? How do I think about policy generally, and why am I the odd one out?

My Interpretation of the Research

One possibility is that I’m not such a lonely voice, and that the sample of vocal Twitter users is biased toward particular policy views. The University of Chicago Booth School of Business’ Initiative on Global Markets recently conducted a poll of academic economists about noncompetes, which mostly found differing opinions and levels of certainty about the effects of a ban. For example, 43% were uncertain whether a ban would generate a “substantial increase in wages in the affected industries.” However, maybe that is because the word “substantial” is unclear. That’s a problem with these surveys.

Still, more economists surveyed agreed than disagreed. I would answer “disagree” to that statement, as worded.

Why do I differ? One cynical response would be that I don’t know the recent literature, and my views are outdated. But from the research I’ve done for a paper that I’m writing on labor-market power, I’m fairly well-versed in the noncompete literature. I don’t know it better than the active researchers in the field, but I know it better than the average economist responding to the FTC’s proposal and definitely better than most lawyers. My disagreement also isn’t about me being some free-market fanatic. I’m not, and some other free-market types are skeptical of noncompetes. My priors are more complicated (critics might say “confused”) than that, as I will explain below.

After much soul-searching, I’ve concluded that the disagreement is real and results from my—possibly weird—understanding of how we should go from the science of economics to the art of policy. That’s what I want to explain today and get us to think more about.

Let’s start with the literature and the science of economics. First, we need to know “the facts.” The original papers focused a lot on collecting data and facts about noncompetes. We don’t have amazing data on the prevalence of noncompetes, but we know something, which is more than we could say a decade ago. For example, Evan Starr, J.J. Prescott, & Norman Bishara (2021) conducted a large survey in which they found that “18 percent of labor force participants are bound by noncompetes, with 38 percent having agreed to at least one in the past.”[1] We need to know these things and thank the researchers for collecting data.

With these facts, we can start running regressions. In addition to the paper above, many papers develop indices of noncompete “enforceability” by state. Then we can regress things like wages on an enforceability index. Many papers—like Starr, Prescott, & Bishara above—run cross-state regressions and find that wages are higher in states with higher noncompete enforceability. They also find more worker training where enforceability is higher. But that kind of correlation is littered with selection issues: high-income workers are more likely to sign noncompetes. That’s not causal. The authors carefully explain this, but sometimes correlations are the best we have—e.g., if we want to study the effect of noncompetes on doctors’ wages and their poaching of clients.

Some people will simply point to California (which has banned noncompetes for decades) and say, “see, noncompete bans don’t destroy an economy.” Unfortunately, many things make California unique, so while that is evidence, it’s hardly causal.

The most credible results come from recent changes in state policy. These allow us to run simple difference-in-difference types of analysis to uncover causal estimates. These results are reasonably transparent and easy to understand.
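
As a concrete illustration of the approach (not taken from any of the papers discussed), here is a minimal two-by-two difference-in-differences sketch on made-up data; every variable name and number below is hypothetical:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

# Hypothetical worker-level data: one "ban" state and one comparison state,
# each observed before and after a made-up policy change.
n = 4_000
df = pd.DataFrame({
    "ban_state": rng.integers(0, 2, n),  # 1 = state that banned noncompetes
    "post":      rng.integers(0, 2, n),  # 1 = observed after the ban
})
df["log_wage"] = (
    2.80
    + 0.05 * df["ban_state"]               # fixed level difference across states
    + 0.02 * df["post"]                    # common time trend
    + 0.03 * df["ban_state"] * df["post"]  # the treatment effect we want to recover
    + rng.normal(0, 0.10, n)               # noise
)

# The coefficient on the interaction term is the difference-in-differences estimate.
did = smf.ols("log_wage ~ ban_state * post", data=df).fit()
print(did.params["ban_state:post"])  # should land near 0.03, i.e., a ~3% wage effect
```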

Michael Lipsitz & Evan Starr (2021) (are you starting to recognize that Starr name?) study a 2008 Oregon ban on noncompetes for hourly workers. They find the ban increased hourly wages overall by 2 to 3%, which implies that those signing noncompetes may have seen wages rise as much as 14 to 21%. This 3% number is what the FTC assumes will apply to the whole economy when they estimate a $300 billion increase in wages per year under their ban. It’s a linear extrapolation.

Similarly, in 2015, Hawaii banned noncompetes for new hires within tech industries. Natarajan Balasubramanian et al. (2022) find that the ban increased new-hire wages by 4%. They also estimate that the ban increased worker mobility by 11%. Labor economists generally think of worker turnover as a good thing. Still, it is tricky here when the whole benefit of the agreement is to reduce turnover and encourage a better relationship between workers and firms.

The FTC also points to three studies that find that banning noncompetes increases innovation, according to a few different measures. I won’t say anything about these because you can infer my reaction based on what I will say below on wage studies. If anything, I’m more skeptical of innovation studies, simply because I don’t think we have a good understanding of what causes innovation generally, let alone how to measure the impact of noncompetes on innovation. You can read what the FTC cites on innovation and make up your own mind.

From Academic Research to an FTC Ban

Now that we understand some of the papers, how do we move to policy?

Let’s assume I read the evidence basically as the FTC does. I don’t, and will explain as much in a future paper, but that’s not the debate for this post. How do we think about the optimal policy response, given the evidence?

There are two main reasons I am not ready to extrapolate from the research to the proposed ban. Every economist knows them: the dreaded pests of external validity and general equilibrium effects.

Let’s consider external validity through the Oregon ban paper and the Hawaii tech ban paper. Again, these are not critiques of the papers, but of how the FTC wants to move from them to a national ban.

Notice above that I said the Oregon ban went into effect in 2008, which means it happened as the whole country was entering a major recession and financial crisis. The authors do their best to deal with differential responses to the recession, but every state in their data went through a recession. Did the recession matter for the results? It seems plausible to me.

Another important detail about the Oregon ban is that it only applied to hourly workers, while the FTC rule would apply to all workers. You can’t just confidently assume hourly workers are like salaried workers. Hourly workers who sign noncompetes are less likely to read them, less likely to consult with their family about them, and less likely to negotiate over them. If part of the problem with noncompetes is that people don’t understand them until it is too late, you will overstate the harm if you only look at hourly workers, who understand noncompetes even less than salaried workers do. Also, with a partial ban, Lipsitz & Starr recognize that spillovers matter and firms respond in different ways, such as converting workers to salaried status to keep the noncompete, an option that won’t exist under a national ban. It’s not the same experiment at a national scale. Which way will the results change? How confident are we?

The effects of the Hawaii ban are likely not the same as those of the FTC’s ban would be. First of all, Hawaii is weird. It has a small population, and tech is a small part of the state’s economy. The ban even excluded telecom from within the tech sector. We are talking about a targeted ban. What does the Hawaii experiment tell us about a ban on noncompetes for tech workers in a non-island location like Boston? What does it tell us about a national ban on all noncompetes, like the FTC is proposing? Maybe these things do not matter. To further complicate things, the policy change included a ban on nonsolicitation clauses. Maybe the nonsolicitation clause was unimportant. But I’d want more research and more policy experimentation to tease out these details.

As you dig into these papers, you find more and more of these issues. That’s not a knock on the papers but an inherent difficulty in moving from research to policy. It’s further compounded by the fact that this empirical literature is still relatively new.

What will happen when we scale these bans up to the national level? That’s a huge question for any policy change, especially one as large as a national ban. The FTC seems confident in what will happen, but moving from micro to macro is not trivial. Macroeconomists are starting to really get serious about how the micro adds up to the macro, but it takes work.

I want to know more. Which effects are amplified when scaled? Which effects drop off? What’s the full National Income and Product Accounts (NIPA) accounting? I don’t know. No one does, because we don’t have any of that sort of price-theoretic, general equilibrium research. There are lots of margins on which firms will adjust; there’s always another margin that we are not capturing. Instead, what the FTC did is a simple linear extrapolation from the state studies to a national ban. Studies find a 3% wage effect here. Multiply that by the number of workers.

When we are doing policy work, we would also like some sort of welfare analysis. It’s not just about measuring workers in isolation. We need a way to think about the costs and benefits and how to trade them off. All the diff-in-diff regressions in the world won’t get at it; we need a model.

Luckily, we have one paper that blends empirics and theory to do welfare analysis.[2] Liyan Shi has a paper forthcoming in Econometrica—which is no joke to publish in—titled “Optimal Regulation of Noncompete Contracts.” In it, she studies a model meant to capture the tradeoff between encouraging a firm’s investment in workers and reducing labor mobility. To bring the theory to data, she scrapes data on U.S. public firms from Securities and Exchange Commission filings and merges those with firm-level data from Compustat, plus some others, to get measures of firm investment in intangibles. She finds that when she brings her model to the data and calibrates it, the optimal policy is roughly a ban on noncompetes.

It’s an impressive paper. Again, I’m unsure how much to take from it to extrapolate to a ban on all workers. First, as I’ve written before, we know publicly traded firms are different from private firms, and that difference has changed over time. Second, it’s plausible that CEOs are different from other workers, and the relationship between CEO noncompetes and firm-level intangible investment isn’t identical to the relationship between mid-level engineers and investment in that worker.

Beyond particular issues of generalizing Shi’s paper, the larger concern is that this is the only paper that does a welfare analysis. That’s troubling to me as a basis for a major policy change.

I think an analogy to taxation is helpful here. I’ve published a few papers about optimal taxation, so it’s an area I’ve thought more about. Within optimal taxation, you see this type of paper a lot. Here’s a formal model that captures something that theorists find interesting. Here’s a simple approach that takes the model to the data.

My favorite optimal-taxation papers take this approach. Take this paper that I absolutely love, “Optimal Taxation with Endogenous Insurance Markets” by Mikhail Golosov & Aleh Tsyvinski.[3] It is not a price-theory paper; it is a Theory—with a capital T—paper. I’m talking lemmas-and-theorems type of stuff: a bunch of QEDs, followed by a calibration of the model to U.S. data.

How seriously should we take their quantitative exercise? After all, it was in the Quarterly Journal of Economics and my professors were assigning it, so it must be an important paper. But people who know this literature will quickly recognize that it’s not the quantitative result that makes that paper worthy of the QJE.

I was very confused by this early in my career. If we find the best paper, why not take the result completely seriously? My first publication, which was in the Journal of Economic Methodology, grew out of my confusion about how economists were evaluating optimal tax models. Why did professors think some models were good? How were the authors justifying that their paper was good? Sometimes papers are good because they closely match the data. Sometimes papers are good because they quantify an interesting normative issue. Sometimes papers are good because they expose an interesting means-ends analysis. Most of the time, papers do all three blended together, and it’s up to the reader to be sufficiently steeped in the literature to understand what the paper is really doing. Maybe I read the Shi paper wrong, but I read it mostly as a theory paper.

One difference between the optimal-taxation literature and the optimal-noncompete policy world is that the Golosov & Tsyvinski paper is situated within 100 years of formal optimal-taxation models. The knowledgeable scholar of public economics can compare and contrast. The paper has a lot of value because it does one particular thing differently than everything else in the literature.

Or think about patent policies, which is what I compared noncompetes to in my original post. There is a tradeoff between encouraging innovation and restricting monopoly. Quantifying that tradeoff takes a model and data. Rafael Guthmann & David Rahman have a new paper on the optimal length of patents that Rafael summarized at Rafael’s Commentary. The basic structure is very similar to the Shi or Golosov & Tsyvinski papers: interesting models supplemented with a calibration exercise to put a number on the optimal policy. Guthmann & Rahman find four to eight years, instead of the current system of 20 years.

Is that true? I don’t know. I certainly wouldn’t want the FTC to unilaterally put the number at four years because of the paper. But I am certainly glad for their contribution to the literature and our understanding of the tradeoffs and that I can position that number in a literature asking similar questions.

I’m sorry to all the people doing great research on noncompetes, but by my reading, we are just not there yet. For studying optimal noncompete policy in a model, we have one paper. It was groundbreaking to tie this theory to novel data, but it is still one welfare analysis.

My Priors: What’s Holding Me Back from the Revolution

In a world where you start without any thoughts about which direction is optimal (a uniform prior) and you observe one paper that says bans are net positive, you should think that bans are net positive. Some information is better than none and now you have some information. Make a choice.

But that’s not the world we live in. We all come to a policy question with prior beliefs that affect how much we update our beliefs.

For me, I have three slightly weird priors that I will argue you should also have but currently place me out of step with most economists.

First, I place more weight on theoretical arguments than most. No one sits back and just absorbs the data without using theory; that’s impossible. All data requires theory. Still, I think it is meaningful to say some people place more weight on theory. I’m one of those people.

To be clear, I also care deeply about data. But I write theory papers and a theory-heavy newsletter. And I think these theories matter for how we think about data. The theoretical justification for noncompetes has been around for a long time, as I discussed in my original post, so I won’t say more.

The second way that I differ from most economists is even weirder. I place weight on the benefits of existing agreements or institutions. The longer they have been in place, the more weight I place on the benefits. Josh Hendrickson and I have a paper with Alex Salter that basically formalized the argument from George Stigler that “every long-lasting institution is efficient.” When there are feedback mechanisms, such as with markets or democracy, the resulting institutions are the result of an evolutionary process that slowly selects more and more gains from trade. If they were so bad, people would get rid of them eventually. That’s not a free-market bias, since it also means that I think something like the Medicare system is likely an efficient form of social insurance and intertemporal bargaining for people in the United States.

Back to noncompetes, many companies use noncompetes in many different contexts. Many workers sign them. My prior is that they do so because a noncompete is a mutually beneficial contract that allows them to make trades in a world with transaction costs. As I explained in a recent post, Yoram Barzel taught us that, in a world with transaction costs, people will “erect social institutions to impose and enforce the restraints.”

One possible rebuttal is that noncompetes, while existing for a long time, have only become common in the past few decades. That is not very long-lasting, and so the FTC ban is a natural policy response to a new challenge that arose and the discovery that these contracts are actually bad. That response would persuade me more if this were a policy response brought about by a democratic bargain instead of an ideological agenda pushed by the chair of the FTC, which I think is closer to reality. That is Earl Thompson and Charlie Hickson’s spin on Stigler’s efficient institutions point. Ideology gets in the way.

Finally, relative to most economists, I place more weight on experimentation and feedback mechanisms. Most economists still think of the world through the lens of the benevolent planner doing a cost-benefit analysis. I do that sometimes, too, but I also think we need to really take our own informational limitations seriously. That’s why we talk about limited information all the time on my newsletter. Again, if we started out completely agnostic, this wouldn’t point one way or the other: we recognize that we don’t know much, but a slight signal would push us either way. When paired with my previous point about evolution, however, it means I’m hesitant about a national ban.

I don’t think the science is settled on lots of things that people want to tell us the science is settled on. For example, I’m not convinced we know markups are rising. I’m not convinced market concentration has skyrocketed, as others want to claim.

It’s not a free-market bias, either. I’m not convinced the Jones Act is bad. I’m not convinced it’s good, but Josh has convinced me that the question is complicated.

Because I’m not ready to easily say the science is settled, I want to know how we will learn if we are wrong. In a prior Truth on the Market post about the FTC rule, I quoted Thomas Sowell’s Knowledge and Decisions:

In a world where people are preoccupied with arguing about what decision should be made on a sweeping range of issues, this book argues that the most fundamental question is not what decision to make but who is to make it—through what processes and under what incentives and constraints, and with what feedback mechanisms to correct the decision if it proves to be wrong.

A national ban bypasses this and severely cuts off our ability to learn if we are wrong. That worries me.

Maybe this all means that I am too conservative and need to be more open to changing my mind. Maybe I’m inconsistent in how I apply these ideas. After all, “there’s always another margin” also means that the harm of a policy will be smaller than anticipated since people will adjust to avoid the policy. I buy that. There are a lot more questions to sort through on this topic.

Unfortunately, the discussion around noncompetes has been short-circuited by the FTC. Hopefully, this post gave you tools to think about a variety of policies going forward.


[1] The U.S. Bureau of Labor Statistics now collects data on noncompetes. Since 2017, we’ve had one question on noncompetes in the National Longitudinal Survey of Youth 1997. Donna S. Rothstein and Evan Starr (2021) also find that noncompetes cover around 18% of workers. It is very plausible this is an understatement, since noncompetes are complex legal documents, and workers may not understand that they have one.

[2] Other papers combine theory and empirics. Kurt Lavetti, Carol Simon, & William D. White (2023) build a model to derive testable implications about holdups. They use data on doctors and find noncompetes raise returns to tenure and lower turnover.

[3] It’s not exactly the same. The Golosov & Tsyvinski paper doesn’t even take the calibration seriously enough to include the details in the published version. Shi’s paper is a more serious quantitative exercise.

One of my favorite books is Thomas Sowell’s Knowledge and Decisions, in which he builds on Friedrich Hayek’s insight that knowledge is dispersed throughout society. Hayek’s insight that markets can bring dispersed but important knowledge to bear with substantial effectiveness is one that many of us, especially economists, pay lip service to, but it often gets lost in day-to-day debates about policy. Sowell uses Hayek’s insight to understand and critique social, economic, and political institutions, which he judges in terms of “what kinds of knowledge can be brought to bear and with what effectiveness.” 

I’m reminded of Sowell in witnessing the current debate surrounding the Federal Trade Commission’s (FTC) proposed rule to enact a nationwide ban on noncompetes in employment agreements. A major policy change like this obviously sets off debate. Among economists, the discussion surrounds economic arguments and empirical evidence on the effects of noncompetes. Among lawyers, it largely centers on the legality of the rule.

But all of the discussion seems to ignore Sowell’s insights. He writes:

In a world where people are preoccupied with arguing about what decision should be made on a sweeping range of issues, this book argues that the most fundamental question is not what decision to make but who is to make it—through what processes and under what incentives and constraints, and with what feedback mechanisms to correct the decision if it proves to be wrong. (emphasis added)

Once we recognize that knowledge doesn’t simply exist out in the ether for us all to grab, but depends instead on the institutions within which we operate, the outcome is going to hinge on who gets to decide and how their knowledge evolves. How easily can the decision maker respond to new information and update their beliefs? How easily can they make incremental changes to incremental information?

To take two extremes, stock markets are institutions where decision makers take account of new information by the minute, allowing for rapid and marginal changes in decisions. At the other extreme is the Supreme Court, where precedents take years or decades to overturn if they are based on information that becomes outdated.

Let’s accept for the sake of argument that all of the best experts today agree that noncompetes are a net negative for society. We have to deal with the fact that we can be proven wrong in the future, and different regimes will deal with that future change differently.

If implemented, the FTC’s total ban of noncompetes replaces the decision making of businesses and workers, as well as the oversight of state governments, with a one-size-fits-all approach. Under that new regime, we need to ask: How quickly will the FTC respond to new information—for example, evidence that the ban had destructive consequences? How easily can it make incremental changes?

One may hope the FTC, as an expert-led agency, could easily adjust to incoming evidence. They will just follow the science! But that response would be self-contradictory here. The FTC just showed that it is happy to go from 0 to 100 with its rules. It went from doing hardly any work on noncompetes to a total ban. That is not how a benevolent regulator responding to information would process and act on it in any optimal-policy model.

This is part of a long-run trend in politics. Sowell again:

Even within democratic nations, the locus of decision making has drifted away from the individual, the family, and voluntary associations of various sorts, and toward government. And within government, it has moved away from elected officials subject to voter feedback, and toward more insulated governmental institutions, such as bureaucracies and the appointed judiciary.

We may want that. Not every decision should be left up to the individual. We have rights and policies that constrain individuals. The U.S. Constitution, for example, doesn’t allow states to regulate interstate commerce. But the takeaway is not that decentralization is always better. Rather, the point is that we need to consider the tradeoff.

[The following is adapted from a piece in the Economic Forces newsletter, which you can subscribe to on Substack.]

Everyone is worried about growing concentration in U.S. markets. President Joe Biden’s July 2021 executive order on competition begins with the assertion that “excessive market concentration threatens basic economic liberties, democratic accountability, and the welfare of workers, farmers, small businesses, startups, and consumers.” No word on the threat of concentration to baby puppies, but the takeaway is clear. Concentration is everywhere, and it’s bad.

On the academic side, Ufuk Akcigit and Sina Ates have an interesting paper on “ten facts”—worrisome facts, in my reading—about business dynamism. Fact No. 1: “Market concentration has risen.” Can’t get higher than No. 1, last time I checked.

Unlike most people commenting on concentration, I don’t see any reason to see high or rising concentration itself as a bad thing (although it may be a sign of problems). One key takeaway from industrial organization is that high concentration tells us nothing about levels of competition and so has no direct normative implication. I bring this up all the time (see 1, 2, 3, 4).

So without worrying about whether rising concentration is a good or bad thing, this post asks, “is rising concentration a thing?” Is there any there there? Where is it rising? For what measures? Just the facts, ma’am.

How to Measure Concentration

I will focus here primarily on product-market concentration and save labor-market concentration for a later post. The following is a brief literature review. I do not cover every paper. If I missed an important one, tell me in the comments.

There are two steps to calculating concentration. First, define the market. In empirical work, a market usually includes the product sold or the input bought (e.g., apples) and a relevant geographic region (United States). With those two bits of information decided, we have a “market” (apples sold in the United States).

Once we have defined the relevant market, we need a measure of concentration within that market. The most straightforward measure is a concentration ratio for some number of firms. If you see “CR4,” it refers to the percentage of total sales in the market that goes to the four largest firms. One problem with this measure is that CR4 ignores everything about the fifth-largest and smaller firms.

The other option used to quantify concentration is called the Herfindahl-Hirschman index (HHI), which is a number between 0 and 10,000 (or 0 and 1, if it is normalized), with 10,000 meaning all of the sales go to one firm and 0 being the limit as many firms each have smaller and smaller shares. The benefit of the HHI is that it uses information on the whole distribution of firms, not just the top few.[1]
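
Here is a minimal sketch of both measures, computed on a hypothetical market whose shares are made up for illustration:

```python
def cr4(shares):
    """Combined market share of the four largest firms (shares in percent)."""
    return sum(sorted(shares, reverse=True)[:4])

def hhi(shares):
    """Herfindahl-Hirschman index: the sum of squared shares, from 0 to 10,000."""
    return sum(s ** 2 for s in shares)

# Hypothetical market: six firms whose shares sum to 100 percent.
shares = [40, 20, 15, 10, 10, 5]
print(cr4(shares))  # 85
print(hhi(shares))  # 2450 -- just below the 2,500 "highly concentrated" line discussed later
```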

The Biggest Companies

With those preliminaries out of the way, let’s start with concentration among the biggest firms over the longest time-period and work our way to more granular data.

When people think of “corporate concentration,” they think of the giant companies like Standard Oil, Ford, Walmart, and Google. People maybe even picture a guy with a monocle, that sort of thing.

How much of total U.S. sales go to the biggest firms? How has that changed over time? These questions are the focus of Spencer Y. Kwon, Yueran Ma, and Kaspar Zimmermann’s (2022) “100 Years of Rising Corporate Concentration.”

Spoiler alert: they find rising corporate concentration. But what does that mean?

They look at the share of assets and sales concentrated among the largest 1% and 0.1% of businesses. For sales, due to data limitations, they need to use net income (excluding firms with negative net income) for the first half of the sample and receipts (sales) for the second half.

In 1920, the top 1% of firms had about 60% of total sales. Now, that number is above 80%. For the top 0.1%, the number rose from about 35% to 65%. Asset concentration is even more striking, rising to almost 100% for the top 1% of firms.

Kwon, Ma, and Zimmermann (2022)

Is this just mechanical from the definitions? That was my first concern. Suppose a bunch of small firms enter that have no effect on the economy. Everyone starts a Substack that makes no money. 🤔 This mechanically bumps big firms in the top 1.1% into the top 1% and raises the share. The authors have thought about this far more than my two minutes of reading, so they did something simple.

The simple comparison is to limit the economy to just the top 10% of firms. What share goes to the top 1%? In that world, when small firms enter, there is still a bump from the top 1.1% into the top 1%, but there is also a bump from the top 10.1% into the top 10%. Both the numerator and the denominator of the ratio are mechanically increasing. That doesn’t perfectly solve the issue, since the firm bumped in at the 1.1% mark is, by definition, bigger than the firm bumped in at the 10.1% mark, but it’s a quick check. Still, we see a similar rise in the top 1%.
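
Here is a small simulation of the mechanical effect and of that top-10% check, on made-up firm sizes (heavy-tailed sales drawn for illustration; this is the logic of the check, not the authors’ actual procedure):

```python
import numpy as np

rng = np.random.default_rng(0)

def top_share(sales, top_pct, within_pct=100):
    """Sales share of the top `top_pct`% of firms, measured within the top `within_pct`%."""
    s = np.sort(sales)[::-1]
    base = s[: max(1, int(len(s) * within_pct / 100))]
    top = s[: max(1, int(len(s) * top_pct / 100))]
    return top.sum() / base.sum()

# Hypothetical economy: 10,000 firms with heavy-tailed (Pareto) sales.
sales = rng.pareto(1.2, size=10_000) + 1

# 5,000 tiny entrants show up (everyone starts a Substack that makes no money).
with_entrants = np.concatenate([sales, np.full(5_000, 1e-6)])

print(top_share(sales, 1), top_share(with_entrants, 1))          # the top-1% share rises mechanically
print(top_share(sales, 1, 10), top_share(with_entrants, 1, 10))  # the within-top-10% version moves much less
```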

Big companies are getting bigger, even relatively.

I’m not sure how much weight to put on this paper for thinking about concentration trends. It’s an interesting paper, and that’s why I started with it. But I’m very hesitant to think of “all goods and services in the United States” as a relevant market for any policy question, especially antitrust-type questions, which is where we see the most talk about concentration. But if you’re interested in corporate concentration influencing politics, these numbers may be super relevant.

At the industry level, which is closer to an antitrust market but still not one, they find similar trends. The paper’s website (yes, the paper has a website. Your papers don’t?) has a simple display of the industry-level trends. They match the aggregate change, but the timing differs.

Industry-Level Concentration Trends, Public Firms

Moving down from big to small, we can start asking about publicly traded firms. These tend to be larger firms, but the category doesn’t capture all firms and is biased, as I’ve pointed out before.

Grullon, Larkin, and Michaely (2019) look at the average HHI at the 3-digit NAICS level (for example, oil and gas is “a market”). Below is the plot of the (sales-weighted) average HHI for publicly traded firms. It dropped in the 80s and early 90s, rose rapidly in the late 90s and early 2000s, and has slowly risen since. I’d say “concentration is rising” is the takeaway.

Average publicly-traded HHI (3-digit NAICS) from Grullon, Larkin, and Michaely (2019)

The average hides how the distribution has changed. For antitrust, we may care whether a few industries have seen a large increase in concentration or all industries have seen a small increase.

The figure below plots the distribution of changes from 1997 to 2012. Many industries saw a large increase (more than 40%) in the HHI. We get a similar picture if we look at the share of sales going to the top four firms.

Distribution of changes in publicly traded HHI (3-digit NAICS) between 1997 and 2012, from Grullon, Larkin, and Michaely (2019)

One issue with NAICS is that it was designed to lump firms together from a producer’s perspective, not the consumer’s perspective. We will say more about that below.

Another issue with Compustat is that we only have industry at the firm level, not the establishment level. For example, every 3M office or plant gets labeled as “Miscellaneous Manufactured Commodities,” which doesn’t separate the plants that make tape (like the one in my hometown) from those that make surgical gear.

But firms increasingly operate across a wider and wider range of businesses. That may not matter if you’re worried about political corruption from concentration. But if you’re thinking about markets, it seems problematic that, in Compustat, all of Amazon’s web services (cloud servers) revenue gets lumped into NAICS 454 “Nonstore Retailers,” since that’s Amazon’s firm-level designation.

Hoberg and Phillips (2022) try to account for this increasing “scope” of businesses. They make an adjustment to allow a firm to exist in multiple industries. After making this correction, they find a falling average HHI.

Hoberg and Phillips (2021)

Industry-Level Concentration Trends, All Firms

Why stick to just publicly traded firms? That could be especially problematic since we know that the set of public firms is different from private firms, and the differences have changed over time. Public firms compete with private firms and so are in the same market for many questions.

And we have data on public and private firms. Well, I don’t. I’m stuck with Compustat data. But big names have the data.

Autor, Dorn, Katz, Patterson, and Van Reenen (2020), in their famous “superstar firms” paper, have U.S. Census panel data at the firm and establishment level, covering six major sectors: manufacturing, retail trade, wholesale trade, services, utilities and transportation, and finance. They focus on the share of the top 4 (CR4) or the top 20 (CR20) firms, both in terms of sales and employment. Every series, besides employment in manufacturing, has seen an increase. In retail, there has been nearly a doubling of the sales share to the top 4 firms.

Autor, Dorn, Katz, Patterson, and Van Reenen (2020)

I guess that settles it. Three major papers show the same trend. It’s settled… If only economic trends were so simple.

What About Narrower Product Markets?

For antitrust cases, we define markets slightly differently. We don’t use NAICS codes, since they are designed to lump together similar producers, not similar products. We also don’t use the six “major industries” in the Census, since those are also too large to be meaningful for antitrust. Instead, the product level is much smaller.

Luckily, Benkard, Yurukoglu, and Zhang (2021) construct concentration measures that are intended to capture consumption-based product markets. They have respondent-level data from the annual “Survey of the American Consumer” available from MRI Simmons, a market-research firm. The survey asks specific questions about which brands consumers buy.

They divide products into 457 product-market categories, each separated into 29 locations. Product “markets” are then aggregated into “sectors.” Another interesting feature is that they know the ownership of different products, even if the brand names differ. Ownership is what matters for antitrust.

They find falling concentration at the market level (the narrowest product), both at the local and the national level. At the sector level (which aggregates markets), there is a slight increase.

Benkard, Yurukoglu, and Zhang (2021)

If you focus on industries with an HHI above 2,500, the level that is considered “highly concentrated” in the U.S. Horizontal Merger Guidelines, the share of “highly concentrated” industries fell from 48% in 1994 to 39% in 2019. I’m not sure how seriously to take this threshold, since the merger guidelines take a different approach to defining markets. Overall, the authors say, “we find no evidence that market power (sic) has been getting worse over time in any broad-based way.”

Is the United States a Market?

Markets are local

Benkard, Yurukoglu, and Zhang make an important point about location. In what situations is the United States the appropriate geographic region? The U.S. housing market is not a meaningful market. If my job and family are in Minnesota, I’m not considering buying a house in California. Those are different markets.

While the first few papers above focused on concentration in the United States as a whole or within U.S. companies, is that really the appropriate market? Maybe markets are much more localized, and the trends could be different.

Along come Rossi-Hansberg, Sarte, and Trachter (2021) with a paper titled “Diverging Trends in National and Local Concentration.” In that paper, they argue that there are, you guessed it, diverging trends in national and local concentration. If we look at concentration at different geographic levels, we get a different story. Their main figure shows that, as we move to smaller geographic regions, concentration goes from rising over time to falling over time.

Figure 1 from Rossi-Hansberg, Sarte, and Trachter (2020)

How is it possible to have such a different story depending on area?

Imagine a world where each town has its own department store. At the national level, concentration is low, but each town has a high concentration. Now Walmart enters the picture and sets up shop in 10,000 towns. That increases national concentration while reducing local concentration, which goes from one store to two. That sort of dynamic seems plausible, and the authors spend a lot of time discussing Walmart.

The paper was really important, because it pushed people to think more carefully about the type of concentration that they wanted to study. Just because data tends to be at the national level doesn’t mean that’s appropriate.

As with all these papers, however, the data source matters. There are a few concerns with the “National Establishment Time Series” (NETS) data used, as outlined in Crane and Decker (2020). Lots of the data is imputed, meaning it was originally missing and then filled in with statistical techniques. Almost every Walmart store has exactly the median sales-to-worker ratio. This suggests the data starts with the number of workers and imputes the sales data from there. That’s fine if you are interested in worker concentration, but this paper is about sales.

Instead of relying on NETS data, Smith and Ocampo (2022) have Census data on product-level revenue for all U.S. retail stores between 1992 and 2012. The downside is that it is only retail, but that’s an important sector and can help us make sense of the “Walmart enters town” concentration story.

Unlike Rossi-Hansberg, Sarte, and Trachter, Smith and Ocampo find rising concentration at both the local and national levels. It depends on the exact specification. They find changes in local concentration between -1.5 and 12.6 percentage points. Regardless, the -17 percentage points of Rossi-Hansberg, Sarte, and Trachter is well outside their estimates. To me, that suggests we should be careful with the “declining local concentration” story.

Smith and Ocampo (2022).

Ultimately, for local stories, data is the limitation. Take all of the data issues at the aggregate level and then try to drill down to the ZIP code or city level. It’s tough. The data just don’t exist in general, outside of Census data for a few sectors. The other option is to dig into a particular industry. Miller, Osborne, Sheu, and Sileo (2022) study the cement industry. 😱 (They find rising concentration.)

Markets are global

Instead of going more local, what if we go the other way? What makes markets unique in 2022 vs. 1980 is not that they are local but that they are global. Who cares if U.S. manufacturing is more concentrated if U.S. firms now compete in a global market?

The standard approach (used in basically all the papers above) computes market shares based on where the good was manufactured and doesn’t look at where the goods end up. (Compustat data is more of a mess because it includes lots of revenue from foreign establishments of U.S. firms.)

What happens when we look at where goods are ultimately sold? Again, that’s relevant for antitrust. Amiti and Heise (2021) augment the usual Census of Manufacturers with transaction-level import data from the Longitudinal Firm Trade Transactions Database (LFTTD) of the Census Bureau. They see U.S. customs forms, which lets them strip out the goods that U.S. manufacturers sell abroad. That’s the “export-adjusted” measure.

They then do something similar for imports to come up with “market concentration.” That is their measure of concentration for all firms selling in the U.S., irrespective of where the firm is located. That line is completely flat from 1992-2012.

Again, this is only manufacturing, but it is a striking example of how we need to be careful with our measures of concentration. This seems like a very important correction to concentration measures for most questions and for many industries. Tech is clearly a global market.

Conclusion

If I step back from all of these results, I think it is safe to say that concentration is rising by most measures. However, there are lots of caveats. In a sector like manufacturing, the relevant global market is not more concentrated. The Rossi-Hansberg, Sarte, and Trachter paper suggests, despite data issues, local concentration could be falling. Again, we need to be careful.

Alex Tabarrok says trust literatures, not papers. What does that imply here?

Take the last paper, by Amiti and Heise. Yes, it is only one industry, but in the one industry for which we have the import/export correction, the concentration results flip. That leaves me unsure of what is going on.


[1] There’s often a third step. If we are interested in what is going on in the overall economy, we need to somehow average across different markets. There is sometimes debate about how to average a bunch of HHIs. Let’s not worry too much about that for purposes of this post. Generally, if you’re looking at the concentration of sales, the industries are weighted by sales.

“Just when I thought I was out, they pull me back in!” says Al Pacino’s character, Michael Corleone, in Godfather III. That’s how Facebook and Google must feel about S. 673, the Journalism Competition and Preservation Act (JCPA).

Gus Hurwitz called the bill dead in September. Then it passed the Senate Judiciary Committee. Now, there are some reports that suggest it could be added to the obviously unrelated National Defense Authorization Act (it should be noted that the JCPA was not included in the version of the NDAA introduced in the U.S. House).

For an overview of the bill and its flaws, see Dirk Auer and Ben Sperry’s tl;dr. The JCPA would force “covered” online platforms like Facebook and Google to pay for journalism accessed through those platforms. When a user posts a news article on Facebook, which then drives traffic to the news source, Facebook would have to pay. I won’t get paid for links to my banger cat videos, no matter how popular they are, since I’m not a qualifying publication.

I’m going to focus on one aspect of the bill: the use of “final offer arbitration” (FOA) to settle disputes between platforms and news outlets. FOA is sometimes called “baseball arbitration” because it is used for contract disputes in Major League Baseball. This form of arbitration has also been implemented in other jurisdictions to govern similar disputes, notably by the Australian ACCC.

Before getting to the more complicated case, let’s start simple.

Scenario #1: I’m a corn farmer. You’re a granary who buys corn. We’re both invested in this industry, so let’s assume we can’t abandon negotiations in the near term and need to find an agreeable price. In a market, people make offers. Prices vary each year. I decide when to sell my corn based on prevailing market prices and my beliefs about when they will change.

Scenario #2: A government agency comes in (without either of us asking for it) and says the price of corn this year is $6 per bushel. In conventional economics, we call that a price regulation. Unlike a market price, where both sides sign off, regulated prices do not enjoy mutual agreement by the parties to the transaction.

Scenario #3: Instead of a price imposed independently by regulation, one of the parties (say, the corn farmer) may seek a higher price of $6.50 per bushel and petition the government. The government agrees and the price is set at $6.50. We would still call that price regulation, but the outcome reflects what at least one of the parties wanted, and some may argue that it helps “the little guy.” (Let’s forget that many modern farms are large operations with bargaining power. In our head and in this story, the corn farmer is still a struggling mom-and-pop about to lose their house.)

Scenario #4: Instead of listening only to the corn farmer, both the farmer and the granary tell the government their “final offer” and the government picks one of those offers, not somewhere in between. The parties don’t give any reasons—just the offer. This is called “final offer arbitration” (FOA).

As an arbitration mechanism, FOA makes sense, even if it is not always ideal. It avoids some of the issues that can attend “splitting the difference” between the parties. 
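
To see the intuition, here is a small simulation (all numbers made up, and the arbitrator is assumed to pick whichever offer is closer to its own noisy view of a fair price, a standard way to model FOA rather than anything specified in the JCPA):

```python
import numpy as np

rng = np.random.default_rng(0)

# The arbitrator's view of a "fair" corn price is noisy around $6.00 per bushel.
arbitrator_views = 6.00 + rng.normal(0, 0.25, size=100_000)

granary_offer = 5.90  # the buyer's (low) final offer

def split_the_difference(farmer_offer):
    # Conventional arbitration: settle in the middle of the two offers.
    return (farmer_offer + granary_offer) / 2

def final_offer_arbitration(farmer_offer):
    # FOA: the arbitrator must pick whichever offer is closer to its own view.
    pick_farmer = np.abs(arbitrator_views - farmer_offer) < np.abs(arbitrator_views - granary_offer)
    return np.where(pick_farmer, farmer_offer, granary_offer).mean()

for farmer_offer in (6.10, 6.50, 7.00):
    print(farmer_offer,
          round(split_the_difference(farmer_offer), 2),
          round(final_offer_arbitration(farmer_offer), 2))
```

Under split-the-difference, raising the ask from $6.10 to $7.00 drags the settlement from $6.00 up to $6.45. Under FOA, the $7.00 ask is almost never the closer offer, so the farmer usually ends up with the granary’s $5.90; extreme offers backfire, which pushes both sides toward moderation.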

While it is better than other systems, it is still a price regulation. In the JCPA’s case, it would not be imposed immediately; the two parties can negotiate on their own (in the shadow of the imposed FOA). And the actual arbitration decision wouldn’t technically be made by the government, but by a third party. Fine. But ultimately, after stripping away the veneer, this is all just an elaborate mechanism built atop the threat of the government choosing the price in the market.

I call that price regulation. Unlike in voluntary markets, at least one of the parties does not agree with the final price. Moreover, neither party explicitly chose the overall arbitration mechanism in the first place.

The JCPA’s FOA system is not precisely like the baseball situation. In baseball, there is choice on the front-end. Players and owners agree to the system. In baseball, there is also choice after negotiations start. Players can still strike; owners can enact a lockout. Under the JCPA, the platforms must carry the content. They cannot walk away.

I’m an economist, not a philosopher. The problem with force is not that it is unpleasant. Instead, the issue is that force distorts the knowledge conveyed through market transactions. That distortion prevents resources from moving to their highest valued use. 

How do we know the apple is more valuable to Armen than it is to Ben? In a market, “we” don’t need to know. No benevolent outsider needs to pick the “right” price for other people. In most free markets, a seller posts a price. Buyers just need to decide whether they value it more than that price. Armen voluntarily pays Ben for the apple and Ben accepts the transaction. That’s how we know the apple is in the right hands.

Often, transactions are about more than just price. Sometimes there may be haggling and bargaining, especially on bigger purchases. Workers negotiate wages, even when the ad stipulates a specific wage. Home buyers make offers and negotiate. 

But this just kicks the information problem up one more level. Negotiating is costly. That is why, in anticipation of costly disputes down the road, the two sides sometimes voluntarily agree to use an arbitration mechanism. MLB players agree to baseball arbitration. That agreement reveals that both sides believe the costs of disputes outweigh the losses from arbitration.

Again, each side conveys its beliefs and values by agreeing to the arbitration mechanism. Each step in the negotiation process allows the parties to convey the relevant information. No outsider needs to know “the right” answer. For a choice to convey information about relative values, it needs to be freely chosen.

At an abstract level, any trade has two parts. First, people agree to the mechanism, which determines who makes what kinds of offers. At the grocery store, the mechanism is “seller picks the price and buyer picks the quantity.” For buying and selling a house, the mechanism is “seller posts price, buyer can offer above or below and request other conditions.” After both parties agree to the terms, the mechanism plays out and both sides make or accept offers within the mechanism. 

We need choice on both aspects for the price to capture each side’s private information. 

For example, suppose someone comes up to you with a gun and says “give me your wallet or your watch. Your choice.” When you “choose” your watch, we don’t actually call that a choice, since you didn’t pick the mechanism. We have no way of knowing whether the watch means more to you or to the guy with the gun. 

When the JCPA forces Facebook to negotiate with a local news website and Facebook offers to pay a penny per visit, it conveys no information about the relative value that the news website is generating for Facebook. Facebook may just be worried that the website will ask for two pennies and the arbitrator will pick the higher price. It is equally plausible that in a world without transaction costs, the news would pay Facebook, since Facebook sends traffic to them. Is there any chance the arbitrator will pick Facebook’s offer if it asks to be paid? Of course not, so Facebook will never make that offer. 

For sure, things are imposed on us all the time. That is the nature of regulation. Energy prices are regulated. I’m not against regulation. But we should defend that use of force on its own terms and be honest that the system is one of price regulation. We gain nothing by a verbal sleight of hand that turns losing your watch into a “choice” and the JCPA’s FOA into a “negotiation” between platforms and news.

In economics, we often ask about market failures. In this case, is there a sufficient market failure in the market for links to justify regulation? Is that failure resolved by this imposition?

[This post is a contribution to Truth on the Market‘s continuing digital symposium “FTC Rulemaking on Unfair Methods of Competition.” You can find other posts at the symposium page here. Truth on the Market also invites academics, practitioners, and other antitrust/regulation commentators to send us 1,500-4,000 word responses for potential inclusion in the symposium.]

The current Federal Trade Commission (FTC) appears to have one overarching goal: find more ways to sue companies. The three Democratic commissioners (with the one Republican dissenting) issued a new policy statement earlier today that brings long-abandoned powers back into the FTC’s toolkit. Under Chair Lina Khan’s leadership, the FTC wants to bring challenges against “unfair methods of competition in or affecting commerce.” If that sounds extremely vague, that’s because it is. 

For the past few decades, antitrust violations have fallen into two categories. Actions like price-fixing with competitors are assumed to be illegal. Other actions are only considered illegal if they are proven to sufficiently restrain trade. This latter approach is called the “rule of reason.”

The FTC now wants to return to a time when it could also challenge conduct it viewed as unfair. The policy statement says the commission will go after behavior that is “coercive, exploitative, collusive, abusive, deceptive, predatory, or involve the use of economic power of a similar nature.” Who could argue against stopping coercive behavior? The problem is what that means in practice for actual antitrust cases. No one knows, not businesses and not the courts. It’s up to the whims of the FTC.

This is how antitrust used to be. In 1984, the 2nd U.S. Circuit Court of Appeals admonished the FTC and argued that “the Commission owes a duty to define the conditions under which conduct … would be unfair so that businesses will have an inkling as to what they can lawfully do rather than be left in a state of complete unpredictability.” Fairness, as put forward in the FTC Act, proved unworkable as an antitrust standard.

The FTC’s movement to clarify what “unfair” means led to a 2015 policy statement, which the new statement supersedes. In the 2015 statement, the Obama-era FTC, with bipartisan support, issued new rules laying out what would qualify as unfair methods of competition. In doing so, they rolled “unfair methods” under the rule of reason. The consequences of the action matter.

The 2015 statement is part of a longer-run trend of incorporating more economic analysis into antitrust. For the past few decades, what courts have followed in antitrust law is called the “consumer welfare standard.” The basic idea is that the goal of antitrust decisions should be to choose whatever outcome helps consumers, or, as economists would put it, whatever increases “consumer welfare.” Once those are the terms of the dispute, economic analysis can help the courts sort out whether an action is anticompetitive.

Beyond helping to settle particular cases, these features of modern antitrust—like the consumer welfare standard and the rule of reason—give market participants some sense of what is illegal and what is not. That’s necessary for the rule of law to prevail and for markets to function.

The new FTC rules explicitly reject any appeal to consumer benefits or welfare. Efficiency gains from the action—labeled “pecuniary gains” to suggest they are merely about money—do not count as a defense. The FTC makes explicit that parties cannot justify behavior based on efficiencies or cost-benefit analysis.

Instead, as Commissioner Christine S. Wilson points out in her dissent, “the Policy Statement adopts an ‘I know it when I see it’ approach premised on a list of nefarious-sounding adjectives.” If the FTC claims some conduct is unfair, why worry about studying the consequences of the conduct?

The policy statement is an attempt to roll back the clock on antitrust and return to the incoherence of 1950s and 1960s antitrust. The FTC seeks to protect other companies, not competition or consumers. As Khan herself said, “for a lot of businesses it comes down to whether they’re going to be able to sink or swim.”

But President Joe Biden’s antitrust enforcers have struggled to win traditional antitrust cases. On mergers, for example, they have challenged a smaller percentage of deals and have been less successful than the FTC and DOJ under President Donald Trump.

A recent viral video captures a prevailing sentiment in certain corners of social media, and among some competition scholars, about how mergers supposedly work in the real world: firms start competing on price, one firm loses out, that firm agrees to sell itself to the other firm and, finally, prices are jacked up. (Warning: Keep the video muted. The voice-over is painful.)

The story ends there. In this narrative, the combination offers no possible cost savings. The owner of the firm that sold doesn’t start a new firm and begin competing tomorrow, nor does anyone else. The story ends with customers getting screwed.

And in this telling, it’s not just horizontal mergers that look like the one in the viral egg video. It is becoming a common theory of harm regarding nonhorizontal acquisitions that they are, in fact, horizontal acquisitions in disguise. The acquired party may possibly, potentially, with some probability, in the future, become a horizontal competitor. And of course, the story goes, all horizontal mergers are anticompetitive.

Therefore, we should have the same skepticism toward all mergers, regardless of whether they are horizontal or vertical. Steve Salop has argued that a problem with the Federal Trade Commission’s (FTC) 2020 vertical merger guidelines is that they failed to adopt anticompetitive presumptions.

This perspective is not just a meme on Twitter. The FTC and U.S. Justice Department (DOJ) are currently revising their guidelines for merger enforcement and have issued a request for information (RFI). The working presumption in the RFI (and we can guess this will show up in the final guidelines) is exactly the takeaway from the video: Mergers are bad. Full stop.

The RFI repeatedly requests information that would support the conclusion that the agencies should strengthen merger enforcement, rather than information that might point toward either stronger or weaker enforcement. For example, the RFI asks:

What changes in standards or approaches would appropriately strengthen enforcement against mergers that eliminate a potential competitor?

This framing presupposes that enforcement should be strengthened against mergers that eliminate a potential competitor.

Do Monopoly Profits Always Exceed Joint Duopoly Profits?

Should we assume enforcement, including vertical enforcement, needs to be strengthened? In a world with lots of uncertainty about which products and companies will succeed, why would an incumbent buy out every potential competitor? The basic idea behind the presumption is that, since profits are highest when there is only a single monopolist, the incumbent will always have an incentive to buy out any potential competitors.

The punchline for this anti-merger presumption is “monopoly profits exceed duopoly profits.” The argument is laid out most completely by Salop, although the argument is not unique to him. As Salop points out:

I do not think that any of the analysis in the article is new. I expect that all the points have been made elsewhere by others and myself.

Under the model that Salop puts forward, there should, in fact, be a presumption against any acquisition, not just horizontal acquisitions. He argues that:

Acquisitions of potential or nascent competitors by a dominant firm raise inherent anticompetitive concerns. By eliminating the procompetitive impact of the entry, an acquisition can allow the dominant firm to continue to exercise monopoly power and earn monopoly profits. The dominant firm also can neutralize the potential innovation competition that the entrant would provide.

We see a presumption against mergers in the recent FTC challenge of Meta’s purchase of Within. While Meta owns Oculus, a virtual-reality headset, and Within owns virtual-reality fitness apps, the FTC challenged the acquisition on the grounds that:

The Acquisition would cause anticompetitive effects by eliminating potential competition from Meta in the relevant market for VR dedicated fitness apps.

Given the prevalence of this perspective, it is important to examine the basic model’s assumptions. In particular, is it always true that—since monopoly profits exceed duopoly profits—incumbents have an incentive to eliminate potential competition for anticompetitive reasons?

I will argue no. The notion that monopoly profits exceed joint-duopoly profits rests on two key assumptions that hinder the simple application of the “merge to monopoly” model to antitrust.

First, even in a simple model, it is not always true that monopolists have both the ability and incentive to eliminate any potential entrant, simply because monopoly profits exceed duopoly profits.

For the simplest complication, suppose there are two possible entrants, rather than the common assumption of just one entrant at a time. The monopolist must now pay each of the entrants enough to prevent entry. But how much? If the incumbent has already paid one potential entrant not to enter, the second could then enter the market as a duopolist, rather than as one of three oligopolists. Therefore, the incumbent must pay the second entrant an amount sufficient to compensate a duopolist, not their share of a three-firm oligopoly profit. The same is true for buying the first entrant. To remain a monopolist, the incumbent would have to pay each possible competitor duopoly profits.

Because monopoly profits exceed joint duopoly profits, it is profitable to pay a single potential entrant its duopoly profit (half of the joint duopoly profit) to prevent entry. It is not, however, necessarily profitable for the incumbent to pay every potential entrant that amount to keep them all out.

Now go back to the video. Suppose two passersby, who also happen to have chickens at home, notice that they can sell their eggs. The best part? They don’t have to sit around all day; the lady on the right will buy them. The next day, perhaps, two new egg sellers arrive.

For a simple example, consider a Cournot oligopoly model with an industry inverse-demand curve of P(Q) = 1 - Q and constant marginal costs normalized to zero. In a market with N symmetric sellers, each seller earns 1/((N+1)^2) in profit. A monopolist makes a profit of 1/4. A duopolist earns a profit of 1/9. If there are three potential entrants, plus the incumbent, the monopolist must pay each of them the duopoly profit of 1/9, for a total of 3 × 1/9 = 1/3, which exceeds the monopoly profit of 1/4.
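As a quick check on that arithmetic, here is a minimal sketch (a toy calculation, not anything from Salop’s paper) that computes the symmetric Cournot profits and compares the total buyout payment to the monopoly profit for one, two, and three potential entrants.

```python
# Symmetric Cournot with inverse demand P(Q) = 1 - Q and zero marginal cost:
# each of N firms produces 1/(N+1), the price is 1/(N+1), and each earns 1/(N+1)^2.

def cournot_profit_per_firm(n_firms: int) -> float:
    return 1.0 / (n_firms + 1) ** 2

monopoly = cournot_profit_per_firm(1)   # 0.25
duopoly = cournot_profit_per_firm(2)    # ~0.111

for entrants in (1, 2, 3):
    # following the text, each potential entrant must be paid its duopoly profit to stay out
    payout = entrants * duopoly
    verdict = "worth paying" if payout < monopoly else "not worth paying"
    print(f"{entrants} potential entrant(s): total payout {payout:.3f} vs monopoly profit {monopoly:.2f} -> {verdict}")
# With one or two potential entrants the payout is below 0.25, so buying them off pays;
# with three the payout is 0.333, so the incumbent will not keep them all out.
```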

In equilibrium, the incumbent will not acquire any of the potential competitors, since it is too costly to keep them all out. More generally, with enough potential entrants, the monopolist in any market will not want to buy any of them out, and the outcome involves no acquisitions.

If we observe an acquisition in a market with many potential entrants (which any given market may or may not have), the merger cannot be solely about preserving monopoly profits, since the model above shows the incumbent has no incentive to do that.

If our model captures the dynamics of the market (which it may or may not, depending on a given case’s circumstances) but we observe mergers, there must be another reason for the deal besides maintaining a monopoly. The presence of multiple potential entrants overturns the antitrust implications of the truism that monopoly profits exceed duopoly profits. The question instead becomes an empirical one about the merger and market in question: would it actually be profitable to acquire all potential entrants?

The second simplifying assumption that restricts the applicability of Salop’s baseline model is that the incumbent has the lowest cost of production. He rules out the possibility of lower-cost entrants in Footnote 2:

Monopoly profits are not always higher. The entrant may have much lower costs or a better or highly differentiated product. But higher monopoly profits are more usually the case.

If one allows the possibility that an entrant may have lower costs (even if those lower costs won’t be achieved until the future, when the entrant gets to scale), it does not follow that monopoly profits (under the current higher-cost monopolist) necessarily exceed duopoly profits (with a lower-cost producer involved).

One cannot simply assume that all firms have the same costs or that the incumbent is always the lowest-cost producer. This is not just a modeling choice but has implications for how we think about mergers. As Geoffrey Manne, Sam Bowman, and Dirk Auer have argued:

Although it is convenient in theoretical modeling to assume that similarly situated firms have equivalent capacities to realize profits, in reality firms vary greatly in their capabilities, and their investment and other business decisions are dependent on the firm’s managers’ expectations about their idiosyncratic abilities to recognize profit opportunities and take advantage of them—in short, they rest on the firm managers’ ability to be entrepreneurial.

Given the assumptions that all firms have identical costs and there is only one potential entrant, Salop’s framework would find that all possible mergers are anticompetitive and that there are no possible efficiency gains from any merger. That’s the thrust of the video. We assume that the whole story is two identical-seeming women selling eggs. Since the acquired firm cannot, by assumption, have lower costs of production, it cannot improve on the incumbent’s costs of production.

Many Reasons for Mergers

But whether a merger is efficiency-reducing and bad for competition and consumers needs to be proven, not just assumed.

If we take the basic acquisition model literally, every industry would have just one firm. Every incumbent would acquire every possible competitor, no matter how small. After all, monopoly profits are higher than duopoly profits, and so the incumbent both wants to and can preserve its monopoly profits. The model does not give us a way to disentangle when mergers would stop without antitrust enforcement.

Mergers do not affect the production side of the economy, under this assumption, but exist solely to gain the market power to manipulate prices. Since the model finds no downside for the incumbent in acquiring a competitor, it would naturally acquire every last potential competitor, no matter how small, unless prevented by law.

Once we allow for the possibility that firms differ in productivity, however, it is no longer true that monopoly profits are greater than industry duopoly profits. We can see this most clearly in situations where there is “competition for the market” and the market is winner-take-all. If the entrant to such a market has lower costs, the profit under entry (when one firm wins the whole market) can be greater than the original monopoly profits. In such cases, monopoly maintenance alone cannot explain an entrant’s decision to sell.

An acquisition could therefore be both procompetitive and increase consumer welfare. For example, the acquisition could allow the lower-cost entrant to get to scale quicker. The acquisition of Instagram by Facebook, for example, brought the photo-editing technology that Instagram had developed to a much larger market of Facebook users and provided a powerful monetization mechanism that was otherwise unavailable to Instagram.

In short, the notion that incumbents can systematically and profitably maintain their market position by acquiring potential competitors rests on assumptions that, in practice, will regularly and consistently fail to materialize. It is thus improper to assume that most of these acquisitions reflect efforts by an incumbent to anticompetitively maintain its market position.

Slow wage growth and rising inequality over the past few decades have pushed economists more and more toward the study of monopsony power—particularly firms’ monopsony power over workers. Antitrust policy has taken notice. For example, when the Federal Trade Commission (FTC) and U.S. Justice Department (DOJ) initiated the process of updating their merger guidelines, their request for information included questions about how they should respond to monopsony concerns, as distinct from monopoly concerns.

From a pure economic-theory perspective, there is no important distinction between monopsony power and monopoly power. If Armen is trading his apples in exchange for Ben’s bananas, we can call Armen the seller of apples or the buyer of bananas. The labels (buyer and seller) are kind of arbitrary. As a matter of pure theory, it doesn’t matter. Monopsony and monopoly are just mirror images.

Some infer from this monopoly-monopsony symmetry, however, that extending antitrust to monopsony power will be straightforward. As a practical matter for antitrust enforcement, it is not so simple. The moment we go slightly less abstract and use the basic models that economists actually work with, monopsony is not simply the mirror image of monopoly. The tools that antitrust economists use to identify market power differ in the two cases.

Monopsony Requires Studying Output

Suppose that the FTC and DOJ are considering a proposed merger. For simplicity, they know that the merger will generate either efficiency gains (and they want to allow it) or market power (and they want to stop it), but not both. The challenge is to look at readily available data, like prices and quantities, to decide which it is. (Let’s ignore the ideal case in which they can estimate elasticities of demand and supply.)

In a monopoly case, if there are efficiency gains from a merger, the standard model has a clear prediction: the quantity sold in the output market will increase. An economist at the FTC or DOJ with sufficient data will be able to see (or estimate) the efficiencies directly in the output market. Efficiency gains result in either greater output at lower unit cost or else product-quality improvements that increase consumer demand. Because the merger lowers prices for consumers, the agencies (assume they care about the consumer welfare standard) will let it go through: consumers are better off.

In contrast, if the merger simply enhances monopoly power without efficiency gains, the quantity sold will decrease, either because the merging parties raise prices or because quality declines. Again, the empirical implication of the merger is seen directly in the market in question. Because the merger raises prices for consumers, the agencies will not let it go through: consumers are worse off. In both cases, you judge monopoly power by looking directly at the market that may or may not have monopoly power.

Unfortunately, the monopsony case is more complicated. Ultimately, we can be certain of the effects of monopsony only by looking at the output market, not the input market where the monopsony power is claimed.

To see why, consider again a merger that generates either efficiency gains or market (now monopsony) power. A merger that creates monopsony power will necessarily reduce the prices and quantity purchased of inputs like labor and materials. An overly eager FTC may see a lower quantity of input purchased and jump to the conclusion that the merger increased monopsony power. After all, monopsonies purchase fewer inputs than competitive firms.

Not so fast. Fewer input purchases may instead reflect efficiency gains. For example, if the efficiency gain arises from the elimination of redundancies in a hospital merger, the merged hospital will buy fewer inputs: it will hire fewer technicians and purchase fewer medical supplies. This may even reduce the wages of technicians or the price of medical supplies, even if the newly merged hospital is not exercising any market power to suppress wages.

The key point is that monopsony needs to be treated differently than monopoly. The antitrust agencies cannot simply look at the quantity of inputs purchased in the monopsony case as the flip side of the quantity sold in the monopoly case, because the efficiency-enhancing merger can look like the monopsony merger in terms of the level of inputs purchased.

How can the agencies differentiate efficiency-enhancing mergers from monopsony mergers? The easiest way may be for the agencies to look at the output market: an entirely different market than the one with the possibility of market power. Once we look at the output market, as we would do in a monopoly case, we have clear predictions. If the merger is efficiency-enhancing, there will be an increase in the output-market quantity. If the merger increases monopsony power, the firm perceives its marginal cost as higher than before the merger and will reduce output. 
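To make that concrete, here is a minimal numerical sketch. The functional forms (linear output demand, linear labor supply, output proportional to labor) are my own illustrative assumptions, not anything from the agencies or the guidelines; the point is only that both post-merger stories reduce input purchases and wages, while only the monopsony story reduces output.

```python
# Toy comparison (illustrative assumptions only): a merger that raises productivity A
# versus a merger that creates monopsony power over labor.
import numpy as np

L = np.linspace(0.01, 5, 100_000)     # grid of labor quantities

def outcomes(A, monopsony):
    wage = 2 + L                      # assumed inverse labor supply
    Q = A * L                         # assumed production: output proportional to labor
    revenue = (10 - Q) * Q            # assumed inverse output demand P(Q) = 10 - Q
    if monopsony:
        # the merged firm internalizes its effect on the wage: maximize revenue - w(L)*L
        i = (revenue - wage * L).argmax()
    else:
        # wage-taking benchmark: hire until the marginal revenue product equals the wage
        mrp = np.gradient(revenue, L)
        i = np.abs(mrp - wage).argmin()
    return L[i], wage[i], Q[i]

cases = {
    "baseline":   outcomes(A=1.0, monopsony=False),   # pre-merger benchmark
    "efficiency": outcomes(A=1.5, monopsony=False),   # merger creates an efficiency gain
    "monopsony":  outcomes(A=1.0, monopsony=True),    # merger creates monopsony power instead
}
for name, (labor, wage, output) in cases.items():
    print(f"{name:>10}: labor = {labor:.2f}, wage = {wage:.2f}, output = {output:.2f}")
# Both post-merger cases hire less labor at a lower wage than the baseline,
# but output rises in the efficiency case and falls in the monopsony case.
```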

In short, as we look for how to apply antitrust to monopsony-power cases, the agencies and courts cannot look to the input market to differentiate them from efficiency-enhancing mergers; they must look at the output market. It is impossible to discuss monopsony power coherently without considering the output market.

In real-world cases, mergers will not necessarily be either strictly efficiency-enhancing or strictly monopsony-generating, but a blend of the two. Any rigorous consideration of merger effects must account for both and make some tradeoff between them. The question of how guidelines should address monopsony power is inextricably tied to the consideration of merger efficiencies, particularly given the point above that identifying and evaluating monopsony power will often depend on its effects in downstream markets.

This is just one complication that arises when we move from the purest of pure theory to slightly more applied models of monopoly and monopsony power. Geoffrey Manne, Dirk Auer, Eric Fruits, Lazar Radic, and I go through more of the complications in our comments submitted to the FTC and DOJ on updating the merger guidelines.

What Assumptions Make the Difference Between Monopoly and Monopsony?

Now that we have shown that monopsony and monopoly are different, how do we square this with the initial observation that it was arbitrary whether we say Armen has monopoly power over apples or monopsony power over bananas?

There are two differences between the standard monopoly and monopsony models. First, in the vast majority of models of monopsony power, the agent with the monopsony power is buying goods only to use them in production. It has a “derived demand” for some factors of production. That demand ties its buying decision to an output market. For monopoly power, the firm sells the goods, makes some money, and that’s the end of the story.

The second difference is that the standard monopoly model looks at one output good at a time. The standard factor-demand model uses two inputs, which introduces a tradeoff between, say, capital and labor. We could force monopoly to look like monopsony by assuming the merging parties each produce two different outputs, apples and bananas. An efficiency gain could favor apple production and hurt banana consumers. While this sort of substitution among outputs is often realistic, it is not the standard economic way of modeling an output market.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the law, economics, and policy of the antitrust lawsuits against Google. The entire series of posts is available here.]

As one of the few economic theorists in this symposium, I believe my comparative advantage is in that: economic theory. In this post, I want to remind people of the basic economic theories that we have at our disposal, “off the shelf,” to make sense of the U.S. Department of Justice’s lawsuit against Google. I do not mean this to be a proclamation of “what economics has to say about X,” but merely to help us frame the issue.

In particular, I’m going to focus on the economic concerns of Google paying phone manufacturers (Apple, in particular) to be the default search engine installed on phones. While there is not a large literature on the economic effects of default contracts, there is a large literature on something that I will argue is similar: trade promotions, such as slotting contracts, where a manufacturer pays a retailer for shelf space. Despite all the bells and whistles of the Google case, I will argue that, from an economic point of view, the contracts that Google signed are just trade promotions. No more, no less. And trade promotions are well-established as part of a competitive process that ultimately helps consumers. 

However, it is theoretically possible for such trade promotions to hurt consumers, so it is theoretically possible that Google’s contracts do. Ultimately, though, that theoretical possibility of consumer harm does not seem plausible to me in this case.

Default Status

There are two reasons that Google paying Apple to be its default search engine is similar to a trade promotion. First, the deal brings awareness to the product, which nudges certain consumers/users to choose the product when they would not otherwise do so. Second, the deal does not prevent consumers from choosing the other product.

In the case of retail trade promotions, a promotional space given to Coca-Cola makes it marginally easier for consumers to pick Coke, and therefore some consumers will switch from Pepsi to Coke. But it does not reduce any consumer’s choice. The store will still have both items.

This is the same for a default search engine. The marginal searchers, who do not have a strong preference for either search engine, will stick with the default. But anyone can still install a new search engine, install a new browser, etc. It takes a few clicks, just as it takes a few steps to walk down the aisle to get the Pepsi; it is still an available choice.

If we were to stop the analysis there, we could conclude that consumers are worse off (if just a tiny bit), since some customers will have to change the default app. But we also need to remember that this contract is part of a more general competitive process. The retail stores are competing with one another, as are smartphone manufacturers.

Despite popular claims to the contrary, Apple cannot charge anything it wants for its phone. It is competing with Samsung, etc. Therefore, Apple has to pass through some of Google’s payments to customers in order to compete with Samsung. Prices are lower because of this payment. As I phrased it elsewhere, Google is effectively subsidizing the iPhone. This cross-subsidization is a part of the competitive process that ultimately benefits consumers through lower prices.

These contracts lower consumer prices, even if we assume that Apple has market power. Those who recall their Econ 101 know that a monopolist chooses a quantity where marginal revenue equals marginal cost. With a payment from Google, the marginal cost of producing a phone is lower, so Apple will increase the quantity and lower the price. This is shown below:
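A minimal numerical sketch (with an assumed linear demand curve and made-up numbers) makes the same point: setting marginal revenue equal to marginal cost, a lower marginal cost yields a higher quantity and a lower price.

```python
# With inverse demand P = a - b*Q and constant marginal cost c, MR = a - 2bQ = c gives
# Q* = (a - c) / (2b) and P* = (a + c) / 2. Numbers are purely illustrative.

def monopoly_outcome(a=100.0, b=1.0, c=40.0):
    q = (a - c) / (2 * b)   # quantity where marginal revenue equals marginal cost
    p = a - b * q           # price read off the demand curve
    return q, p

q0, p0 = monopoly_outcome(c=40.0)   # no payment from Google
q1, p1 = monopoly_outcome(c=30.0)   # the payment effectively lowers marginal cost by 10
print(f"without payment: Q = {q0:.0f}, P = {p0:.0f}")   # Q = 30, P = 70
print(f"with payment:    Q = {q1:.0f}, P = {p1:.0f}")   # Q = 35, P = 65
```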

One of the surprising things about markets is that buyers’ and sellers’ incentives can be aligned, even though it seems like they must be adversarial. Companies can indirectly bargain for their consumers. Commenting on Standard Fashion Co. v. Magrane-Houston Co., where a retail store contracted to only carry Standard’s products, Robert Bork (1978, pp. 306–7) summarized this idea as follows:

The store’s decision, made entirely in its own interest, necessarily reflects the balance of competing considerations that determine consumer welfare. Put the matter another way. If no manufacturer used exclusive dealing contracts, and if a local retail monopolist decided unilaterally to carry only Standard’s patterns because the loss in product variety was more than made up in the cost saving, we would recognize that decision was in the consumer interest. We do not want a variety that costs more than it is worth … If Standard finds it worthwhile to purchase exclusivity … the reason is not the barring of entry, but some more sensible goal, such as obtaining the special selling effort of the outlet.

How Trade Promotions Could Harm Customers

Since Bork’s writing, many theoretical papers have shown exceptions to his logic. There are times when retailers’ incentives are not aligned with those of their customers. And we need to take those possibilities seriously.

The most common way to show the harm of these deals (or, more commonly, of exclusivity deals) is to assume:

  1. There are large, fixed costs so that a firm must acquire a sufficient number of customers in order to enter the market; and
  2. An incumbent can lock in enough customers to prevent the entrant from reaching an efficient size.

Consumers can be locked in because there is some fixed cost of changing suppliers or because of coordination problems. If that’s true, customers can be made worse off, on net, because the Google contracts reduce consumer choice.

To understand the logic, let’s simplify the model to just search engines and searchers. Suppose there are two search engines (Google and Bing) and 10 searchers. However, to operate profitably, each search engine needs at least three searchers. If Google can entice eight searchers to use its product, Bing cannot operate profitably, even if Bing provides a better product. This holds even if everyone knows Bing would be a better product. The consumers are stuck in a coordination failure.
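A minimal sketch of the arithmetic behind that example (using the toy numbers above) shows the lock-in threshold Google would need to hit:

```python
# With 10 searchers and a 3-searcher viability threshold (numbers from the example above),
# how many searchers must Google tie up before Bing cannot operate profitably?
TOTAL_SEARCHERS = 10
MIN_VIABLE = 3   # each engine needs at least this many searchers to operate profitably

# Bing is shut out once fewer than MIN_VIABLE searchers remain available to it.
lock_in_needed = TOTAL_SEARCHERS - MIN_VIABLE + 1
print(lock_in_needed)   # 8: locking in eight searchers leaves Bing only two, below viability
```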

We should be skeptical of coordination failure models of inefficient outcomes. The problem with any story of coordination failures is that it is highly sensitive to the exact timing of the model. If Bing can preempt Google and offer customers an even better deal (the new entrant is better by assumption), then the coordination failure does not occur.

The most common argument for why Bing could not execute a similar contract is that the new entrant does not have the capital to pay upfront for these contracts, since it will only make money from its higher-quality search engine down the road. That makes sense until you remember that we are talking about Microsoft. I’m skeptical that capital is the real constraint. It seems much more likely that Google just has a more popular search engine.

The other problem with coordination failure arguments is that they are almost non-falsifiable. There is no way to tell, in the model, whether Google is used because of a coordination failure or whether it is used because it is a better product. If Google is a better product, then the outcome is efficient. The two outcomes are “observationally equivalent.” Compare this to the standard theory of monopoly, where we can (in principle) establish an inefficiency if the price is greater than marginal cost. While it is difficult to measure marginal cost, it can be done.

There is a general economic idea in these models that we need to pay attention to. If Google takes an action that prevents Bing from reaching efficient size, that may be an externality, sometimes called a network effect, and so that action may hurt consumer welfare.

I’m not sure how seriously to take these network effects. If more searchers allow Bing to make a better product, then literally any action (competitive or not) by Google is an externality. Making a better product that takes away consumers from Bing lowers Bing’s quality. That is, strictly speaking, an externality. Surely, that is not worthy of antitrust scrutiny simply because we find an externality.

And Bing also “takes away” searchers from Google, thus lowering Google’s possible quality. With network effects, bigger is better and it may be efficient to have only one firm. Surely, that’s not an argument we want to put forward as a serious antitrust analysis.

Put more generally, it is not enough to scream “NETWORK EFFECT!” and then have the antitrust authority come in, lawsuits-a-blazing. Well, it shouldn’t be enough.

For me to take the network-effect argument seriously from an economic point of view (as opposed to a legal one), I would need to see a real restriction on consumer choice, not just an externality. One needs to argue that:

  1. No competitor can cover their fixed costs to make a reasonable search engine; and
  2. These contracts are what prevent the competing search engines from reaching an efficient size.

That’s the challenge I would like to put forward to supporters of the lawsuit. I’m skeptical.