
[The following is adapted from a piece in the Economic Forces newsletter, which you can subscribe to on Substack.]

Everyone is worried about growing concentration in U.S. markets. President Joe Biden’s July 2021 executive order on competition begins with the assertion that “excessive market concentration threatens basic economic liberties, democratic accountability, and the welfare of workers, farmers, small businesses, startups, and consumers.” No word on the threat of concentration to baby puppies, but the takeaway is clear. Concentration is everywhere, and it’s bad.

On the academic side, Ufuk Akcigit and Sina Ates have an interesting paper on “ten facts”—worrisome facts, in my reading—about business dynamism. Fact No. 1: “Market concentration has risen.” Can’t get higher than No. 1, last time I checked.

Unlike most people commenting on concentration, I don’t see any reason to treat high or rising concentration as a bad thing in itself (although it may be a sign of problems). One key takeaway from industrial organization is that high concentration tells us nothing about the level of competition and so has no direct normative implication. I bring this up all the time (see 1, 2, 3, and 4).

So without worrying about whether rising concentration is a good or bad thing, this post asks, “is rising concentration a thing?” Is there any there there? Where is it rising? For what measures? Just the facts, ma’am.

How to Measure Concentration

I will focus here primarily on product-market concentration and save labor-market concentration for a later post. The following is a brief literature review. I do not cover every paper. If I missed an important one, tell me in the comments.

There are two steps to calculating concentration. First, define the market. In empirical work, a market usually includes the product sold or the input bought (e.g., apples) and a relevant geographic region (United States). With those two bits of information decided, we have a “market” (apples sold in the United States).

Once we have defined the relevant market, we need a measure of concentration within it. The most straightforward measure is the concentration ratio of some number of firms: “CR4” refers to the percentage of total sales in the market that goes to the four largest firms. One problem with this measure is that CR4 ignores everything about the fifth-largest and smaller firms.

The other option used to quantify concentration is called the Herfindahl-Hirschman index (HHI), which is a number between 0 and 10,000 (or 0 and 1, if it is normalized), with 10,000 meaning all of the sales go to one firm and 0 being the limit as many firms each have smaller and smaller shares. The benefit of the HHI is that it uses information on the whole distribution of firms, not just the top few.[1]
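
To make the two measures concrete, here is a minimal Python sketch using made-up market shares (not data from any of the papers discussed below):

```python
# Toy example with hypothetical market shares; shares sum to 1.
def cr_n(shares, n=4):
    """Concentration ratio: percent of sales going to the n largest firms."""
    return 100 * sum(sorted(shares, reverse=True)[:n])

def hhi(shares):
    """Herfindahl-Hirschman index: sum of squared percentage shares (0-10,000 scale)."""
    return sum((100 * s) ** 2 for s in shares)

# Four sizable firms plus six small ones.
shares = [0.30, 0.20, 0.15, 0.10] + [0.25 / 6] * 6

print(f"CR4 = {cr_n(shares):.0f}%")  # 75%
print(f"HHI = {hhi(shares):.0f}")    # about 1,729
# If two of the six small firms merge, CR4 does not move (the top four are the
# same firms), but the HHI rises -- it uses the whole size distribution.
```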

The Biggest Companies

With those preliminaries out of the way, let’s start with concentration among the biggest firms over the longest time-period and work our way to more granular data.

When people think of “corporate concentration,” they think of the giant companies like Standard Oil, Ford, Walmart, and Google. People maybe even picture a guy with a monocle, that sort of thing.

How much of total U.S. sales go to the biggest firms? How has that changed over time? These questions are the focus of Spencer Y. Kwon, Yueran Ma, and Kaspar Zimmermann’s (2022) “100 Years of Rising Corporate Concentration.”

Spoiler alert: they find rising corporate concentration. But what does that mean?

They look at the share of assets and sales concentrated among the largest 1% and 0.1% of businesses. For sales, due to data limitations, they need to use net income (excluding firms with negative net income) for the first half of the sample and receipts (sales) for the second half.

In 1920, the top 1% of firms had about 60% of total sales. Now, that number is above 80%. For the top 0.1%, the number rose from about 35% to 65%. Asset concentration is even more striking, rising to almost 100% for the top 1% of firms.

Kwon, Ma, and Zimmermann (2022)

Is this just mechanical, given the definitions? That was my first concern. Suppose a bunch of small firms enter that have no effect on the economy. Everyone starts a Substack that makes no money. 🤔 That mechanically bumps firms sitting just below the top-1% cutoff into the top 1% and raises the measured share. The authors have thought about this for longer than my two minutes of reading, so they ran a simple check.

The simple check is to limit the economy to just the top 10% of firms and ask what share goes to the top 1%. In that world, when small firms enter, there is still a bump into the top 1%, but there is also a bump into the top 10%. Both the numerator and denominator of the ratio are mechanically increasing. That doesn’t perfectly solve the issue, since the firm bumped into the top 1% is, by definition, bigger than the firm bumped into the top 10%, but it’s a quick comparison. Still, we see a similar rise in the top 1%.
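
A quick toy simulation (invented firm sizes, nothing to do with the paper’s data) shows the mechanical effect at work:

```python
import numpy as np

rng = np.random.default_rng(0)

def top_share(sales, pct=0.01):
    """Share of total sales going to the top pct of firms."""
    cutoff = np.quantile(sales, 1 - pct)
    return sales[sales >= cutoff].sum() / sales.sum()

incumbents = rng.pareto(1.5, size=100_000) + 1   # skewed firm-size distribution
entrants = np.full(50_000, 0.01)                 # a wave of tiny new firms

before = top_share(incumbents)
after = top_share(np.concatenate([incumbents, entrants]))
print(f"top-1% sales share: {before:.1%} -> {after:.1%}")
# The share rises even though the entrants add almost nothing to total sales:
# "top 1%" is now 1,500 firms instead of 1,000, so firms that sat just below
# the old cutoff get counted in.
```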

Big companies are getting bigger, even relatively.

I’m not sure how much weight to put on this paper for thinking about concentration trends. It’s an interesting paper, and that’s why I started with it. But I’m very hesitant to think of “all goods and services in the United States” as a relevant market for any policy question, especially antitrust-type questions, which is where we see the most talk about concentration. But if you’re interested in corporate concentration influencing politics, these numbers may be super relevant.

At the industry level, which is closer to an antitrust market but still not one, they find similar trends. The paper’s website (yes, the paper has a website. Your papers don’t?) has a simple display of the industry-level trends. They match the aggregate change, but the timing differs.

Industry-Level Concentration Trends, Public Firms

Moving down from big to small, we can start asking about publicly traded firms. These tend to be larger firms, but the category doesn’t capture all firms and is biased, as I’ve pointed out before.

Grullon, Larkin, and Michaely (2019) look at the average HHI at the 3-digit NAICS level (for example, oil and gas is “a market”). Below is the plot of the (sales-weighted) average HHI for publicly traded firms. It dropped in the 80s and early 90s, rose rapidly in the late 90s and early 2000s, and has slowly risen since. I’d say “concentration is rising” is the takeaway.

Average publicly traded HHI (3-digit NAICS) from Grullon, Larkin, and Michaely (2019)

The average hides how the distribution has changed. For antitrust, we may care whether a few industries have seen a large increase in concentration or all industries have seen a small increase.

The figure below plots the distribution of changes from 1997 to 2012. Many industries saw a large increase (>40%) in the HHI. We get a similar picture if we look at the share of sales going to the top four firms.

Distribution of changes in publicly traded HHI (3-digit NAICS) between 1997 and 2012, from Grullon, Larkin, and Michaely (2019)

One issue with NAICS is that it was designed to lump firms together from a producer’s perspective, not the consumer’s perspective. We will say more about that below.

Another issue with Compustat is that we only have industry at the firm level, not the establishment level. For example, every 3M office or plant gets labeled “Miscellaneous Manufactured Commodities,” which doesn’t separate the plants that make tape (like the one in my hometown) from those that make surgical gear.

But firms increasingly operate across a wider and wider range of businesses. That may not matter if you’re worried about political corruption from concentration. But if you’re thinking about markets, it seems problematic that, in Compustat, all of Amazon’s web-services (cloud servers) revenue gets lumped into NAICS 454 “Nonstore Retailers,” since that’s Amazon’s firm-level designation.

Hoberg and Phillips (2022) try to account for this increasing “scope” of businesses. They make an adjustment to allow a firm to exist in multiple industries. After making this correction, they find a falling average HHI.

Hoberg and Phillips (2021)

Industry-Level Concentration Trends, All Firms

Why stick to just publicly traded firms? That could be especially problematic since we know that the set of public firms differs from the set of private firms, and the differences have changed over time. Public firms compete with private firms and so are in the same market for many questions.

And we have data on public and private firms. Well, I don’t. I’m stuck with Compustat data. But big names have the data.

Autor, Dorn, Katz, Patterson, and Van Reenen (2020), in their famous “superstar firms” paper, have U.S. Census panel data at the firm and establishment level, covering six major sectors: manufacturing, retail trade, wholesale trade, services, utilities and transportation, and finance. They focus on the share of the top 4 (CR4) or the top 20 (CR20) firms, both in terms of sales and employment. Every series, besides employment in manufacturing, has seen an increase. In retail, there has been nearly a doubling of the sales share to the top 4 firms.

Autor, Dorn, Katz, Patterson, and Van Reenen (2020)

I guess that settles it. Three major papers show the same trend. It’s settled… If only economic trends were so simple.

What About Narrower Product Markets?

For antitrust cases, we define markets slightly differently. We don’t use NAICS codes, since they are designed to lump together similar producers, not similar products. We also don’t use the six “major industries” in the Census, since those are also too large to be meaningful for antitrust. Instead, antitrust markets are defined at a much narrower product level.

Luckily, Benkard, Yurukoglu, and Zhang (2021) construct concentration measures that are intended to capture consumption-based product markets. They have respondent-level data from the annual “Survey of the American Consumer” available from MRI Simmons, a market-research firm. The survey asks specific questions about which brands consumers buy.

They sort products into 457 product-market categories, separated into 29 locations. Product “markets” are then aggregated into “sectors.” Another interesting feature is that they know the ownership of different products, even if the brand names differ. Ownership is what matters for antitrust.

They find falling concentration at the market level (the narrowest product), both at the local and the national level. At the sector level (which aggregates markets), there is a slight increase.

Benkard, Yurukoglu, and Zhang (2021)

If you focus on industries with an HHI above 2,500, the level considered “highly concentrated” in the U.S. Horizontal Merger Guidelines, the share of “highly concentrated” markets fell from 48% in 1994 to 39% in 2019. I’m not sure how seriously to take this threshold, since the merger guidelines take a different approach to defining markets. Overall, the authors say, “we find no evidence that market power (sic) has been getting worse over time in any broad-based way.”

Is the United States a Market?

Markets are local

Benkard, Yurukoglu, and Zhang make an important point about location. In what situations is the United States the appropriate geographic region? The U.S. housing market is not a meaningful market. If my job and family are in Minnesota, I’m not considering buying a house in California. Those are different markets.

While the first few papers above focused on concentration in the United States as a whole or within U.S. companies, is that really the appropriate market? Maybe markets are much more localized, and the trends could be different.

Along comes Rossi-Hansberg, Sarte, and Trachter (2021) with a paper titled “Diverging Trends in National and Local Concentration.” In that paper, they argue that there are, you guessed it, diverging trends in national and local concentration. If we look at concentration at different geographic levels, we get a different story. Their main figure shows that, as we move to smaller geographic regions, concentration goes from rising over time to falling over time.

Figure 1 from Rossi-Hansberg, Sarte, and Trachter (2020)

How is it possible to get such a different story depending on the level of geographic aggregation?

Imagine a world where each town has its own department store. At the national level, concentration is low, but each town is highly concentrated. Now Walmart enters the picture and sets up shop in 10,000 towns. That increases national concentration while reducing local concentration, since each town goes from one store to two. That sort of dynamic seems plausible, and the authors spend a lot of time discussing Walmart.
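
A stylized version of that department-store story, again with made-up numbers, shows how a single event can push the two measures in opposite directions:

```python
def hhi(shares):
    """Herfindahl-Hirschman index: sum of squared percentage shares."""
    return sum((100 * s) ** 2 for s in shares)

n_towns = 100  # each town starts with one local department store

# Before the chain arrives: every town is a local monopoly, and nationally
# there are 100 equal-sized firms.
local_before = hhi([1.0])                        # 10,000 in each town
national_before = hhi([1 / n_towns] * n_towns)   # 100

# After the chain opens a store in every town and takes half of each town's
# sales: two equal stores locally; nationally, the chain has 50% of all sales
# and each incumbent keeps 0.5%.
local_after = hhi([0.5, 0.5])                            # 5,000
national_after = hhi([0.5] + [0.5 / n_towns] * n_towns)  # 2,525

print(local_before, "->", local_after)        # local concentration falls
print(national_before, "->", national_after)  # national concentration rises
```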

The paper was really important, because it pushed people to think more carefully about the type of concentration that they wanted to study. Just because data tends to be at the national level doesn’t mean that’s appropriate.

As with all these papers, however, the data source matters. There are a few concerns with the “National Establishment Time Series” (NETS) data used, as outlined in Crane and Decker (2020). Lots of the data is imputed, meaning it was originally missing and then filled in with statistical techniques. Almost every Walmart store has exactly the median sales-to-worker ratio, which suggests the data start with the number of workers and impute the sales data from there. That’s fine if you are interested in worker concentration, but this paper is about sales.

Instead of relying on NETS data, Smith and Ocampo (2022) have Census data on product-level revenue for all U.S. retail stores between 1992 and 2012. The downside is that it is only retail, but that’s an important sector and can help us make sense of the “Walmart enters town” concentration story.

Unlike Rossi-Hansberg, Sarte, and Trachter, Smith and Ocampo find rising concentration at both the local and national levels. Depending on the exact specification, they find changes in local concentration between -1.5 and 12.6 percentage points. Regardless, the -17 percentage points of Rossi-Hansberg, Sarte, and Trachter is well outside their range of estimates. To me, that suggests we should be careful with the “declining local concentration” story.

Smith and Ocampo (2022).

Ultimately, for local stories, data is the limitation. Take all of the data issues at the aggregate level and then try to drill down to the ZIP code or city level. It’s tough. The data generally just don’t exist, outside of Census data for a few sectors. The other option is to dig into a particular industry: Miller, Osborne, Sheu, and Sileo (2022) study the cement industry. 😱 (They find rising concentration.)

Markets are global

Instead of going more local, what if we go the other way? What makes markets unique in 2022 vs. 1980 is not that they are local but that they are global. Who cares if U.S. manufacturing is more concentrated if U.S. firms now compete in a global market?

The standard approach (used in basically all the papers above) computes market shares based on where the good was manufactured and doesn’t look at where the goods end up. (Compustat data is more of a mess because it includes lots of revenue from foreign establishments of U.S. firms.)

What happens when we look at where goods are ultimately sold? Again, that’s what is relevant for antitrust. Amiti and Heise (2021) augment the usual Census of Manufactures with transaction-level trade data from the Census Bureau’s Longitudinal Firm Trade Transactions Database (LFTTD); essentially, they see U.S. customs forms. Stripping out the sales that leave the country gives an “export-adjusted” measure of concentration.

They then do something similar for imports to come up with “market concentration”: their measure of concentration for all firms selling in the United States, irrespective of where the firm is located. That line is completely flat from 1992 to 2012.

Again, this is only manufacturing, but it is a striking example of how careful we need to be with our measures of concentration. This seems like a very important correction to concentration measures for most questions and for many industries. Tech is clearly a global market.

Conclusion

If I step back from all of these results, I think it is safe to say that concentration is rising by most measures. However, there are lots of caveats. In a sector like manufacturing, the relevant global market is not more concentrated. The Rossi-Hansberg, Sarte, and Trachter paper suggests, despite data issues, local concentration could be falling. Again, we need to be careful.

Alex Tabarrok says trust literatures, not papers. What does that imply here?

Take the last paper by Amiti and Heise. Yes, it covers only manufacturing, but in the one sector for which we have the import/export correction, the concentration results flip. That leaves me unsure of what is going on.


[1] There’s often a third step. If we are interested in what is going on in the overall economy, we need to somehow average across different markets. There is sometimes debate about how to average a bunch of HHIs. Let’s not worry too much about that for purposes of this post. Generally, if you’re looking at the concentration of sales, the industries are weighted by sales.

[This post is a contribution to Truth on the Market‘s continuing digital symposium “FTC Rulemaking on Unfair Methods of Competition.” You can find other posts at the symposium page here. Truth on the Market also invites academics, practitioners, and other antitrust/regulation commentators to send us 1,500-4,000 word responses for potential inclusion in the symposium.]

When Congress created the Federal Trade Commission (FTC) in 1914, it charged the agency with condemning “unfair methods of competition.” That’s not the language Congress used in writing America’s primary antitrust statute, the Sherman Act, which prohibits “monopoliz[ation]” and “restraint[s] of trade.”

Ever since, the question has lingered whether the FTC has the authority to go beyond the Sherman Act to condemn conduct that is unfair, but not necessarily monopolizing or trade-restraining.

According to a new policy statement, the FTC’s current leadership seems to think that the answer is “yes.” But the peculiar strand of progressivism that is currently running the agency lacks the intellectual foundation needed to tell us what conduct that is unfair but not monopolizing might actually be—and misses an opportunity to bring about an expansion of its powers that courts might actually accept.

Better to Keep the Rule of Reason but Eliminate the Monopoly-Power Requirement

The FTC’s policy statement reads like a thesaurus. What is unfair competition? Answer: conduct that is “coercive, exploitative, collusive, abusive, deceptive, predatory, or involve[s] the use of economic power of a similar nature.”

In other words: the FTC has no idea. Presumably, the agency thinks, like Justice Potter Stewart did of obscenity, it will know it when it sees it. Given the courts’ long history of humiliating the FTC by rejecting its cases, even when the agency is able to provide a highly developed account of why challenged conduct is bad for America, one shudders to think of the reception such an approach to fairness will receive.

The one really determinate proposal in the policy statement is to attack bad conduct regardless of whether the defendant has monopoly power. “Section 5 does not require a separate showing of market power or market definition when the evidence indicates that such conduct tends to negatively affect competitive conditions,” writes the FTC.

If only the agency had proposed this change alone, instead of cracking open the thesaurus to try to redefine bad conduct as well. Dropping the monopoly-power requirement would, by itself, greatly increase the amount of conduct subject to the FTC’s writ without forcing the agency to answer the metaphysical question: what is fair?

Under the present rule-of-reason approach, the courts let consumers answer the question of what constitutes bad conduct. Or to be precise, the courts assume that the only thing consumers care about is the product—its quality and price—and they try to guess whether consumers prefer the changes that the defendant’s conduct had on products in the market. If a court thinks consumers don’t prefer the changes, then the court condemns the conduct. But only if the defendant happens to have monopoly power in the market for those products.

Preserving this approach to identifying bad conduct would let the courts continue to maintain the pretense that they are doing the bidding of consumers—a role they will no doubt prefer to deciding what is fair as an absolute matter.

The FTC can safely discard the monopoly-power requirement without disturbing the current test for bad conduct because—as I argue in a working paper and as Timothy J. Brennan has long insisted—the monopoly-power requirement is directed at the wrong level of the supply chain: the market in which the defendant has harmed competition, rather than the input market through which the defendant causes the harm.

Power, not just in markets but in all social life, is rooted in one thing only: control over what others need. Harm to competition depends not on how much a defendant can produce relative to competitors but on whether a defendant controls an input that competitors need, but which the defendant can deny to them.

What others need, they do not buy from the market for which they produce. They buy what they need from other markets: input markets. It follows that the only power that should matter for antitrust—the only power that determines whether a firm can harm competition—is power over input markets, not power in the market in which competition is harmed.

And yet, apart from vertical-merger and contracting cases, where an inquiry into foreclosure of inputs still occasionally makes an appearance, antitrust today never requires systematic proof of power in input markets. The efforts of economists are wasted on the proof of power at the wrong level of the supply chain.

That represents an opportunity for the FTC, which can at one stroke greatly expand its authority to encompass conduct by firms having little power in the markets in which they harm competition.

To be sure, really getting the rule of reason right would require that proof of monopoly power continue to be required, only now at the input level instead of in the downstream market in which competition is harmed. But the courts have traditionally required only informal proof of power over inputs. The FTC could probably eliminate the economics-intensive process of formal proof of monopoly power entirely, instead of merely kicking it up one level in the supply chain.

That is surely an added plus for a current leadership so fearful of computation that it was at pains in the policy statement specifically to forswear “numerical” cost-benefit analysis.

Whatever Happened to No Fault?  

The FTC’s interest in expanding enforcement by throwing off the monopoly-power requirement is a marked departure from progressive antimonopolisms of the past. Mid-20th century radicals did not attack the monopoly-power side of antitrust’s two-part test, but rather the anticompetitive-conduct side.

For more than two decades, progressives mooted establishing a “no-fault” monopolization regime in which the only requirement for liability was size. By contrast, the present movement has sought to focus on conduct, rather than size, its own anti-concentration rhetoric notwithstanding.

Anti-Economism

That might, in part, be a result of the movement’s hostility toward economics. Proof of monopoly power is a famously economics-heavy undertaking.

The origin of contemporary antimonopolism is in activism by journalists against the social-media companies that are outcompeting newspapers for ad revenue, not in academia. As a result, the best traditions of the left, which involve intellectually outflanking opponents by showing how economic theory supports progressive positions, are missing here.

Contemporary antimonopolism has no “Capital” (Karl Marx), no “Progress and Poverty” (Henry George), and no “Freedom Through Law” (Robert Hale). The most recent installment in this tradition of left-wing intellectual accomplishment is “Capital in the 21st Century” (Thomas Piketty). Unfortunately for progressive antimonopolists, it states: “pure and perfect competition cannot alter . . . inequality[.]”

The contrast with the last revolution to sweep antitrust—that of the Chicago School—could not be starker. That movement was born in academia and its triumph was a triumph of ideas, however flawed they may in fact have been.

If one wishes to understand how Chicago School thinking put an end to the push for “no-fault” monopolization, one reads the Airlie House conference volume. In the conversations reproduced therein, one finds the no-faulters slowly being won over by the weight of data and theory deployed against them in support of size.

No equivalent watershed moment exists for contemporary antimonopolism, which bypassed academia (including the many progressive scholars doing excellent work therein) and went straight to the press and the agencies.

There is an ongoing debate about whether recent increases in markups result from monopolization or scarcity. It has not been resolved.

Rather than occupy economics, contemporary antimonopolists—and, perhaps, current FTC leadership—recoil from it. As one prominent antimonopolist lamented to a New York Times reporter, merger cases should be a matter of counting to four, and “[w]e don’t need economists to help us count to four.”

As the policy statement puts it: “The unfair methods of competition framework explicitly contemplates a variety of non-quantifiable harms, and justifications and purported benefits may be unquantifiable as well.”

Moralism

Contemporary antimonopolism’s focus on conduct might also be due to moralism—as reflected in the litany of synonyms for “bad” in the FTC’s policy statement.

For earlier progressives, antitrust was largely a means to an end—a way of ensuring that wages were high, consumer prices were low, and products were safe and of good quality. The fate of individual business entities within markets was of little concern, so long as these outcomes could be achieved.

What mattered were people. While contemporary antimonopolism cares about people, too, it differs from earlier antimonopolisms in that it personifies the firm.

If the firm dies, we are to be sad. If the firm is treated roughly by others, starved of resources or denied room to grow and reach its full potential, we are to be outraged, just as we would be if a child were starved. And, just as in the case of a child, we are to be outraged even if the firm would not have grown up to contribute anything of worth to society.

The irony, apparently lost on antimonopolists, is that the same personification of the firm as a rights-bearing agent, operating in other areas of law, undermines progressive policies.

The firm personified not only has a right to be treated gently by competing firms but also to be treated well by other people. But that means that people no longer come first relative to firms. When the Supreme Court holds that a business firm has a First Amendment right to influence politics, the Court takes personification of the firm to its logical extreme.

The alternative is not to make the market a morality play among firms, but to focus instead on market outcomes that matter to people—wages, prices, and product quality. We should not care whether a firm is “coerc[ed], exploit[ed], collu[ded against], abus[ed], dece[ived], predate[d], or [subjected to] economic power of a similar nature” except insofar as such treatment fails to serve people.

If one firm wishes to hire away the talent of another, for example, depriving the target of its lifeblood and killing it, so much the better if the result is better products, lower prices, or higher wages.

Antitrust can help maintain this focus on people only in part—by stopping unfair conduct that degrades products. I have argued elsewhere that the rest is for price regulation, taxation, and direct regulation to undertake.  

Can We Be Fairer and Still Give Product-Improving Conduct a Pass?

The intellectual deficit in contemporary antimonopolism is also evident in the care that the FTC’s policy statement puts into exempting behavior that creates superior products.

For when the major check on enforcement under today’s rule of reason (apart from the monopoly-power requirement) is precisely that conduct that improves products is exempt, one cannot expand the FTC’s powers to reach more bad conduct without condemning some product-improving conduct.

Under the rule of reason, bad conduct is a denial of inputs to a competitor that does not help consumers, meaning that the denial degrades the competitor’s products without improving the defendant’s products. Bad conduct is, in other words, unfairness that does not improve products.

If the FTC’s goal is to increase fairness relative to a regime that already pursues it, except when unfairness improves products, the additional fairness must come at the cost of product improvement.

The reference to superior products in the policy statement may be an attempt to compromise with the rule of reason. Unlike the elimination of the monopoly-power requirement, it is not a coherent compromise.

The FTC doesn’t need an economist to grasp this either.  

[This post is a contribution to Truth on the Market‘s continuing digital symposium “FTC Rulemaking on Unfair Methods of Competition.” You can find other posts at the symposium page here. Truth on the Market also invites academics, practitioners, and other antitrust/regulation commentators to send us 1,500-4,000 word responses for potential inclusion in the symposium.]

The current Federal Trade Commission (FTC) appears to have one overarching goal: find more ways to sue companies. The three Democratic commissioners (with the one Republican dissenting) issued a new policy statement earlier today that brings long-abandoned powers back into the FTC’s toolkit. Under Chair Lina Khan’s leadership, the FTC wants to bring challenges against “unfair methods of competition in or affecting commerce.” If that sounds extremely vague, that’s because it is. 

For the past few decades, antitrust violations have fallen into two categories. Actions like price-fixing with competitors are assumed to be illegal. Other actions are only considered illegal if they are proven to sufficiently restrain trade. This latter approach is called the “rule of reason.”

The FTC now wants to return to a time when it could also challenge conduct it viewed as unfair. The policy statement says the commission will go after behavior that is “coercive, exploitative, collusive, abusive, deceptive, predatory, or involve the use of economic power of a similar nature.” Who could argue against stopping coercive behavior? The problem is what it means in practice for actual antitrust cases. No one knows, not businesses and not the courts. It’s up to the whims of the FTC.

This is how antitrust used to be. In 1984, the 2nd U.S. Circuit Court of Appeals admonished the FTC and argued that “the Commission owes a duty to define the conditions under which conduct … would be unfair so that businesses will have an inkling as to what they can lawfully do rather than be left in a state of complete unpredictability.” Fairness, as put forward in the FTC Act, proved unworkable as an antitrust standard.

The FTC’s movement to clarify what “unfair” means led to a 2015 policy statement, which the new statement supersedes. In the 2015 statement, the Obama-era FTC, with bipartisan support, issued new rules laying out what would qualify as unfair methods of competition. In doing so, it rolled “unfair methods” under the rule of reason: the consequences of the conduct are what matter.

The 2015 statement is part of a longer-run trend of incorporating more economic analysis into antitrust. For the past few decades, courts have followed what in antitrust law is called the “consumer welfare standard.” The basic idea is that the goal of antitrust decisions should be to choose whatever outcome helps consumers, or, as economists would put it, whatever increases “consumer welfare.” Once those are the terms of the dispute, economic analysis can help the courts sort out whether an action is anticompetitive.

Beyond helping to settle particular cases, these features of modern antitrust—like the consumer welfare standard and the rule of reason—give market participants some sense of what is illegal and what is not. That’s necessary for the rule of law to prevail and for markets to function.

The new FTC rules explicitly reject any appeal to consumer benefits or welfare. Efficiency gains from the action—labeled “pecuniary gains” to suggest they are merely about money—do not count as a defense. The FTC makes explicit that parties cannot justify behavior based on efficiencies or cost-benefit analysis.

Instead, as Commissioner Christine S. Wilson points out in her dissent, “the Policy Statement adopts an ‘I know it when I see it’ approach premised on a list of nefarious-sounding adjectives.” If the FTC claims some conduct is unfair, why worry about studying the consequences of the conduct?

The policy statement is an attempt to roll back the clock on antitrust and return to the incoherence of 1950s and 1960s antitrust. The FTC seeks to protect other companies, not competition or consumers. As Khan herself said, “for a lot of businesses it comes down to whether they’re going to be able to sink or swim.”

But President Joe Biden’s antitrust enforcers have struggled to win traditional antitrust cases. On mergers, for example, they have challenged a smaller percentage of mergers and were less successful than the FTC and DOJ under President Donald Trump.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on Antitrust’s Uncertain Future: Visions of Competition in the New Regulatory Landscape. Information on the authors and the entire series of posts is available here.]

26 July, 10 A.F. (after fairness)

Dear Fellow Inquisitors,

It has been more than a decade now since the Federal Neutrality Commission, born of the ashes of the old world, ushered in the Age of Fairness. 

As you all know, the FNC was created during the Online Era, when the emergence of the largest companies in human history opened our eyes to the original sin of the competitive process: unfairness.

In the course of their evolution, digital platforms—the vanity fairs of the XXI century—had created entire ecosystems that offered integrated services that were so comfortable to use together that they led to a double-sin: sloth on the part of the consumers, and the unfair exclusion of competitors, who were barred from exercising their God-given right to participate in every market and every platform—and to prosper.

Digital stores selling their own branded goods, social-media apps with their own messaging services, search engines using search statistics to generate optimally efficient tools that surpassed the (legitimate) confines of their core functions and spilled over into the dominion of job search, flight booking, or housing apps … App stores were even using their own recognizable software to guarantee that the apps they distributed met the highest standards of security and trustworthiness!

While these things might not seem entirely unreasonable (especially to the heathens: selfish and individualistic consumers who care about nothing other than satisfying their base hedonistic desires), they in fact led to unspeakable evils that flouted the common good.

For example, they made it very, very uncomfortable for someone who wanted to start their own real-estate business to compete with such strong rival companies, who could leverage their superior efficiency in their core markets to become nigh-unbeatable in offering the cheapest, most relevant housing ads. To make matters worse, the gargantuan spending of the digital platforms on research and development built additional moats of quality and innovation around their products—both core and adjacent—that made them utterly impregnable to rivals specializing in just one area.

By constantly leveraging their core services to offer better and improved products on adjacent markets, digital platforms had made it unfairly difficult for other companies to join the race and deliver us to “perfect competition”—the euphoric state of blissful equilibrium foretold by the high priests of the only true belief system, Economics.

But not all was lost, and we hadn’t been forsaken. In those dark and faithless days, it was revealed to us by Sen. Amy Klobuchar—praise be her name—that the loathsome practice whereby online companies favored their own products and services over their rivals had a name, “self-preferencing,” and that it was a sin. And, most importantly, that it could be eradicated.

Fortunately, and thanks to the vigilance of the FNC, legal steps were swiftly taken to make the praxis of the Digital Economy more closely resemble its theory, as passed on to us by our forefathers.

And it worked, brothers and sisters! The prohibition of self-preferencing in digital markets made online products much more homogenous, thus validating one of the main assumptions of Economics. In addition, new competition-law Acts, with mechanisms such as forced data sharing, have eliminated all the messy experimentation that had hitherto led to varied (and risky) business models and diversified approaches. By turning competition into forced collaboration, we had finally made it stable, equal, and predictable; in one word: fair.

And what of the sinner in every one of us? Before the great revelation, blasphemous “consumers”—an anachronistic and reductive term for “socially responsible citizens”—were committing the sin of laziness: sloth. Now, choice is finally mandated, and nothing can be pre-selected or even integrated. No more arbitrary safe-browsing mechanisms, integrated malware detectors and spam filters. Where digital platforms experimented and imposed results on us, we are now coercively free to experiment by ourselves—and on ourselves! Online searches today lead to thousands of indistinguishable links hiding an infinity of surprises, requiring us to be more circumspect and informed than ever before. In one word: the prohibition of self-preferencing has improved the moral character of the human stock.

It is universally known that we owe the dawn of the Age of Fairness to the American Innovation and Choice Online Act, adopted by Congress in the year 2022; and the unwavering vigilance of the FNC. What is lesser known—and what I am here to instill in you today—is that that was just the beginning. The success of AICOA has opened our eyes to an even more ancient and perverse evil: self-preferencing in offline markets. It revealed to us that—for centuries, if not millennia—companies in various industries—from togas to wine, from cosmetics to insurance—had, in fact, always preferred their own initiatives over those of their rivals!

Just as the ancient chariot constructors designed chariots to suit the build of their own thoroughbred horses (thereby foreclosing horses raised by other breeders), the XX century car producers were using spare parts delivered by a supplier organizationally related to their company.

This realization has accelerated the birth pangs of the American Innovation and Choice Offline Act, which we are here to announce today. With it, the FNC will eliminate all remnants of unfair rivalry—online and offline—so that we, as one community of faith, can finally enjoy the true benefits of competition. But we must never forget that this tenuous equilibrium hangs by a thread, and that we owe it all to the indefatigable efforts of the FNC agents patrolling the streets, supermarkets, restaurants, gyms, factories, and just about everything else every single day.

Of course, there is still a lot to be done. But every long journey must begin somewhere.

Today, I want to warn you against sin and urge you to adopt the religion of fairness[1] before the day of judgment comes.

Amen.


[1] Or any other religion that condemns self-preferencing. I want to recommend them all equally.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on Antitrust’s Uncertain Future: Visions of Competition in the New Regulatory Landscape. Information on the authors and the entire series of posts is available here.]

Much ink has been spilled regarding the potential harm to the economy and to the rule of law that could stem from enactment of the primary federal antitrust legislative proposal, the American Innovation and Choice Online Act (AICOA) (see here). AICOA proponents, of course, would beg to differ, emphasizing the purported procompetitive benefits of limiting the business freedom of “Big Tech monopolists.”

There is, however, one inescapable reality—as night follows day, passage of AICOA would usher in an extended period of costly litigation over the meaning of a host of AICOA terms. As we will see, this would generate business uncertainty and dampen innovative conduct that might be covered by new AICOA statutory terms. 

The history of antitrust illustrates the difficulties inherent in clarifying the meaning of novel federal statutory language. It was not until 21 years after passage of the Sherman Antitrust Act that the Supreme Court held that the act’s Section 1 prohibition on contracts, combinations, and conspiracies “in restraint of trade” covered only unreasonable restraints of trade (see Standard Oil Co. of New Jersey v. United States, 221 U.S. 1 (1911)). Furthermore, courts took decades to clarify that certain types of restraints (for example, hardcore price fixing and horizontal market division) were inherently unreasonable and thus per se illegal, while others would be evaluated on a case-by-case basis under a “rule of reason.”

In addition, even far more specific terms related to exclusive dealing, tying, and price discrimination found within the Clayton Antitrust Act gave rise to uncertainty over the scope of their application. This uncertainty had to be sorted out through judicial case-law tests developed over many decades.

Even today, there is no simple, easily applicable test to determine whether conduct in the abstract constitutes illegal monopolization under Section 2 of the Sherman Act. Rather, whether Section 2 has been violated in any particular instance depends upon the application of economic analysis and certain case-law principles to matter-specific facts.

As is the case with current antitrust law, the precise meaning and scope of AICOA’s terms will have to be fleshed out over many years. Scholarly critiques of AICOA’s language underscore the seriousness of this problem.

In its April 2022 public comment on AICOA, the American Bar Association (ABA)  Antitrust Law Section explains in some detail the significant ambiguities inherent in specific AICOA language that the courts will have to address. These include “ambiguous terminology … regarding fairness, preferencing, materiality, and harm to competition on covered platforms”; and “specific language establishing affirmative defenses [that] creates significant uncertainty”. The ABA comment further stresses that AICOA’s failure to include harm to the competitive process as a prerequisite for a statutory violation departs from a broad-based consensus understanding within the antitrust community and could have the unintended consequence of disincentivizing efficient conduct. This departure would, of course, create additional interpretive difficulties for federal judges, further complicating the task of developing coherent case-law principles for the new statute.

Lending support to the ABA’s concerns, Northwestern University professor of economics Dan Spulber notes that AICOA “may have adverse effects on innovation and competition because of imprecise concepts and terminology.”

In a somewhat similar vein, Stanford Law School Professor (and former acting assistant attorney general for antitrust during the Clinton administration) Douglas Melamed complains that:

[AICOA] does not include the normal antitrust language (e.g., “competition in the market as a whole,” “market power”) that gives meaning to the idea of harm to competition, nor does it say that the imprecise language it does use is to be construed as that language is construed by the antitrust laws. … The bill could be very harmful if it is construed to require, not increased market power, but simply harm to rivals.

In sum, ambiguities inherent in AICOA’s new terminology will generate substantial uncertainty among affected businesses. This uncertainty will play out in the courts over a period of years. Moreover, the likelihood that judicial statutory constructions of AICOA language will support “efficiency-promoting” interpretations of behavior is diminished by the fact that AICOA’s structural scheme (which focuses on harm to rivals) does not harmonize with traditional antitrust concerns about promoting a vibrant competitive process.

Knowing this, the large high-tech firms covered by AICOA will become risk averse and less likely to innovate. (For example, they will be reluctant to improve algorithms in a manner that would increase efficiency and benefit consumers, but that might be seen as disadvantaging rivals.) As such, American innovation will slow, and consumers will suffer. (See here for an estimate of the enormous consumer-welfare gains generated by high tech platforms—gains of a type that AICOA’s enactment may be expected to jeopardize.) It is to be hoped that Congress will take note and consign AICOA to the rubbish heap of disastrous legislative policy proposals.

Slow wage growth and rising inequality over the past few decades have pushed economists more and more toward the study of monopsony power—particularly firms’ monopsony power over workers. Antitrust policy has taken notice. For example, when the Federal Trade Commission (FTC) and U.S. Justice Department (DOJ) initiated the process of updating their merger guidelines, their request for information included questions about how they should respond to monopsony concerns, as distinct from monopoly concerns. ​

From a pure economic-theory perspective, there is no important distinction between monopsony power and monopoly power. If Armen is trading his apples in exchange for Ben’s bananas, we can call Armen the seller of apples or the buyer of bananas. The labels (buyer and seller) are kind of arbitrary; as a matter of pure theory, they don’t matter. Monopsony and monopoly are just mirror images.

Some infer from this monopoly-monopsony symmetry, however, that extending antitrust to monopsony power will be straightforward. As a practical matter for antitrust enforcement, things are less clear. The moment we go slightly less abstract and use the basic models that economists actually work with, monopsony is not simply the mirror image of monopoly. The tools that antitrust economists use to identify market power differ in the two cases.

Monopsony Requires Studying Output

Suppose that the FTC and DOJ are considering a proposed merger. For simplicity, they know that the merger will generate either efficiency gains (and they want to allow it) or market power (and they want to stop it), but not both. The challenge is to look at readily available data like prices and quantities to decide which it is. (Let’s ignore the ideal case that involves being able to estimate elasticities of demand and supply.)

In a monopoly case, if there are efficiency gains from a merger, the standard model has a clear prediction: the quantity sold in the output market will increase. An economist at the FTC or DOJ with sufficient data will be able to see (or estimate) the efficiencies directly in the output market. Efficiency gains result either in greater output at lower unit cost or in product-quality improvements that increase consumer demand. Since the merger lowers prices for consumers, the agencies (assume they care about the consumer welfare standard) will let the merger go through, because consumers are better off.

In contrast, if the merger simply enhances monopoly power without efficiency gains, the quantity sold will decrease, either because the merging parties raise prices or because quality declines. Again, the empirical implication of the merger is seen directly in the market in question. Since the merger raises prices for consumers, the agencies (again, assuming they care about the consumer welfare standard) will not let the merger go through, because consumers are worse off. In both cases, you judge monopoly power by looking directly at the market that may or may not have monopoly power.

Unfortunately, the monopsony case is more complicated. Ultimately, we can be certain of the effects of monopsony only by looking at the output market, not the input market where the monopsony power is claimed.

To see why, consider again a merger that generates either efficiency gains or market (now monopsony) power. A merger that creates monopsony power will necessarily reduce the prices and quantity purchased of inputs like labor and materials. An overly eager FTC may see a lower quantity of input purchased and jump to the conclusion that the merger increased monopsony power. After all, monopsonies purchase fewer inputs than competitive firms.

Not so fast. Fewer input purchases may be the result of efficiency gains. For example, if the efficiency gain arises from the elimination of redundancies in a hospital merger, the merged hospital will buy fewer inputs: it will hire fewer technicians and purchase fewer medical supplies. This may even reduce the wages of technicians or the price of medical supplies, even if the newly merged hospitals are not exercising any market power to suppress wages.

The key point is that monopsony needs to be treated differently than monopoly. The antitrust agencies cannot simply look at the quantity of inputs purchased in the monopsony case as the flip side of the quantity sold in the monopoly case, because the efficiency-enhancing merger can look like the monopsony merger in terms of the level of inputs purchased.

How can the agencies differentiate efficiency-enhancing mergers from monopsony mergers? The easiest way may be for the agencies to look at the output market: an entirely different market than the one with the possibility of market power. Once we look at the output market, as we would do in a monopoly case, we have clear predictions. If the merger is efficiency-enhancing, there will be an increase in the output-market quantity. If the merger increases monopsony power, the firm perceives its marginal cost as higher than before the merger and will reduce output. 
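
To make that concrete, here is a minimal numerical sketch. The functional forms and parameters are mine and purely illustrative (they come from no paper or guideline): a single employer faces output demand P(Q) = 10 − Q, labor supply w(L) = 1 + L, and needs a workers per unit of output.

```python
# "Competitive" hiring: marginal revenue product (MRP) = supply wage w(L) = 1 + L.
# Monopsony hiring:     MRP = marginal labor expenditure d[w(L)*L]/dL = 1 + 2L.
# With Q = L / a, MRP(L) = d/dL[(10 - L/a)(L/a)] = (10 - 2L/a) / a.
# Both conditions are linear in L, so they solve in closed form.
def equilibrium(a, monopsony):
    if monopsony:
        L = (10 / a - 1) / (2 + 2 / a**2)
    else:
        L = (10 / a - 1) / (1 + 2 / a**2)
    Q = L / a
    return dict(labor=round(L, 2), wage=round(1 + L, 2),
                output=round(Q, 2), price=round(10 - Q, 2))

print("baseline:          ", equilibrium(a=1.0, monopsony=False))
print("monopsony merger:  ", equilibrium(a=1.0, monopsony=True))
print("efficiency merger: ", equilibrium(a=0.6, monopsony=False))
# Both mergers cut employment and wages relative to the baseline, but only the
# efficiency merger raises output and lowers the product price -- the input
# market looks similar in the two cases; the output market tells them apart.
```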

In short, as we look for how to apply antitrust to monopsony-power cases, the agencies and courts cannot look to the input market to differentiate them from efficiency-enhancing mergers; they must look at the output market. It is impossible to discuss monopsony power coherently without considering the output market.

In real-world cases, mergers will not necessarily be either strictly efficiency-enhancing or strictly monopsony-generating, but a blend of the two. Any rigorous consideration of merger effects must account for both and make some tradeoff between them. The question of how guidelines should address monopsony power is inextricably tied to the consideration of merger efficiencies, particularly given the point above that identifying and evaluating monopsony power will often depend on its effects in downstream markets.

This is just one complication that arises when we move from the purest of pure theory to slightly more applied models of monopoly and monopsony power. Geoffrey Manne, Dirk Auer, Eric Fruits, Lazar Radic, and I go through more of the complications in our comments submitted to the FTC and DOJ on updating the merger guidelines.

What Assumptions Make the Difference Between Monopoly and Monopsony?

Now that we have shown that monopsony and monopoly are different, how do we square this with the initial observation that it was arbitrary whether we say Armen has monopsony power over apples or monopoly power over bananas?

There are two differences between the standard monopoly and monopsony models. First, in the vast majority of models of monopsony power, the agent with the monopsony power is buying goods only to use them in production. It has a “derived demand” for some factors of production, and that demand ties its buying decision to an output market. For monopoly power, the firm sells the goods, makes some money, and that’s the end of the story.

The second difference is that the standard monopoly model looks at one output good at a time. The standard factor-demand model uses two inputs, which introduces a tradeoff between, say, capital and labor. We could force monopoly to look like monopsony by assuming the merging parties each produce two different outputs, apples and bananas. An efficiency gain could favor apple production and hurt banana consumers. While this sort of substitution among outputs is often realistic, it is not the standard economic way of modeling an output market.

Questions concerning the role that economic theory should play in policy decisions are nothing new. Milton Friedman famously drew a distinction between “positive” and “normative” economics, notably arguing that theoretical models were valuable despite their unrealistic assumptions. Kenneth Arrow and Gerard Debreu’s highly theoretical work on general equilibrium theory is widely acknowledged as one of the most important achievements of modern economics.

But for all their intellectual value and academic merit, the use of models to inform policy decisions is not uncontroversial. There is indeed a long and unfortunate history of influential economic models turning out to be poor depictions (and predictors) of real-world outcomes.

This raises a key question: should policymakers use economic models to inform their decisions and, if so, how? This post uses the economics of externalities to illustrate both the virtues and pitfalls of economic modeling. Throughout economic history, externalities have routinely been cited to support claims of market failure and calls for government intervention. However, as explained below, these fears have frequently failed to withstand empirical scrutiny.

Today, similar models are touted to support government intervention in digital industries. Externalities are notably said to prevent consumers from switching between platforms, allegedly leading to unassailable barriers to entry and deficient venture-capital investment. Unfortunately, as explained below, the models that underpin these fears are highly abstracted and far removed from underlying market realities.

Ultimately, this post argues that, while models provide a powerful way of thinking about the world, naïvely transposing them to real-world settings is misguided. This is not to say that models are useless—quite the contrary. Indeed, “falsified” models can shed powerful light on economic behavior that would otherwise prove hard to understand.

Bees

Fears surrounding economic externalities are as old as modern economics. For example, in the 1950s, economists routinely cited bee pollination as a source of externalities and, ultimately, market failure.

The basic argument was straightforward: Bees and orchards provide each other with positive externalities. Bees cross-pollinate flowers and orchards contain vast amounts of nectar upon which bees feed, thus improving honey yields. Accordingly, several famous economists argued that there was a market failure; bees fly where they please and farmers cannot prevent bees from feeding on their blossoming flowers—allegedly causing underinvestment in both. This led James Meade to conclude:

[T]he apple-farmer provides to the beekeeper some of his factors free of charge. The apple-farmer is paid less than the value of his marginal social net product, and the beekeeper receives more than the value of his marginal social net product.

A finding echoed by Francis Bator:

If, then, apple producers are unable to protect their equity in apple-nectar and markets do not impute to apple blossoms their correct shadow value, profit-maximizing decisions will fail correctly to allocate resources at the margin. There will be failure “by enforcement.” This is what I would call an ownership externality. It is essentially Meade’s “unpaid factor” case.

It took more than 20 years and painstaking research by Steven Cheung to conclusively debunk these assertions. So how did economic agents overcome this “insurmountable” market failure?

The answer, it turns out, was extremely simple. While bees do fly where they please, the relative placement of beehives and orchards has a tremendous impact on both fruit and honey yields. This is partly because bees have a very limited mean foraging range (roughly 2-3km). This left economic agents with ample scope to prevent free-riding.

Using these natural sources of excludability, they built a web of complex agreements that internalize the symbiotic virtues of beehives and fruit orchards. To cite Steven Cheung’s research:

Pollination contracts usually include stipulations regarding the number and strength of the colonies, the rental fee per hive, the time of delivery and removal of hives, the protection of bees from pesticide sprays, and the strategic placing of hives. Apiary lease contracts differ from pollination contracts in two essential aspects. One is, predictably, that the amount of apiary rent seldom depends on the number of colonies, since the farmer is interested only in obtaining the rent per apiary offered by the highest bidder. Second, the amount of apiary rent is not necessarily fixed. Paid mostly in honey, it may vary according to either the current honey yield or the honey yield of the preceding year.

But what of neighboring orchards? Wouldn’t these entail a more complex externality (i.e., could one orchard free-ride on agreements concluded between other orchards and neighboring apiaries)? Apparently not:

Acknowledging the complication, beekeepers and farmers are quick to point out that a social rule, or custom of the orchards, takes the place of explicit contracting: during the pollination period the owner of an orchard either keeps bees himself or hires as many hives per area as are employed in neighboring orchards of the same type. One failing to comply would be rated as a “bad neighbor,” it is said, and could expect a number of inconveniences imposed on him by other orchard owners. This customary matching of hive densities involves the exchange of gifts of the same kind, which apparently entails lower transaction costs than would be incurred under explicit contracting, where farmers would have to negotiate and make money payments to one another for the bee spillover.

Not only did the bee/orchard externality model fail, but it also failed to account for extremely obvious counter-evidence. Even a rapid flip through the Yellow Pages (or, today, a search on Google) would have revealed a vibrant market for bee pollination. In short, the bee externalities, at least as presented in economic textbooks, were merely an economic “fable.” Unfortunately, they would not be the last.

The Lighthouse

Lighthouses provide another cautionary tale. Indeed, Henry Sidgwick, A.C. Pigou, John Stuart Mill, and Paul Samuelson all cited the externalities involved in the provision of lighthouse services as a source of market failure.

Here, too, the problem was allegedly straightforward. A lighthouse cannot prevent ships from free-riding on its services when they sail by it (i.e., it is practically impossible to determine whether a passing ship has paid its dues, let alone to withhold the light from those that have not). Hence there can be no efficient market for light dues (lighthouses were seen as a “public good”). As Paul Samuelson famously put it:

Take our earlier case of a lighthouse to warn against rocks. Its beam helps everyone in sight. A businessman could not build it for a profit, since he cannot claim a price from each user. This certainly is the kind of activity that governments would naturally undertake.

He added that:

[E]ven if the operators were able—say, by radar reconnaissance—to claim a toll from every nearby user, that fact would not necessarily make it socially optimal for this service to be provided like a private good at a market-determined individual price. Why not? Because it costs society zero extra cost to let one extra ship use the service; hence any ships discouraged from those waters by the requirement to pay a positive price will represent a social economic loss—even if the price charged to all is no more than enough to pay the long-run expenses of the lighthouse.

More than a century after it was first mentioned in economics textbooks, Ronald Coase finally laid the lighthouse myth to rest—rebutting Samuelson’s second claim in the process.

What piece of evidence had eluded economists for all those years? As Coase observed, contemporary economists had somehow overlooked the fact that large parts of the British lighthouse system were privately operated, and had been for centuries:

[T]he right to operate a lighthouse and to levy tolls was granted to individuals by Acts of Parliament. The tolls were collected at the ports by agents (who might act for several lighthouses), who might be private individuals but were commonly customs officials. The toll varied with the lighthouse and ships paid a toll, varying with the size of the vessel, for each lighthouse passed. It was normally a rate per ton (say 1/4d or 1/2d) for each voyage. Later, books were published setting out the lighthouses passed on different voyages and the charges that would be made.

In other words, lighthouses used a simple physical feature to create “excludability” and prevent free-riding. The main reason ships require lighthouses is to avoid hitting rocks when they make their way to a port. By tying port fees and light dues, lighthouse owners—aided by modest government enforcement of property rights—could easily earn a return on their investments, thus disproving the lighthouse free-riding myth.

Ultimately, this meant that a large share of the British lighthouse system was privately operated throughout the 19th century, and this share would presumably have been more pronounced if government-run “Trinity House” lighthouses had not crowded out private investment:

The position in 1820 was that there were 24 lighthouses operated by Trinity House and 22 by private individuals or organizations. But many of the Trinity House lighthouses had not been built originally by them but had been acquired by purchase or as the result of the expiration of a lease.

Of course, this system was not perfect. Some ships (notably foreign ones that did not dock in the United Kingdom) might free-ride on the arrangement. It also entailed some level of market power: the ability to charge light dues meant that prices were higher than the “socially optimal” baseline of zero (the marginal cost of providing light is close to zero). That said, tying port fees and light dues might also have reduced double marginalization, to the benefit of sailors.

Samuelson was particularly wary of the market power that went hand in hand with the private provision of public goods, including lighthouses:

Being able to limit a public good’s consumption does not make it a true-blue private good. For what, after all, are the true marginal costs of having one extra family tune in on the program? They are literally zero. Why then prevent any family which would receive positive pleasure from tuning in on the program from doing so?

However, as Coase explained, light fees represented only a tiny fraction of a ship’s costs. In practice, they were thus unlikely to affect market output meaningfully:

[W]hat is the gain which Samuelson sees as coming from this change in the way in which the lighthouse service is financed? It is that some ships which are now discouraged from making a voyage to Britain because of the light dues would in future do so. As it happens, the form of the toll and the exemptions mean that for most ships the number of voyages will not be affected by the fact that light dues are paid. There may be some ships somewhere which are laid up or broken up because of the light dues, but the number cannot be great, if indeed there are any ships in this category.

Samuelson’s critique also falls prey to the Nirvana Fallacy pointed out by Harold Demsetz: markets might not be perfect, but neither is government intervention. Market power and imperfect appropriability are the two (paradoxical) pitfalls of the former; “white elephants,” underinvestment, and a lack of competition (and the information it generates) tend to stem from the latter.

Which of these solutions is superior in each case is an empirical question that early economists simply failed to consider—assuming instead that market failure was systematic in markets that exhibit prima facie externalities. In other words, models were taken as gospel, without any circumspection about their relevance to real-world settings.

The Tragedy of the Commons

Externalities were also said to undermine the efficient use of “common pool resources,” such as grazing lands, common irrigation systems, and fisheries—resources where one agent’s use diminishes that of others, and where exclusion is either difficult or impossible.

The most famous formulation of this problem is Garrett Hardin’s highly influential (more than 47,000 citations) “tragedy of the commons.” Hardin cited the example of multiple herdsmen occupying the same grazing ground:

The rational herdsman concludes that the only sensible course for him to pursue is to add another animal to his herd. And another; and another … But this is the conclusion reached by each and every rational herdsman sharing a commons. Therein is the tragedy. Each man is locked into a system that compels him to increase his herd without limit—in a world that is limited. Ruin is the destination toward which all men rush, each pursuing his own best interest in a society that believes in the freedom of the commons.

In more technical terms, each economic agent purportedly exerts an unpriced negative externality on the others, thus leading to the premature depletion of common pool resources. Hardin extended this reasoning to other problems, such as pollution and allegations of global overpopulation.
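
To make this incentive logic concrete, here is a minimal sketch in Python. The numbers are purely hypothetical choices of my own; nothing below comes from Hardin’s article. The point is simply that the herdsman who adds an animal keeps the entire gain, while the resulting damage to the pasture is spread across everyone.

```python
# Hypothetical numbers for illustration only; not drawn from Hardin's article.
N_HERDSMEN = 10            # herdsmen sharing the same pasture
GAIN_PER_ANIMAL = 10.0     # private benefit from grazing one more animal
DAMAGE_PER_ANIMAL = 30.0   # total damage that extra animal inflicts on the pasture

# The herdsman who adds the animal keeps the full gain but bears only 1/N of the damage.
private_payoff = GAIN_PER_ANIMAL - DAMAGE_PER_ANIMAL / N_HERDSMEN   # 10 - 3 = +7

# The group as a whole bears the full damage.
social_payoff = GAIN_PER_ANIMAL - DAMAGE_PER_ANIMAL                 # 10 - 30 = -20

print(f"Private payoff: {private_payoff:+.1f}, social payoff: {social_payoff:+.1f}")
# Every herdsman faces the same +7 private incentive, so each keeps adding animals
# even though each addition leaves the group 20 units worse off -- the unpriced
# negative externality that allegedly dooms the commons.
```

The private calculation is positive even though the social calculation is negative; that gap is the alleged tragedy. As discussed below, Ostrom’s work shows that real communities often change these payoffs through rules, norms, and monitoring.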

Although Hardin hardly documented any real-world occurrences of this so-called tragedy, his policy prescriptions were unequivocal:

The most important aspect of necessity that we must now recognize, is the necessity of abandoning the commons in breeding. No technical solution can rescue us from the misery of overpopulation. Freedom to breed will bring ruin to all.

As with many other theoretical externalities, empirical scrutiny revealed that these fears were greatly overblown. In her Nobel-winning work, Elinor Ostrom showed that economic agents often found ways to mitigate these potential externalities markedly. For example, mountain villages often implement rules and norms that limit the use of grazing grounds and wooded areas. Likewise, landowners across the world often set up “irrigation communities” that prevent agents from overusing water.

Along similar lines, Julian Morris and I conjecture that informal arrangements and reputational effects might mitigate opportunistic behavior in the standard essential patent industry.

These bottom-up solutions are certainly not perfect. Many commons institutions fail—for example, Elinor Ostrom documents several problematic fisheries, groundwater basins, and forests—although it is worth noting that government intervention was sometimes behind these failures. To cite but one example:

Several scholars have documented what occurred when the Government of Nepal passed the “Private Forest Nationalization Act” […]. Whereas the law was officially proclaimed to “protect, manage and conserve the forest for the benefit of the entire country”, it actually disrupted previously established communal control over the local forests. Messerschmidt (1986, p.458) reports what happened immediately after the law came into effect:

Nepalese villagers began freeriding — systematically overexploiting their forest resources on a large scale.

In any case, the question is not so much whether private institutions fail, but whether they do so more often than government intervention, be it regulation or property rights. In short, the “tragedy of the commons” is ultimately an empirical question: what works better in each case, government intervention, propertization, or emergent rules and norms?

More broadly, the key lesson is that it is wrong to blindly apply models while ignoring real-world outcomes. As Elinor Ostrom herself put it:

The intellectual trap in relying entirely on models to provide the foundation for policy analysis is that scholars then presume that they are omniscient observers able to comprehend the essentials of how complex, dynamic systems work by creating stylized descriptions of some aspects of those systems.

Dvorak Keyboards

In 1985, Paul David published an influential paper arguing that market failures undermined competition between the QWERTY and Dvorak keyboard layouts. This version of history then became a dominant narrative in the field of network economics, informing works by Joseph Farrell & Garth Saloner and by Jean Tirole.

The basic claim was that QWERTY users’ reluctance to switch toward the putatively superior Dvorak layout exerted a negative externality on the rest of the ecosystem (and a positive externality on other QWERTY users), thus preventing the adoption of a more efficient standard. As Paul David put it:

Although the initial lead acquired by QWERTY through its association with the Remington was quantitatively very slender, when magnified by expectations it may well have been quite sufficient to guarantee that the industry eventually would lock in to a de facto QWERTY standard. […]

Competition in the absence of perfect futures markets drove the industry prematurely into standardization on the wrong system — where decentralized decision making subsequently has sufficed to hold it.

Unfortunately, many of the above papers paid little to no attention to actual market conditions in the typewriter and keyboard layout industries. Years later, Stan Liebowitz and Stephen Margolis undertook a detailed analysis of the keyboard layout market. They almost entirely rejected the notion that QWERTY prevailed despite being the inferior standard:

Yet there are many aspects of the QWERTY-versus-Dvorak fable that do not survive scrutiny. First, the claim that Dvorak is a better keyboard is supported only by evidence that is both scant and suspect. Second, studies in the ergonomics literature find no significant advantage for Dvorak that can be deemed scientifically reliable. Third, the competition among producers of typewriters, out of which the standard emerged, was far more vigorous than is commonly reported. Fourth, there were far more typing contests than just the single Cincinnati contest. These contests provided ample opportunity to demonstrate the superiority of alternative keyboard arrangements. That QWERTY survived significant challenges early in the history of typewriting demonstrates that it is at least among the reasonably fit, even if not the fittest that can be imagined.

In short, there was little to no evidence supporting the view that QWERTY inefficiently prevailed because of network effects. The falsification of this narrative also weakens broader claims that network effects systematically lead to either excess momentum or excess inertia in standardization. Indeed, it is tempting to characterize all network industries with heavily skewed market shares as resulting from market failure. Yet the QWERTY/Dvorak story suggests that such a conclusion would be premature.

Killzones, Zoom, and TikTok

If you are still reading at this point, you might think that contemporary scholars would know better than to base calls for policy intervention on theoretical externalities. Alas, nothing could be further from the truth.

For instance, a recent paper by Sai Kamepalli, Raghuram Rajan and Luigi Zingales conjectures that the interplay between mergers and network externalities discourages the adoption of superior independent platforms:

If techies expect two platforms to merge, they will be reluctant to pay the switching costs and adopt the new platform early on, unless the new platform significantly outperforms the incumbent one. After all, they know that if the entering platform’s technology is a net improvement over the existing technology, it will be adopted by the incumbent after merger, with new features melded with old features so that the techies’ adjustment costs are minimized. Thus, the prospect of a merger will dissuade many techies from trying the new technology.

Although this key behavioral assumption drives the results of the theoretical model, the paper presents no evidence to support the contention that it occurs in real-world settings. Admittedly, the paper does present evidence of reduced venture capital investments after mergers involving large tech firms. But even on their own terms, this data simply does not support the authors’ behavioral assumption.
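
For concreteness, the assumption can be restated as a back-of-the-envelope expected-value comparison. The sketch below is my own illustration with hypothetical numbers; it is not the authors’ formal model, only the flavor of reasoning their premise attributes to users.

```python
# Hypothetical numbers for illustration; not the Kamepalli-Rajan-Zingales model itself.
P_MERGER = 0.8        # probability users assign to the entrant being acquired
BENEFIT_NEW = 5.0     # extra value of the entrant's technology over the incumbent
SWITCH_COST = 2.0     # one-off cost of adopting the new platform today

# If the merger happens, the incumbent is assumed to absorb the new features,
# so early switchers would have obtained the benefit anyway and only sunk the cost.
expected_gain_switch_now = (1 - P_MERGER) * BENEFIT_NEW - SWITCH_COST   # 0.2 * 5 - 2 = -1.0
expected_gain_wait = 0.0

print("Switch now" if expected_gain_switch_now > expected_gain_wait else "Wait")
# With a high perceived merger probability, waiting dominates unless the entrant
# "significantly outperforms" the incumbent -- the behavioral premise the paper
# assumes but, as noted above, does not document empirically.
```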

And this is no isolated example. Over the past couple of years, several scholars have called for more muscular antitrust intervention in networked industries. A common theme is that network externalities, switching costs, and data-related increasing returns to scale lead to inefficient consumer lock-in, thus raising barriers to entry for potential rivals (here, here, here).

But there are also countless counterexamples, where firms have easily overcome potential barriers to entry and network externalities, ultimately disrupting incumbents.

Zoom is one of the most salient instances. As I have written previously:

To get to where it is today, Zoom had to compete against long-established firms with vast client bases and far deeper pockets. These include the likes of Microsoft, Cisco, and Google. Further complicating matters, the video communications market exhibits some prima facie traits that are typically associated with the existence of network effects.

Along similar lines, Geoffrey Manne and Alec Stapp have put forward a multitude of other examples, including the demise of Yahoo, the disruption of early instant-messaging applications and websites, and MySpace’s rapid decline. In all these cases, outcomes do not match the predictions of theoretical models.

More recently, TikTok’s rapid rise offers perhaps the greatest example of a potentially superior social-networking platform taking significant market share away from incumbents. According to the Financial Times, TikTok’s video-sharing capabilities and its powerful algorithm are the most likely explanations for its success.

While these developments certainly do not disprove network effects theory, they eviscerate the common belief in antitrust circles that superior rivals are unable to overthrow incumbents in digital markets. Of course, this will not always be the case. As in the previous examples, the question is ultimately one of comparing institutions—i.e., do markets lead to more or fewer error costs than government intervention? Yet this question is systematically omitted from most policy discussions.

In Conclusion

My argument is not that models are without value. To the contrary, framing problems in economic terms—and simplifying them in ways that make them cognizable—enables scholars and policymakers to better understand where market failures might arise, and how these problems can be anticipated and solved by private actors. In other words, models alone cannot tell us that markets will fail, but they can direct inquiries and help us to understand why firms behave the way they do, and why markets (including digital ones) are organized in a given way.

In that respect, both the theoretical and empirical research cited throughout this post offer valuable insights for today’s policymakers.

For a start, as Ronald Coase argued in what is perhaps his most famous work, externalities (and market failure more generally) are a function of transaction costs. When these are low (relative to the value of a good), market failures are unlikely. This is perhaps clearest in the “Fable of the Bees” example. Given bees’ short foraging range, there were ultimately few real-world obstacles to writing contracts that internalized the mutual benefits of bees and orchards.

Perhaps more importantly, economic research sheds light on behavior that might otherwise be seen as anticompetitive. The rules and norms that bind farming/beekeeping communities, as well as users of common pool resources, could easily be analyzed as a cartel by naïve antitrust authorities. Yet externality theory suggests they play a key role in preventing market failure.

Along similar lines, mergers and acquisitions (as well as vertical integration, more generally) can reduce opportunism and other externalities that might otherwise undermine collaboration between firms (here, here and here). And much the same is true of certain types of unilateral behavior. Tying video games to consoles (and pricing the console below cost) can help entrants overcome network externalities that might otherwise shield incumbents. Likewise, Google tying its proprietary apps to the open source Android operating system arguably enabled it to earn a return on its investments, thus overcoming the externality problem that plagues open source software.

All of this points to an issue that deserves far more attention than it currently receives in policy circles: authorities around the world are seeking to regulate the tech space. Draft legislation has notably been tabled in the United States, European Union and the United Kingdom. These draft bills would all make it harder for large tech firms to implement various economic hierarchies, including mergers and certain contractual arrangements.

This is highly paradoxical. If digital markets are indeed plagued by network externalities and high transaction costs, as critics allege, then preventing firms from adopting complex hierarchies—which have traditionally been seen as a way to solve externalities—is just as likely to exacerbate problems. In other words, like the economists of old cited above, today’s policymakers appear to be focusing too heavily on simple models that predict market failure, and far too little on the mechanisms that firms have put in place to thrive within this complex environment.

The bigger picture is that far more circumspection is required when using theoretical models in real-world policy settings. Indeed, as Harold Demsetz famously put it, the purpose of normative economics is not so much to identify market failures, but to help policymakers determine which of several alternative institutions will deliver the best outcomes for consumers:

This nirvana approach differs considerably from a comparative institution approach in which the relevant choice is between alternative real institutional arrangements. In practice, those who adopt the nirvana viewpoint seek to discover discrepancies between the ideal and the real and if discrepancies are found, they deduce that the real is inefficient. Users of the comparative institution approach attempt to assess which alternative real institutional arrangement seems best able to cope with the economic problem […].

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Jerry Ellig was a research professor at The George Washington University Regulatory Studies Center and served as chief economist at the Federal Communications Commission from 2017 to 2018. Tragically, he passed away Jan. 20, 2021. TOTM is honored to publish his contribution to this symposium.]

One significant aspect of Chairman Ajit Pai’s legacy is not a policy change, but an organizational one: establishment of the Federal Communications Commission’s (FCC’s) Office of Economics and Analytics (OEA) in 2018.

Prior to OEA, most of the FCC’s economists were assigned to the various policy bureaus, such as Wireless, Wireline Competition, Public Safety, Media, and International. Each of these bureaus had its own chief economist, but the rank-and-file economists reported to the managers who ran the bureaus – usually attorneys who also developed policy and wrote regulations. In the words of former FCC Chief Economist Thomas Hazlett, the FCC had “no location anywhere in the organizational structure devoted primarily to economic analysis.”

Establishment of OEA involved four significant changes. First, most of the FCC’s economists (along with data strategists and auction specialists) are now grouped together into an organization separate from the policy bureaus, and they are managed by other economists. Second, the FCC rules establishing the new office tasked OEA with reviewing every rulemaking, reviewing every other item with economic content that comes before the commission for a vote, and preparing a full benefit-cost analysis for any regulation with $100 million or more in annual economic impact. Third, a joint memo from the FCC’s Office of General Counsel and OEA specifies that economists are to be involved in the early stages of all rulemakings. Fourth, the memo also indicates that FCC regulatory analysis should follow the principles articulated in Executive Order 12866 and Office of Management and Budget Circular A-4 (while specifying that the FCC, as an independent agency, is not bound by the executive order).

While this structure for managing economists was new for the FCC, it is hardly uncommon in federal regulatory agencies. Numerous independent agencies that deal with economic regulation house their economists in a separate bureau or office, including the Securities and Exchange Commission, the Commodity Futures Trading Commission, the Surface Transportation Board, the Office of Comptroller of the Currency, and the Federal Trade Commission. The SEC displays even more parallels with the FCC. A guidance memo adopted in 2012 by the SEC’s Office of General Counsel and Division of Risk, Strategy and Financial Innovation (the name of the division where economists and other analysts were located) specifies that economists are to be involved in the early stages of all rulemakings and articulates best analytical practices based on Executive Order 12866 and Circular A-4.

A separate economics office offers several advantages over the FCC’s prior approach. It gives the economists greater freedom to offer frank advice, enables them to conduct higher-quality analysis more consistent with the norms of their profession, and may ultimately make it easier to uphold FCC rules that are challenged in court.

Independence.  When I served as chief economist at the FCC in 2017-2018, I gathered from conversations that the most common practice in the past was for attorneys who wrote rules to turn to economists for supporting analysis after key decisions had already been made. This was not always the process, but it often occurred. The internal working group of senior FCC career staff who drafted the plan for OEA reached similar conclusions. After the establishment of OEA, an FCC economist I interviewed noted how his role had changed: “My job used to be to support the policy decisions made in the chairman’s office. Now I’m much freer to speak my own mind.”

Ensuring economists’ independence is not a problem unique to the FCC. In a 2017 study, Stuart Shapiro found that most of the high-level economists he interviewed who worked on regulatory impact analyses in federal agencies perceive that economists can be more objective if they are located outside the program office that develops the regulations they are analyzing. As one put it, “It’s very difficult to conduct a BCA [benefit-cost analysis] if our boss wrote what you are analyzing.” Interviews with senior economists and non-economists who work on regulation that I conducted for an Administrative Conference of the United States project in 2019 revealed similar conclusions across federal agencies. Economists located in organizations separate from the program office said that structure gave them greater independence and ability to develop better analytical methodologies. On the other hand, economists located in program offices said they experienced or knew of instances where they were pressured or told to produce an analysis with the results decision-makers wanted.

The FTC provides an informative case study. From 1955 to 1961, many of the FTC’s economists reported to the attorneys who conducted antitrust cases; in 1961, they were moved into a separate Bureau of Economics. Fritz Mueller, the FTC chief economist responsible for moving the antitrust economists back into the Bureau of Economics, noted that they were originally placed under the antitrust attorneys because the attorneys wanted more control over the economic analysis. A 2015 evaluation by the FTC’s Inspector General concluded that the Bureau of Economics’ existence as a separate organization improves its ability to offer “unbiased and sound economic analysis to support decision-making.”

Higher-quality analysis. An issue closely related to economists’ independence is the quality of the economic analysis. Executive branch regulatory economists interviewed by Richard Williams expressed concern that the economic analysis was more likely to be changed to support decisions when the economists are located in the program office that writes the regulations. More generally, a study that Catherine Konieczny and I conducted while we were at the FCC found that executive branch agencies are more likely to produce higher-quality regulatory impact analyses if the economists responsible for the analysis are in an independent economics office rather than the program office.

Upholding regulations in court. In Michigan v. EPA, the Supreme Court held that it is unreasonable for agencies to refuse to consider regulatory costs if the authorizing statute does not prohibit them from doing so. This precedent will likely increase judicial expectations that agencies will consider economic issues when they issue regulations. The FCC’s OGC-OEA memo cites examples of cases where the quality of the FCC’s economic analysis either helped or harmed the commission’s ability to survive legal challenge under the Administrative Procedure Act’s “arbitrary and capricious” standard. More systematically, a recent Regulatory Studies Center working paper finds that a higher-quality economic analysis accompanying a regulation reduces the likelihood that courts will strike down the regulation, provided that the agency explains how it used the analysis in decisions.

Two potential disadvantages of a separate economics office are that it may make the economists easier to ignore (what former FCC Chief Economist Tim Brennan calls the “Siberia effect”) and may lead the economists to produce research that is less relevant to the practical policy concerns of the policymaking bureaus. The FCC’s reorganization plan took these disadvantages seriously.

To ensure that the ultimate decision-makers—the commissioners—have access to the economists’ analysis and recommendations, the rules establishing the office give OEA explicit responsibility for reviewing all items with economic content that come before the commission. Each item is accompanied by a cover memo that indicates whether OEA believes there are any significant issues, and whether they have been dealt with adequately. To ensure that economists and policy bureaus work together from the outset of regulatory initiatives, the OGC-OEA memo instructs:

Bureaus and Offices should, to the extent practicable, coordinate with OEA in the early stages of all Commission-level and major Bureau-level proceedings that are likely to draw scrutiny due to their economic impact. Such coordination will help promote productive communication and avoid delays from the need to incorporate additional analysis or other content late in the drafting process. In the earliest stages of the rulemaking process, economists and related staff will work with programmatic staff to help frame key questions, which may include drafting options memos with the lead Bureau or Office.

While presiding over his final commission meeting on Jan. 13, Pai commented, “It’s second nature now for all of us to ask, ‘What do the economists think?’” The real test of this institutional innovation will be whether that practice continues under a new chair in the next administration.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.

Joshua D. Wright is university professor and executive director of the Global Antitrust Institute at George Mason University’s Scalia Law School. He served as a commissioner of the Federal Trade Commission from 2013 through 2015.]

Much of this symposium celebrates Ajit’s contributions as chairman of the Federal Communications Commission and his accomplishments and leadership in that role. And rightly so. But Commissioner Pai, not just Chairman Pai, should also be recognized.

I first met Ajit when we were both minority commissioners at our respective agencies: the FCC and Federal Trade Commission. Ajit had started several months before I was confirmed. I watched his performance in the minority with great admiration. He reached new heights when he shifted from minority commissioner to chairman, and the accolades he will receive for that work are quite appropriate. But I want to touch on his time as a minority commissioner at the FCC and how that should inform the retrospective of his tenure.

Let me not bury the lead: Ajit Pai has been, in my view, the most successful, impactful minority commissioner in the history of the modern regulatory state. And it is that success that has led him to become the most successful and impactful chairman, too.

I must admit all of this success makes me insanely jealous. My tenure as a minority commissioner ran in parallel with Ajit. We joked together about our fierce duel to be the reigning king of regulatory dissents. We worked together fighting against net neutrality. We compared notes on dissenting statements and opinions. I tried to win our friendly competition. I tried pretty hard. And I lost; worse than I care to admit. But we had fun. And I very much admired the combination of analytical rigor, clarity of exposition, and intellectual honesty in his work. Anyway, the jealousy would be all too much if he weren’t also a remarkable person and friend.

The life of a minority commissioner can be a frustrating one. Like Sisyphus, the minority commissioner often wakes up each day to roll the regulatory (well, in this case, deregulatory) boulder up the hill, only to watch it roll down. And then do it again. And again. At times, it is an exhausting series of jousting matches with the windmills of Washington bureaucracy. It is not often that a minority commissioner has as much success as Commissioner Pai did: dissenting opinions ultimately vindicated by judicial review; substantive victories on critical policy issues; paving the way for institutional and procedural reforms.

It is one thing to write a raging dissent about how the majority has lost all principles. Fire and brimstone come cheap when there aren’t too many consequences to what you have to say. Measure a man after he has been granted power and a chance to use it, and only then will you have a true test of character. Ajit passes that test like few in government ever have.

This is part of what makes Ajit Pai so impressive. I have seen his work firsthand. The multitude of successes Ajit achieved as Chairman Pai were predictable, precisely because Commissioner Pai told the world exactly where he stood on important telecommunications policy issues, the reasons why he stood there, and then, well, he did what he said he would. The Pai regime was much more like a Le’Veon Bell run, between the tackles, than a no-look pass from Patrick Mahomes to Tyreek Hill. Commissioner Pai shared his playbook with the world; he told us exactly where he was going to run the ball. And then Chairman Pai did exactly that. And neither bureaucratic red tape nor political pressure—or even physical threat—could stop him.

Here is a small sampling of his contributions, many of them building on groundwork he laid in the minority:

Focus on Economic Analysis

One of Chairman Pai’s most important contributions to the FCC is his work to systematically incorporate economic analysis into FCC decision-making. The triumph of this effort was establishing the Office of Economics and Analytics (OEA) in 2018. The OEA’s focus on conducting economic analyses of the costs, benefits, and economic impacts of the commission’s proposed rules will be a critical part of agency decision-making from here on out. This act alone would form a legacy on which any agency head could comfortably rest. The OEA’s work will shape the agency for decades and ensure that agency decisions are made with the oversight economics provides.

This is a hard thing to do; just hiring economists is not enough. Structure matters. How economists get information to decision-makers determines if it will be taken seriously. To this end, Ajit has taken all the lessons from what has made the economists at the FTC so successful—and the lessons from the structural failures at other agencies—and applied them at the FCC.

Structural independence looks like “involving economists on cross-functional teams at the outset and allowing the economics division to make its own, independent recommendations to decision-makers.”[1] And it is necessary for economics to be taken seriously within an agency structure. Ajit has assured that FCC decision-making will benefit from economic analysis for years to come.

Narrowing the Digital Divide

Chairman Pai made helping the disadvantaged get connected to the internet and narrowing the digital divide the top priorities during his tenure. And Commissioner Pai was fighting for this long before the pandemic started.

As businesses, schools, work, and even health care have moved online, the need to get Americans connected with high-speed broadband has never been greater. Under Pai’s leadership, the FCC has removed bureaucratic barriers[2] and provided billions in funding[3] to facilitate rural broadband buildout. We are talking about connections to some 700,000 rural homes and businesses in 45 states, many of whom are gaining access to high-speed internet for the first time.

Ajit has also made sure to keep an eye out for the little guy and for communities that have historically been left behind. Tribal communities,[4] particularly in the rural West, have been a keen focus of his, as he knows all too well the difficulties and increased costs associated with servicing those lands. He established programs to rebuild and expand networks in the Virgin Islands and Puerto Rico[5] in an effort to bring the islands’ residents to parity with citizens living on the mainland.

You need not take my word for it; he really does talk about this all the time. As he said in a speech at the National Tribal Broadband Summit: “Since my first day in this job, I’ve said that closing the digital divide was my top priority. And as this audience knows all too well, nowhere is that divide more pronounced than on Tribal lands.” That work is not done; it is beyond any one person. But Ajit should be recognized for his work bridging the divide and laying the foundation for future gains.

And again, this work started during his time as a minority commissioner. Before he was chairman, Pai proposed projects for rural broadband development; he frequently toured underserved states and communities; and he proposed legislation to extend the promise of the 21st century to economically depressed areas of the country. Looking at Chairman Pai is only half the picture.

Keeping Americans Connected

One would not think that the head of the Federal Communications Commission would be a leader on important health-care issues, but Ajit has made a real difference here too. One of his major initiatives has been the development of telemedicine solutions to expand access to care in critical communities.

Beyond encouraging buildout of networks in less-connected areas, Pai’s FCC has also worked to allocate funding for health-care providers and educational institutions who were navigating the transition to remote services. He ensured that health-care providers’ telecommunications and information services were funded. He worked with the U.S. Department of Education to direct funds for education stabilization and allowed schools to purchase additional bandwidth. And he granted temporary additional spectrum usage to broadband providers to meet the increased demand upon our nation’s networks. Oh, and his Keep Americans Connected Pledge gathered commitments from more than 800 companies to ensure that Americans would not lose their connectivity due to pandemic-related circumstances. As if the list were not long enough, Congress’ January coronavirus relief package will ensure that these and other programs, like Rip and Replace, will remain funded for the foreseeable future.

I might sound like I am beating a dead horse here, but the seeds of this, too, were laid in his work in the minority. Here he is describing his work in a 2015 interview, as a minority commissioner:

My own father is a physician in rural Kansas, and I remember him heading out in his car to visit the small towns that lay 40 miles or more from home. When he was there, he could provide care for people who would otherwise never see a specialist at all. I sometimes wonder, back in the 1970s and 1980s, how much easier it would have been on patients, and him, if broadband had been available so he could provide healthcare online.

Agency Transparency and Democratization

Many minority commissioners like to harp on agency transparency. Some take a different view when they are in charge. But Ajit made good on his complaints about agency transparency when he became Chairman Pai. He did this through circulating draft items well in advance of monthly open meetings, giving people the opportunity to know what the agency was voting on.

You used to need a direct connection with the FCC to even be aware of what orders were being discussed—the worst of the D.C. swamp—but now anyone can read about the working items, in clear language.

These moves toward a more transparent, accessible FCC dispel the impression that the agency is run by Washington insiders who are disconnected from the average person. The meetings may well be dry and technical—they really are—but Chairman Pai’s statements are not only good-natured and humorous, but informative and substantive. The public has been well-served by his efforts here.

Incentivizing Innovation and Next-Generation Technologies

Chairman Pai will be remembered for his encouragement of innovation. Under his chairmanship, the FCC discontinued rules that unnecessarily required carriers to maintain costly older, lower-speed networks and legacy voice services. It streamlined the discontinuance process for lower-speed services if the carrier is already providing higher-speed service or if no customers are using the service. It also okayed streamlined notice following force majeure events like hurricanes, encouraging investment in and deployment of newer, faster infrastructure and services after networks are destroyed. The FCC also approved requests by companies to provide high-speed broadband through non-geostationary orbit satellite constellations and created a streamlined licensing process for small satellites to encourage faster deployment.

This is what happens when you get a tech nerd at the head of an agency he loves and cares for. A serious commitment to good policy with an eye toward the future.

Restoring Internet Freedom

This is a pretty sensitive one for me. You hear less about it now, other than some murmurs from the Biden administration about changing it, but the debate over net neutrality got nasty and apocalyptic.

It was everywhere; people saying Chairman Pai would end the internet as we know it. The whole web blacked out for a day in protest. People mocked up memes showing a 25 cent-per-Google-search charge. And as a result of this over-the-top rhetoric, my friend, and his family, received death threats.

That is truly beyond the pale. One could not blame anyone for leaving public service in such an environment. I cannot begin to imagine what I would have done in Ajit’s place. But Ajit took the threats on his life with grace and dignity, never lost his sense of humor, and continued to serve the public dutifully with remarkable courage. I think that says a lot about him. And the American public is lucky to have benefited from his leadership.

Now, for the policy stuff. Though it should go without saying, the light-touch framework Chairman Pai returned us to—as opposed to the public utility one—will ensure that the United States maintains its leading position on technological innovation in 5G networks and services. The fact that we have endured COVID—and the massive strain on the internet it has caused—with little to no noticeable impact on internet services is all the evidence you need that he made the right choice. Ajit has rightfully earned the title of the “5G Chairman.”

Conclusion

I cannot give Ajit all the praise he truly deserves without sounding sycophantic, or bribed. There are any number of windows into his character, but one rises above the rest for me. And I wanted to take the extra time to thank Ajit for it.

Every year, without question, no matter what was going on—even as chairman—Ajit would come to my classes and talk to my students. At length. In detail. And about any subject they wished. He stayed until he answered all of their questions. If I hadn’t politely shoved him out of the class to let him go do his real job, I’m sure he would have stayed until the last student left. And if you know anything about how to judge a person’s character, that will tell you all you need to know.

Congratulations, Chairman Pai.


[1] Jerry Ellig & Catherine Konieczny, The Organization of Economists in Regulatory Agencies: Does Structure Matter?

[2] Rural Digital Opportunity Fund, Fed. Commc’ns Comm’n, https://www.fcc.gov/auction/904.

[3] Press Release, Connect America Fund Auction to Expand Broadband to Over 700,000 Rural Homes and Businesses: Auction Allocates $1.488 Billion to Close the Digital Divide, Fed. Commc’ns Comm’n, https://docs.fcc.gov/public/attachments/DOC-353840A1.pdf.

[4] Press Release, FCC Provides Relief for Carriers Serving Tribal Lands, Fed. Commc’ns Comm’n, https://www.fcc.gov/document/fcc-provides-relief-carriers-serving-tribal-lands.

[5] Press Release, FCC Approves $950 Million to Harden, Improve, and Expand Broadband Networks in Puerto Rico and U.S. Virgin Islands, Fed. Commc’ns Comm’n, https://docs.fcc.gov/public/attachments/DOC-359891A1.pdf.

Municipal broadband has been heavily promoted by its advocates as a potential source of competition against Internet service providers (“ISPs”) with market power. Jonathan Sallet argued in Broadband for America’s Future: A Vision for the 2020s, for instance, that municipal broadband has a huge role to play in boosting broadband competition, with attendant lower prices, faster speeds, and economic development. 

Municipal broadband, of course, can mean more than one thing: From “direct consumer” government-run systems, to “open access” where government builds the back-end, but leaves it up to private firms to bring the connections to consumers, to “middle mile” where the government network reaches only some parts of the community but allows private firms to connect to serve other consumers. The focus of this blog post is on the “direct consumer” model.

There have been many economic studies of municipal broadband, both theoretical and empirical. The literature largely finds that municipal broadband poses serious risks to taxpayers, often relies heavily on cross-subsidies from government-owned electric utilities, crowds out private ISP investment in areas where it operates, and generally fails cost-benefit analysis. While advocates have defended municipal broadband on the grounds of its speed, price, and resulting attractiveness to consumers and businesses, others have noted that many of those benefits come at the expense of other parts of the country from which businesses move.

What this literature has not touched upon is a more fundamental problem: municipal broadband lacks the price signals necessary for economic calculation. The insights of the Austrian school of economics help explain why this model is incapable of providing efficient outcomes for society. Rather than creating a valuable source of competition, municipal broadband creates “islands of chaos” undisciplined by the market test of profit-and-loss. As a result, municipal broadband is a poor model for promoting competition and innovation in broadband markets.

The importance of profit-and-loss to economic calculation

One of the things often assumed away in economic analysis is the very thing the market process depends upon: the discovery of knowledge. Knowledge, in this context, is not the technical knowledge of how to build or maintain a broadband network, but the more fundamental knowledge which is discovered by those exercising entrepreneurial judgment in the marketplace. 

This type of knowledge is dependent on prices throughout the market. In the market process, prices coordinate exchange between market participants without each knowing the full plan of anyone else. For consumers, prices allow for the incremental choices between different options. For producers, prices in capital markets similarly allow for choices between different ways of producing their goods for the next stage of production. Prices in interest rates help coordinate present consumption, investment, and saving. And, the price signal of profit-and-loss allows producers to know whether they have cost-effectively served consumer needs. 

The broadband marketplace can’t be considered in isolation from the greater marketplace in which it is situated. But it can be analyzed under the framework of prices and the knowledge they convey.

For broadband consumers, prices are important for determining the relative importance of Internet access compared to other felt needs. The quality of broadband connection demanded by consumers depends on the price. All other things being equal, consumers demand faster connections with fewer latency issues. But many consumers may prefer slower, higher-latency connections if they are cheaper. Even the relative importance of upload speeds versus download speeds may be highly asymmetrical when determined by consumers.

While “High Performance Broadband for All” may be a great goal from a social planner’s perspective, individuals acting in the marketplace may prioritize other needs with their scarce resources. Even if consumers do need Internet access of some kind, the benefits of 100 Mbps download speeds over 25 Mbps, or of 100 Mbps upload speeds versus 3 Mbps, may not be worth the costs.

For broadband ISPs, prices for capital goods are important for building out the network. The relative prices of fiber, copper, wireless, and all the other factors of production in building out a network help them choose in light of anticipated profit. 

All the decisions of broadband ISPs are made through the lens of pursuing profit. If they are successful, it is because the revenues generated are greater than the costs of production, including the cost of money represented in interest rates. Just as importantly, loss shows the ISPs were unsuccessful in cost-effectively serving consumers. While broadband companies may be able to have losses over some period of time, they ultimately must turn a profit at some point, or there will be exit from the marketplace. Profit-and-loss both serve important functions.

Sallet misses the point when he states that the “full value of broadband lies not just in the number of jobs it directly creates or the profits it delivers to broadband providers but also in its importance as a mechanism that others use across the economy and society.” From an economic point of view, profits aren’t important because economists love it when broadband ISPs get rich. Profits are important as an incentive to build the networks we all benefit from, and as a signal for greater competition and innovation.

Municipal broadband as islands of chaos

Sallet believes the lack of high-speed broadband (as he defines it) is due to the monopoly power of broadband ISPs. He sees the entry of municipal broadband as pro-competitive. But the entry of a government-run broadband company actually creates “islands of chaos” within the market economy, reducing the ability of prices to coordinate disparate plans of action among participants. This, ultimately, makes society poorer.

The case against municipal broadband doesn’t rely on greater knowledge of how to build or maintain a network being in the hands of private engineers. It relies instead on the different institutional frameworks within which the manager of the government-run broadband network works as compared to the private broadband ISP. The type of knowledge gained in the market process comes from prices, including profit-and-loss. The manager of the municipal broadband network simply doesn’t have access to this knowledge and can’t calculate the best course of action as a result.

This is because the government-run municipal broadband network is not reliant upon revenues generated by free choices of consumers alone. Rather than needing to ultimately demonstrate positive revenue in order to remain a going concern, government-run providers can instead base their ongoing operation on access to below-market loans backed by government power, cross-subsidies when it is run by a government electric utility, and/or public money in the form of public borrowing (i.e. bonds) or taxes. 

Municipal broadband, in fact, does rely heavily on subsidies from the government. As a result, municipal broadband is not subject to the discipline of the market’s profit-and-loss test. This frees the enterprise to focus on other goals, including higher speeds—especially upload speeds—and lower prices than private ISPs often offer in the same market. This is why municipal broadband networks build symmetrical high-speed fiber networks at higher rates than the private sector.

But far from representing a superior source of “competition,” municipal broadband is actually an example of “predatory entry.” In areas where there is already private provision of broadband, municipal broadband can “out-compete” those providers due to subsidies from the rest of society. Eventually, this could lead to exit by the private ISPs, starting with the least cost-efficient and ending with the most. In areas where there is limited provision of Internet access, the entry of municipal broadband could reduce incentives for private entry altogether. In either case, there is little reason to believe municipal broadband actually increases consumer welfare in the long run.

Moreover, there are serious concerns in relying upon municipal broadband for the buildout of ISP networks. While Sallet describes fiber as “future-proof,” there is little reason to think that it is. The profit motive induces broadband ISPs to constantly innovate and improve their networks. Contrary to what you would expect from an alleged monopoly industry, broadband companies are consistently among the highest investors in the American economy. Similar incentives would not apply to municipal broadband, which lacks the profit motive to innovate. 

Conclusion

There is a definite need to improve public policy to promote more competition in broadband markets. But municipal broadband is not the answer. The lack of profit-and-loss prevents the public manager of municipal broadband from having the price signal necessary to know it is serving the public cost-effectively. No amount of bureaucratic management can replace the institutional incentives of the marketplace.

The great Dr. Thomas Sowell

One of the great scholars of law & economics turns 90 years old today. In his long and distinguished career, Thomas Sowell has written over 40 books and countless opinion columns. He has been a professor of economics and a long-time Senior Fellow at the Hoover Institution. He received a National Humanities Medal in 2002 for a lifetime of scholarship, which has only continued since then. His ability to look at issues with an international perspective, using the analytical tools of economics to better understand institutions, is an inspiration to us at the International Center for Law & Economics.

Here, as a long-time reader of his works, I want to offer something of a blog-post festschrift and briefly describe how Sowell’s voluminous writings on visions, law, race, and economics could be the basis for a positive agenda to achieve a greater measure of racial justice in the United States.

The Importance of Visions

One of the most important aspects of Sowell’s work is his ability to distill wide-ranging debates into a conflict between different mental models, or a “Conflict of Visions.” He calls one the “tragic” or “constrained” vision, which sees all humans as inherently limited in knowledge, wisdom, and virtue, and fundamentally self-interested even at their best. The other is the “utopian” or “unconstrained” vision, which sees human limitations as artifacts of social arrangements and cultures, and holds that some people, by virtue of superior knowledge and morality, are capable of redesigning society to create a better world.

An implication of the constrained vision is that the difference in knowledge and virtue between the best and the worst in society is actually quite small. As a result, no one person or group of people can be trusted with redesigning institutions which have spontaneously evolved. The best we can hope for is institutions that reasonably deter bad conduct and allow people the freedom to solve their own problems. 

An important implication of the unconstrained vision, on the other hand, is that some people, because of superior enlightenment (what Sowell calls the “Vision of the Anointed”), can redesign institutions to fundamentally change human nature, which is seen as malleable. Institutions are far more often seen as the product of deliberate human design and choice, and failure to change them to be more just or equal is attributed to immorality or lack of will.

The difference in visions shapes how we view things like justice and institutions. In the constrained view, institutions like language, culture, and even much of the law emerge from a “spontaneous ordering” that is the result of human action but not of human design. Limited government, markets, and tradition all help individuals coordinate their actions. Markets work because self-interested individuals benefit when they serve others. There are no solutions to difficult societal problems, including racism, only trade-offs.

But in the unconstrained view, limits on government power are seen as impediments to public-spirited experts creating a better society. Markets, traditions, and cultures are to be redesigned from the top down by those who are forward-looking, relying on their articulated reason. There is a belief that solutions could be imposed if only there is sufficient political will and the right people in charge. When it comes to an issue like racism, those who are sufficiently “woke” should be in charge of redesigning institutions to provide for a solution to things like systemic racism.

For Sowell, what he calls “traditional justice” is achieved by processes that hold people accountable for harms to others. Its focus is on flesh-and-blood human beings, not abstractions like “all men” or “blacks versus whites.” On this view, differences in outcomes are not in themselves just or unjust; what matters is that the processes are just. These processes should focus on the institutional incentives facing participants, and reforms should be careful not to upset incentive structures that have evolved over time as the best way for limited human beings to coordinate behavior.

The “Quest for Cosmic Justice,” on the other hand, flows from the unconstrained vision. Cosmic justice sees disparities between abstract groups, like whites and blacks, as unjust and in need of correction. If impartial processes like markets or law produce disparities, those with an unconstrained vision often see the processes themselves as racist and conclude that the law should intervene to create better outcomes. This presumes considerable knowledge and morality on the part of those in charge of the interventions.

A large part of Sowell’s research project has been showing that, in their quest for cosmic justice, those with the unconstrained vision often harm the very people they proclaim the intention to help.

A Constrained Vision of Racial Justice

Sowell has written a great deal on race, culture, intellectuals, economics, and public policy. One of the main thrusts of his argument about race is that attempts at cosmic justice often harm living, flesh-and-blood individuals in the name of intertemporal abstractions like “social justice” for black Americans. Sowell nowhere denies that racism is an important component of the history of black Americans. But his constant challenge is that racism can’t be the only variable that explains disparities; he points to the importance of culture and education in building the human capital needed to succeed in market economies. Without taking those other variables into account, there is no way to determine the extent to which racism causes disparities.

This has important implications for achieving racial justice today. Sowell has argued that many programs pursued in the name of racial justice harm not only members of disfavored groups, but also members of the favored groups.

For instance, Sowell has argued that affirmative action harms not only the flesh-and-blood white and Asian-American applicants who are passed over, but also the African-American students who are “mismatched” in their educational endeavors and end up failing or dropping out of schools when they would have been very successful at institutions better suited to them. Another example Sowell often points to is minimum-wage legislation, which is justified in the name of helping the downtrodden but harms low-skilled workers, most especially young African-American males, by increasing unemployment.

Any attempts at achieving racial justice, in terms of correcting historical injustices, must take into account how changes in processes could actually end up hurting flesh-and-blood human beings, especially when those harmed are black Americans. 

A Positive Agenda for Policy Reform

In Sowell’s constrained vision, a large part of the equation for African-American improvement is going to be cultural change. However, white Americans should not think that this means they have no responsibility in working towards racial justice. A positive agenda must take into consideration real harms experienced by African-Americans due to government action (and inaction). Thus, traditional justice demands institutional reforms, and in some cases, recompense.

The policy part of the equation, outlined below, is motivated by traditional justice concerns: holding people accountable under the rule of law for violations of constitutional rights, and promoting institutional reforms that better align incentives.

What follows are policy proposals aimed at achieving a greater degree of racial justice for black Americans, fundamentally informed by the constrained vision and the traditional justice concerns outlined by Sowell. Most of these proposals concern issues Sowell has not written much about, and some he might not even support. But they are, in my opinion, consistent with the constrained vision and traditional justice.

Reparations for Historical Rights Violations

Sowell once wrote this regarding reparations for black Americans:

Nevertheless, it remains painfully clear that those people who were torn from their homes in Africa in centuries past and forcibly brought across the Atlantic in chains suffered not only horribly, but unjustly. Were they and their captors still alive, the reparations and retribution owed would be staggering. Time and death, however, cheat us of such opportunities for justice, however galling that may be. We can, of course, create new injustices among our flesh-and-blood contemporaries for the sake of symbolic expiation, so that the son or daughter of a black doctor or executive can get into an elite college ahead of the son or daughter of a white factory worker or farmer, but only believers in the vision of cosmic justice are likely to take moral solace from that. We can only make our choices among alternatives actually available, and rectifying the past is not one of those options.

In other words, if the victims and perpetrators of an injustice are no longer alive, it is not just to hold all members of their respective races accountable for crimes they did not commit. However, this presumably leaves open the possibility of applying traditional justice in those cases where death has not cheated us.

For instance, there are still black Americans alive who suffered under Jim Crow, as well as children and family members of those who were lynched. While it is too little, too late, it seems consistent with traditional justice to seek out and criminally prosecute perpetrators who committed heinous acts only a few generations ago against still-living victims. This is not unprecedented: elderly Nazis are still prosecuted for crimes against Jews. A similar approach could be taken in the United States.

Similarly, civil rights lawsuits for the damages caused by Jim Crow could be another way to recompense those who were harmed. Alternatively, it could be done by legislation: the Civil Liberties Act of 1988, signed under President Reagan, gave living Japanese Americans who were interned during World War II limited reparations. A similar system could be set up for living victims of Jim Crow.

Statutes of limitations may need to be changed to facilitate these criminal prosecutions and civil rights lawsuits, but it is quite clearly consistent with the idea of holding flesh-and-blood persons accountable for their unlawful actions.

Holding flesh-and-blood perpetrators accountable for rights violations should not be confused with the cosmic-justice idea, which Sowell consistently decries, that intertemporal abstractions can be held accountable for crimes. In other words, this is not holding “whites” accountable for all historical injustices to “blacks.” It is specifically giving redress to victims and deterring future bad conduct.

End Qualified Immunity

Another way to promote racial justice consistent with the constrained vision is to end one of the Warren Court’s egregious examples of judicial activism: qualified immunity. Qualified immunity is nowhere mentioned in the civil rights statute, 42 U.S.C. § 1983. As Sowell argues in his writings, judges in the constrained vision are supposed to declare what the law is, not what they believe it should be, unlike those in the unconstrained vision who, according to Sowell, believe they have the right to amend the laws through judicial edict. The activist Warren Court’s introduction of qualified immunity into the law should be overturned.

Qualified immunity currently operates as an effective subsidy for police brutality, to the detriment of all Americans but with black Americans disproportionately affected. The law & economics case against qualified immunity is straightforward:

In a civil rights lawsuit, the goal is to make the victim (or their families) of a rights violation whole by monetary damages. From a legal perspective, this is necessary to give the victim justice. From an economic perspective this is necessary to deter future bad conduct and properly align ex ante incentives going forward. Under a well-functioning system, juries would, after hearing all the evidence, make a decision about whether constitutional rights were violated and the extent of damages. A functioning system of settlements would result as a common law develops determining what counts as reasonable or unreasonable uses of force. This doesn’t mean plaintiffs always win, either. Officers may be determined to be acting reasonably under the circumstances once all the evidence is presented to a jury.

However, one of the greatest obstacles to holding police officers accountable in misconduct cases is the doctrine of qualified immunity… courts have widely expanded its scope to the point that qualified immunity is now protecting officers even when their conduct violates the law, as long as the officers weren’t on clear notice from specific judicial precedent that what they did was illegal when they did it… This standard has predictably led to a situation where officer misconduct which judges and juries would likely find egregious never makes it to court. The Cato Institute’s website Unlawful Shield details many cases where federal courts found an officer’s conduct was illegal yet nonetheless protected by qualified immunity.

Immunity of this nature has profound consequences on the incentive structure facing police officers. Police officers, as well as the departments that employ them, are insufficiently accountable when gross misconduct does not get past a motion to dismiss for qualified immunity… The result is to encourage police officers to take insufficient care when making the choice about the level of force to use. 

Those with a constrained vision focus on processes and incentives. In this case, it is police officers who have insufficient incentives to take reasonable care when they receive qualified immunity for their conduct.

End the Drug War

While not something he has written a lot on, Sowell has argued for the decriminalization of drugs, comparing the War on Drugs to the earlier attempts at Prohibition of alcohol. This is consistent with the constrained vision, which cares about the institutional incentives created by law. 

Interestingly, Michelle Alexander’s work in the second chapter of The New Jim Crow is largely consistent with Sowell’s point of view. There she argues that the institutional incentives of police departments were systematically changed when the drug war was ramped up.

Alexander asks a question which is right in line with the constrained vision:

[I]t is fair to wonder why the police would choose to arrest such an astonishing percentage of the American public for minor drug crimes. The fact that police are legally allowed to engage in a wholesale roundup of nonviolent drug offenders does not answer the question why they would choose to do so, particularly when most police departments have far more serious crimes to prevent and solve. Why would police prioritize drug-law enforcement? Drug use and abuse is nothing new; in fact, it was on the decline, not on the rise, when the War on Drugs began.

Alexander locates the impetus for ramping up the drug war in federal subsidies:

In 1988, at the behest of the Reagan administration, Congress revised the program that provides federal aid to law enforcement, renaming it the Edward Byrne Memorial State and Local Law Enforcement Assistance Program after a New York City police officer who was shot to death while guarding the home of a drug-case witness. The Byrne program was designed to encourage every federal grant recipient to help fight the War on Drugs. Millions of dollars in federal aid have been offered to state and local law enforcement agencies willing to wage the war. By the late 1990s, the overwhelming majority of state and local police forces in the country had availed themselves of the newly available resources and added a significant military component to buttress their drug-war operations. 

On top of that, police departments were benefited by civil asset forfeiture:

As if the free military equipment, training, and cash grants were not enough, the Reagan administration provided law enforcement with yet another financial incentive to devote extraordinary resources to drug law enforcement, rather than more serious crimes: state and local law enforcement agencies were granted the authority to keep, for their own use, the vast majority of cash and assets they seize when waging the drug war. This dramatic change in policy gave state and local police an enormous stake in the War on Drugs—not in its success, but in its perpetual existence. Suddenly, police departments were capable of increasing the size of their budgets, quite substantially, simply by taking the cash, cars, and homes of people suspected of drug use or sales. Because those who were targeted were typically poor or of moderate means, they often lacked the resources to hire an attorney or pay the considerable court costs. As a result, most people who had their cash or property seized did not challenge the government’s action, especially because the government could retaliate by filing criminal charges—baseless or not.

As Alexander notes, black Americans (and other minorities) were the primary targets of this ramped-up War on Drugs; its effect has been to disproportionately imprison black Americans even though drug usage and sales are relatively similar across races. Police officers have enormous discretion in deciding whom to investigate and charge. In the drug war, this discretion is magnified because the activity is largely consensual, meaning officers can’t rely on victims to come to them to start an investigation. Alexander attributes the criminal justice system’s targeting of black Americans to implicit bias among police officers, prosecutors, and judges, which mirrors the bias shown in media coverage and in larger white American society.

Anyone inspired by Sowell would need to ask whether this disparity is the result of racism or of some other variable. Sowell never denies that racism exists or is a real problem in American society, but he does challenge us to determine whether it alone explains disparities. Here, Alexander makes a strong case that implicit racism drives the disparities in enforcement of the War on Drugs. A race-neutral explanation is also possible, though it still counsels ending the drug war: enforcement is cheaper against those who cannot afford to challenge the system, and black Americans are disproportionately represented among the poor in this country. As discussed below in the section on reforming indigent criminal defense, most prosecutions are initiated against defendants who can’t afford a lawyer. The result could be racially disparate even without a racist motivation.

Regardless of whether racism is the variable that explains the disparate impact of the War on Drugs, it should be ended. This may be an area where traditional and cosmic justice concerns can be united in an effort to reform the criminal justice system.

Reform Indigent Criminal Defense

A related way the criminal justice system has erected real barriers for far too many black Americans is the often poor quality of indigent criminal defense. Indigent defense is a large part of criminal defense in this country: roughly 80% of criminal prosecutions are initiated against defendants too poor to afford a lawyer. Since black Americans are disproportionately represented among the indigent and among those in the criminal justice system, it should be no surprise that they are also disproportionately represented by public defenders.

According to the constrained vision, it is important to look at the institutional incentives of public defenders. Considering the extremely high societal costs of false convictions, it is important to get these incentives right.

David Friedman and Stephen Schulhofer’s seminal article on the law & economics of indigent criminal defense highlighted the conflict of interest inherent in the government choosing who represents criminal defendants when the government is also in charge of prosecuting them. They analyzed each of the models used in the United States for indigent defense from an economic point of view and found each wanting. On top of that, there is a calculation problem inherent in government-run public defenders’ offices: without price signals, defendants may be systematically deprived of viable defense strategies.

An interesting alternative proposed by Friedman and Schulhofer is a voucher system, similar to the school-choice vouchers Sowell has often touted. Indigent criminal defendants would choose any lawyer participating in the voucher program. The government would subsidize the provision of indigent defense in this model, but it would not pick the lawyer or run the public defender organization. Incentives would be more closely aligned between defendant and counsel.

Conclusion

While much more could be said about reforms consistent with the constrained vision that would help flesh-and-blood black Americans, including abolishing occupational licensing, ending wage controls, promoting school choice, and ending counterproductive welfare policies, this is enough for now. Racial justice demands holding rights violators accountable and making victims whole. It also means reforming institutions so that incentives deter conduct that harms black Americans. However, the growing desire to do something to promote racial justice in this country should not fall into the trap of cosmic-justice thinking, which often ends up hurting flesh-and-blood people of all races in the present in the name of intertemporal abstractions.

Happy 90th birthday to one of the greatest law & economics scholars ever, Dr. Thomas Sowell. 

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Eric Fruits, (Chief Economist, International Center for Law & Economics).]

Earlier this week, merger talks between Uber and food delivery service Grubhub surfaced. House Antitrust Subcommittee Chairman David N. Cicilline quickly reacted to the news:

Americans are struggling to put food on the table, and locally owned businesses are doing everything possible to keep serving people in our communities, even under great duress. Uber is a notoriously predatory company that has long denied its drivers a living wage. Its attempt to acquire Grubhub—which has a history of exploiting local restaurants through deceptive tactics and extortionate fees—marks a new low in pandemic profiteering. We cannot allow these corporations to monopolize food delivery, especially amid a crisis that is rendering American families and local restaurants more dependent than ever on these very services. This deal underscores the urgency for a merger moratorium, which I and several of my colleagues have been urging our caucus to support.

“Pandemic profiteering” rolls nicely off the tongue, and we’re sure to see that phrase much more over the next year or so.

Grubhub shares jumped 29% on Tuesday, the day the merger talks came to light, as shown in the figure below. The Wall Street Journal reports the companies are considering a deal that would value Grubhub stock at around 1.9 Uber shares, or $60 to $65 a share based on Thursday’s price.

But is that “pandemic profiteering?”

After Amazon announced its intended acquisition of Whole Foods, the grocer’s stock price soared by 27%. Rep. Cicilline voiced some convoluted concerns about that merger, but said nothing about profiteering at the time. Different times, different messaging.

Rep. Cicilline and others have been calling for a merger moratorium during the pandemic, and he used the Uber/Grubhub announcement as Exhibit A in his indictment of merger activity.

A moratorium would make things much easier for regulators. No more fighting over relevant markets, no HHI calculations, no experts debating SSNIPs or GUPPIs, no worries over consumer welfare, no failing-firm defenses. Just a clear, bright-line “NO!”

Even before the pandemic, it was well known that the food-delivery industry was due for a shakeout. NPR reports that, even as the business grows, none of the top food-delivery apps is turning a profit, with one analyst concluding consolidation was “inevitable.” Thus, even if a moratorium slowed or stopped the Uber/Grubhub merger, a merger in the industry will happen at some point, and U.S. antitrust authorities will have to evaluate it.

First, we have to ask, “What’s the relevant market?” The government has a history of defining relevant markets so narrowly that just about any merger can be challenged. For example, in its challenge to the Whole Foods/Wild Oats merger, the FTC famously narrowed the market to “premium natural and organic supermarkets.” Surely, similar mental gymnastics will be used for any merger involving food-delivery services.

While food delivery has grown in popularity over the past few years, it represents less than 10% of U.S. food-service sales. Rep. Cicilline may be correct that families and local restaurants are “more dependent than ever” on food delivery, but delivery remains a small fraction of a large market. Even a monopoly in food-delivery services would not confer market power in the broader restaurant and food-service industry.

No reasonable person would claim an Uber/Grubhub merger would increase market power in the restaurant and food-service industry. But it might confer market power in the food-delivery market. Much attention is paid to the “Big Four”: DoorDash, Grubhub, Uber Eats, and Postmates. But these platform delivery services are part of the larger food-service delivery market, of which platforms account for about half of industry revenues. Pizza accounts for the largest share of restaurant-to-consumer delivery.

This raises the big question of what is the relevant market: Is it the entire food delivery sector, or just the platform-to-consumer sector? 

Based on the information in the figure below, defining the market narrowly would place an Uber/Grubhub merger squarely in the “presumed to be likely to enhance market power” category.

  • 2016 HHI: <3,175
  • 2018 HHI: <1,474
  • 2020 HHI: <2,249 pre-merger; <4,153 post-merger

Alternatively, defining the market to encompass all food delivery would cut the platforms’ shares roughly in half, and the merger would be unlikely to harm competition based on HHI. Choosing the relevant market is, well, relevant.
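To make the arithmetic concrete, here is a minimal sketch of the pre- and post-merger HHI comparison. The shares below are illustrative placeholders, not the Second Measure figures behind the chart above, and the thresholds noted in the comments are those of the 2010 Horizontal Merger Guidelines.

```python
# A minimal sketch of the HHI comparison discussed above, using hypothetical
# market shares (placeholders, not the Second Measure data).

def hhi(shares):
    """Herfindahl-Hirschman index on a 0-10,000 scale, from shares in percent."""
    return sum(s ** 2 for s in shares)

# Narrow market definition: platform-to-consumer delivery, "Big Four" only.
narrow = {"DoorDash": 42, "Uber Eats": 20, "Grubhub": 28, "Postmates": 10}

pre_merger = hhi(narrow.values())

# Combine the merging parties (Uber Eats + Grubhub) into a single firm.
merged = dict(narrow)
merged["Uber Eats"] += merged.pop("Grubhub")
post_merger = hhi(merged.values())

print(f"Pre-merger HHI:  {pre_merger:,}")
print(f"Post-merger HHI: {post_merger:,}")
print(f"Increase:        {post_merger - pre_merger:,}")

# Under the 2010 Horizontal Merger Guidelines, a post-merger HHI above 2,500
# combined with an increase of more than 200 points is presumed likely to
# enhance market power. Broadening the market roughly halves each platform's
# share, which alone cuts its contribution to the HHI to about a quarter.
```

Running the same calculation with broad-market shares (each platform's narrow share roughly halved, with the remainder spread across many restaurant-to-consumer providers) shows why the market-definition choice drives the presumption.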

The Second Measure data suggest that concentration in the platform-delivery sector decreased with the entry of Uber Eats, but subsequently increased with DoorDash’s rising share, which included its acquisition of Caviar from Square.

(NB: There seems to be a significant mismatch in the delivery revenue data. Statista reports platform delivery revenues increased by about 40% from 2018 to 2020, but Second Measure indicates revenues have more than doubled.) 

Geoffrey Manne, in an earlier post, points out that “while national concentration does appear to be increasing in some sectors of the economy, it’s not actually so clear that the same is true for local concentration — which is often the relevant antitrust market.” That may be the case here.

The figure below shows a sample of platform-delivery shares by city, to which I have added data from an earlier study of 2017 shares. In all but two metro areas, Uber and Grubhub’s combined market share declined from 2017 to 2020. In Boston, the combined share did not change, and in Los Angeles it increased by 1%.

(NB: There are some serious problems with this data, notably that it leaves out the restaurant-to-consumer sector and assumes the entire platform-to-consumer sector consists of only the “Big Four.”)

Platform-to-consumer delivery is a complex two-sided market in which the platforms link, and compete for, restaurants, drivers, and consumers. Restaurants can use multiple platforms or enter into exclusive arrangements, many drivers work for multiple platforms, and many consumers use multiple platforms.

Fundamentally, the rise of platform-to-consumer delivery is an evolution in vertical integration. Restaurants can choose to offer no delivery, use their own in-house drivers, or use a third-party delivery service. Every platform thus faces competition from in-house delivery, which limits its ability to raise prices to restaurants and consumers.

The choice of delivery is not an either-or decision. For example, many pizza restaurants that have their own delivery drivers also use platform delivery services. Their own drivers may serve a limited geographic area, but the platforms allow the restaurant to expand its geographic reach and thereby increase its sales, even as in-house delivery continues to discipline platform pricing.

Mergers or other forms of shakeout in the food-delivery industry are inevitable. They will raise important questions about relevant product and geographic markets, as well as about competition in two-sided markets. While there is a real risk of harm to restaurants, drivers, and consumers, there is also a real possibility of welfare-enhancing efficiencies. These questions will never be addressed with an across-the-board merger moratorium.