[TOTM: The following is part of a digital symposium by TOTM guests and authors on Antitrust’s Uncertain Future: Visions of Competition in the New Regulatory Landscape. Information on the authors and the entire series of posts is available here.]
26 July, 10 A.F. (after fairness)
Dear Fellow Inquisitors,
It has been more than a decade now since the Federal Neutrality Commission, born of the ashes of the old world, ushered in the Age of Fairness.
As you all know, the FNC was created during the Online Era, when the emergence of the largest companies in human history opened our eyes to the original sin of the competitive process: unfairness.
In the course of their evolution, digital platforms—the vanity fairs of the 21st century—had created entire ecosystems that offered integrated services that were so comfortable to use together that they led to a double-sin: sloth on the part of the consumers, and the unfair exclusion of competitors, who were barred from exercising their God-given right to participate in every market and every platform—and to prosper.
Digital stores selling their own branded goods, social-media apps with their own messaging services, search engines using search statistics to generate optimally efficient tools that surpassed the (legitimate) confines of their core functions and spilled over into the dominion of job search, flight booking, or housing apps … App stores were even using their own recognizable software to guarantee that the apps they distributed met the highest standards of security and trustworthiness!
While these things might not seem entirely unreasonable (especially to the heathens: selfish and individualistic consumers who care about nothing other than satisfying their base hedonistic desires), they in fact led to unspeakable evils that flouted the common good.
For example, they made it very, very uncomfortable for someone who wanted to start their own real-estate business to compete with such strong rival companies, who could leverage their superior efficiency in their core markets to become nigh-unbeatable in offering the cheapest, most relevant housing ads. To make matters worse, the gargantuan spending of the digital platforms on research and development built additional moats of quality and innovation around their products—both core and adjacent—that made them utterly impregnable to rivals specializing in just one area.
By constantly leveraging their core services to offer better and improved products on adjacent markets, digital platforms had made it unfairly difficult for other companies to join the race and deliver us to “perfect competition”—the euphoric state of blissful equilibrium foretold by the high priests of the only true belief system, Economics.
But not all was lost, and we hadn’t been forsaken. In those dark and faithless days, it was revealed to us by Sen. Amy Klobuchar—praise be her name—that the loathsome practice whereby online companies favored their own products and services over their rivals had a name, “self-preferencing,” and that it was a sin. And, most importantly, that it could be eradicated.
Fortunately, and thanks to the vigilance of the FNC, legal steps were swiftly taken to make the praxis of the Digital Economy more closely resemble its theory, as passed on to us by our forefathers.
And it worked, brothers and sisters! The prohibition of self-preferencing in digital markets made online products much more homogeneous, thus validating one of the main assumptions of Economics. In addition, new competition-law Acts, with mechanisms such as forced data sharing, have eliminated all the messy experimentation that had hitherto led to varied (and risky) business models and diversified approaches. By turning competition into forced collaboration, we have finally made it stable, equal, and predictable; in one word: fair.
And what of the sinner in every one of us? Before the great revelation, blasphemous “consumers”—an anachronistic and reductive term for “socially responsible citizens”—were committing the sin of sloth. Now, choice is finally mandated, and nothing can be pre-selected or even integrated. No more arbitrary safe-browsing mechanisms, integrated malware detectors, and spam filters. Where digital platforms experimented and imposed results on us, we are now coercively free to experiment by ourselves—and on ourselves! Online searches today lead to thousands of indistinguishable links hiding an infinity of surprises, requiring us to be more circumspect and informed than ever before. In one word: the prohibition of self-preferencing has improved the moral character of the human stock.
It is universally known that we owe the dawn of the Age of Fairness to the American Innovation and Choice Online Act, adopted by Congress in the year 2022, and to the unwavering vigilance of the FNC. What is less well-known—and what I am here to instill in you today—is that that was just the beginning. The success of AICOA has opened our eyes to an even more ancient and perverse evil: self-preferencing in offline markets. It revealed to us that—for centuries, if not millennia—companies in various industries—from togas to wine, from cosmetics to insurance—had, in fact, always preferred their own initiatives over those of their rivals!
Just as the ancient chariot makers designed chariots to suit the build of their own thoroughbred horses (thereby foreclosing horses raised by other breeders), 20th-century car producers used spare parts delivered by a supplier organizationally related to their company.
This realization has accelerated the birth pangs of the American Innovation and Choice Offline Act, which we are here to announce today. With it, the FNC will eliminate all remnants of unfair rivalry—online and offline—so that we, as one community of faith, can finally enjoy the true benefits of competition. But we must never forget that this tenuous equilibrium hangs by a thread, and that we owe it all to the indefatigable efforts of the FNC agents patrolling the streets, supermarkets, restaurants, gyms, factories, and just about everything else every single day.
Of course, there is still a lot to be done. But every long journey must begin somewhere.
Today, I want to warn you against sin and urge you to adopt the religion of fairness before the day of judgment comes.
Or any other religion that condemns self-preferencing. I want to recommend them all equally.
Much ink has been spilled regarding the potential harm to the economy and to the rule of law that could stem from enactment of the primary federal antitrust legislative proposal, the American Innovation and Choice Online Act (AICOA) (see here). AICOA proponents, of course, would beg to differ, emphasizing the purported procompetitive benefits of limiting the business freedom of “Big Tech monopolists.”
There is, however, one inescapable reality—as night follows day, passage of AICOA would usher in an extended period of costly litigation over the meaning of a host of AICOA terms. As we will see, this would generate business uncertainty and dampen innovative conduct that might be covered by new AICOA statutory terms.
The history of antitrust illustrates the difficulties inherent in clarifying the meaning of novel federal statutory language. It was not until 21 years after passage of the Sherman Antitrust Act that the Supreme Court held that Section 1’s prohibition of contracts, combinations, and conspiracies “in restraint of trade” covered only unreasonable restraints of trade (see Standard Oil Co. of New Jersey v. United States, 221 U.S. 1 (1911)). Furthermore, courts took decades to clarify that certain types of restraints (for example, hardcore price fixing and horizontal market division) were inherently unreasonable and thus per se illegal, while others would be evaluated case by case under a “rule of reason.”
In addition, even far more specific terms related to exclusive dealing, tying, and price discrimination found within the Clayton Antitrust Act gave rise to uncertainty over the scope of their application. This uncertainty had to be sorted out through judicial case-law tests developed over many decades.
Even today, there is no simple, easily applicable test to determine whether conduct in the abstract constitutes illegal monopolization under Section 2 of the Sherman Act. Rather, whether Section 2 has been violated in any particular instance depends upon the application of economic analysis and certain case-law principles to matter-specific facts.
As is the case with current antitrust law, the precise meaning and scope of AICOA’s terms will have to be fleshed out over many years. Scholarly critiques of AICOA’s language underscore the seriousness of this problem.
In its April 2022 public comment on AICOA, the American Bar Association (ABA) Antitrust Law Section explains in some detail the significant ambiguities inherent in specific AICOA language that the courts will have to address. These include “ambiguous terminology … regarding fairness, preferencing, materiality, and harm to competition on covered platforms”; and “specific language establishing affirmative defenses [that] creates significant uncertainty”. The ABA comment further stresses that AICOA’s failure to include harm to the competitive process as a prerequisite for a statutory violation departs from a broad-based consensus understanding within the antitrust community and could have the unintended consequence of disincentivizing efficient conduct. This departure would, of course, create additional interpretive difficulties for federal judges, further complicating the task of developing coherent case-law principles for the new statute.
In a somewhat similar vein, Stanford Law School Professor (and former acting assistant attorney general for antitrust during the Clinton administration) Douglas Melamed complains that:
[AICOA] does not include the normal antitrust language (e.g., “competition in the market as a whole,” “market power”) that gives meaning to the idea of harm to competition, nor does it say that the imprecise language it does use is to be construed as that language is construed by the antitrust laws. … The bill could be very harmful if it is construed to require, not increased market power, but simply harm to rivals.
In sum, ambiguities inherent in AICOA’s new terminology will generate substantial uncertainty among affected businesses. This uncertainty will play out in the courts over a period of years. Moreover, the likelihood that judicial statutory constructions of AICOA language will support “efficiency-promoting” interpretations of behavior is diminished by the fact that AICOA’s structural scheme (which focuses on harm to rivals) does not harmonize with traditional antitrust concerns about promoting a vibrant competitive process.
Knowing this, the large high-tech firms covered by AICOA will become risk averse and less likely to innovate. (For example, they will be reluctant to improve algorithms in a manner that would increase efficiency and benefit consumers, but that might be seen as disadvantaging rivals.) As such, American innovation will slow, and consumers will suffer. (See here for an estimate of the enormous consumer-welfare gains generated by high tech platforms—gains of a type that AICOA’s enactment may be expected to jeopardize.) It is to be hoped that Congress will take note and consign AICOA to the rubbish heap of disastrous legislative policy proposals.
Slow wage growth and rising inequality over the past few decades have pushed economists more and more toward the study of monopsony power—particularly firms’ monopsony power over workers. Antitrust policy has taken notice. For example, when the Federal Trade Commission (FTC) and U.S. Justice Department (DOJ) initiated the process of updating their merger guidelines, their request for information included questions about how they should respond to monopsony concerns, as distinct from monopoly concerns.
From a pure economic-theory perspective, there is no important distinction between monopsony power and monopoly power. If Armen is trading his apples in exchange for Ben’s bananas, we can call Armen the seller of apples or the buyer of bananas. The labels (buyer and seller) are arbitrary; as a matter of pure theory, it makes no difference. Monopsony and monopoly are just mirror images.
Some infer from this monopoly-monopsony symmetry, however, that extending antitrust to monopsony power will be straightforward. As a practical matter of antitrust enforcement, it is less clear. The moment we go slightly less abstract and use the basic models that economists actually use, monopsony is not simply the mirror image of monopoly. The tools that antitrust economists use to identify market power differ in the two cases.
Monopsony Requires Studying Output
Suppose that the FTC and DOJ are considering a proposed merger. For simplicity, they know that the merger will generate efficiency gains (and they want to allow it) or market power (and they want to stop it) but not both. The challenge is to look at readily available data like prices and quantities to decide which it is. (Let’s ignore the ideal case that involves being able to estimate elasticities of demand and supply.)
In a monopoly case, if there are efficiency gains from a merger, the standard model has a clear prediction: the quantity sold in the output market will increase. An economist at the FTC or DOJ with sufficient data will be able to see (or estimate) the efficiencies directly in the output market. Efficiency gains result either in greater output at lower unit cost or in product-quality improvements that increase consumer demand. Since the merger lowers prices for consumers, the agencies (assume they apply a consumer-welfare standard) will let it go through: consumers are better off.
In contrast, if the merger simply enhances monopoly power without efficiency gains, the quantity sold will decrease, either because the merging parties raise prices or because quality declines. Again, the empirical implication of the merger is seen directly in the market in question. Since the merger raises prices for consumers, the agencies will not let it go through: consumers are worse off. In both cases, you judge monopoly power by looking directly at the market that may or may not have monopoly power.
Unfortunately, the monopsony case is more complicated. Ultimately, we can be certain of the effects of monopsony only by looking at the output market, not the input market where the monopsony power is claimed.
To see why, consider again a merger that generates either efficiency gains or market (now monopsony) power. A merger that creates monopsony power will necessarily reduce the prices and quantity purchased of inputs like labor and materials. An overly eager FTC may see a lower quantity of input purchased and jump to the conclusion that the merger increased monopsony power. After all, monopsonies purchase fewer inputs than competitive firms.
Not so fast. Fewer input purchases may be because of efficiency gains. For example, if the efficiency gain arises from the elimination of redundancies in a hospital merger, the hospital will buy fewer inputs, hire fewer technicians, or purchase fewer medical supplies. This may even reduce the wages of technicians or the price of medical supplies, even if the newly merged hospitals are not exercising any market power to suppress wages.
The key point is that monopsony needs to be treated differently than monopoly. The antitrust agencies cannot simply look at the quantity of inputs purchased in the monopsony case as the flip side of the quantity sold in the monopoly case, because the efficiency-enhancing merger can look like the monopsony merger in terms of the level of inputs purchased.
How can the agencies differentiate efficiency-enhancing mergers from monopsony mergers? The easiest way may be for the agencies to look at the output market: an entirely different market than the one with the possibility of market power. Once we look at the output market, as we would do in a monopoly case, we have clear predictions. If the merger is efficiency-enhancing, there will be an increase in the output-market quantity. If the merger increases monopsony power, the firm perceives its marginal cost as higher than before the merger and will reduce output.
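The contrast above can be made concrete with a toy numerical sketch. All functional forms and numbers here are hypothetical illustrations, not estimates from any real merger: in both scenarios the merged firm buys fewer inputs, yet output rises under the efficiency story and falls under the monopsony story.

```python
# Toy sketch (hypothetical numbers): two mergers that BOTH cut input
# purchases, distinguishable only by what happens in the output market.

def best_labor(profit):
    """Profit-maximizing labor choice, found by grid search on [0, 10]."""
    grid = [i / 1000 for i in range(10001)]
    return max(grid, key=profit)

# Scenario 1: efficiency-enhancing merger (productivity A rises 1 -> 2).
# Output demand p(q) = 10 - q; production q = A * L; competitive wage w = 4.
def profit_eff(A):
    return lambda L: (10 - A * L) * (A * L) - 4 * L

L_pre, L_post = best_labor(profit_eff(1.0)), best_labor(profit_eff(2.0))
q_pre, q_post = 1.0 * L_pre, 2.0 * L_post
# Labor purchased falls (3 -> 2) while output rises (3 -> 4).

# Scenario 2: monopsony-creating merger. Output price p = 5; q = L;
# upward-sloping labor supply w(L) = 1 + L.
# Pre-merger (wage-taking) benchmark: hire until p = w(L), so L = 4, q = 4.
L_comp, q_comp = 4.0, 4.0

def profit_mono(L):
    # Post-merger, the firm internalizes that hiring more raises the wage.
    return 5 * L - (1 + L) * L

L_mono = best_labor(profit_mono)
q_mono = L_mono
# Labor purchased falls (4 -> 2) AND output falls (4 -> 2).
```

In both scenarios the input market sends the same signal (fewer inputs purchased, and possibly lower input prices), so an agency looking only there cannot tell the two apart; the output market moves in opposite directions and does the diagnostic work.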
In short, as we look for how to apply antitrust to monopsony-power cases, the agencies and courts cannot look to the input market to differentiate them from efficiency-enhancing mergers; they must look at the output market. It is impossible to discuss monopsony power coherently without considering the output market.
In real-world cases, mergers will not necessarily be either strictly efficiency-enhancing or strictly monopsony-generating, but a blend of the two. Any rigorous consideration of merger effects must account for both and make some tradeoff between them. The question of how guidelines should address monopsony power is inextricably tied to the consideration of merger efficiencies, particularly given the point above that identifying and evaluating monopsony power will often depend on its effects in downstream markets.
This is just one complication that arises when we move from the purest of pure theory to slightly more applied models of monopoly and monopsony power. Geoffrey Manne, Dirk Auer, Eric Fruits, Lazar Radic, and I go through more of the complications in our comments submitted to the FTC and DOJ on updating the merger guidelines.
What Assumptions Make the Difference Between Monopoly and Monopsony?
Now that we have shown that monopsony and monopoly are different, how do we square this with the initial observation that it was arbitrary whether we say Armen has monopoly power over apples or monopsony power over bananas?
There are two differences between the standard monopoly and monopsony models. First, in a vast majority of models of monopsony power, the agent with the monopsony power is buying goods only to use them in production. They have a “derived demand” for some factors of production. That demand ties their buying decision to an output market. For monopoly power, the firm sells the goods, makes some money, and that’s the end of the story.
The second difference is that the standard monopoly model looks at one output good at a time. The standard factor-demand model uses two inputs, which introduces a tradeoff between, say, capital and labor. We could force monopoly to look like monopsony by assuming the merging parties each produce two different outputs, apples and bananas. An efficiency gain could favor apple production and hurt banana consumers. While this sort of substitution among outputs is often realistic, it is not the standard economic way of modeling an output market.
Questions about the role that economic theory should play in policy decisions are nothing new. Milton Friedman famously drew a distinction between “positive” and “normative” economics, notably arguing that theoretical models were valuable, despite their unrealistic assumptions. Kenneth Arrow and Gerard Debreu’s highly theoretical work on General Equilibrium Theory is widely acknowledged as one of the most important achievements of modern economics.
But for all their intellectual value and academic merit, the use of models to inform policy decisions is not uncontroversial. There is indeed a long and unfortunate history of influential economic models turning out to be poor depictions (and predictors) of real-world outcomes.
This raises a key question: should policymakers use economic models to inform their decisions and, if so, how? This post uses the economics of externalities to illustrate both the virtues and pitfalls of economic modeling. Throughout economic history, externalities have routinely been cited to support claims of market failure and calls for government intervention. However, as explained below, these fears have frequently failed to withstand empirical scrutiny.
Today, similar models are touted to support government intervention in digital industries. Externalities are notably said to prevent consumers from switching between platforms, allegedly leading to unassailable barriers to entry and deficient venture-capital investment. Unfortunately, as explained below, the models that underpin these fears are highly abstracted and far removed from underlying market realities.
Ultimately, this post argues that, while models provide a powerful way of thinking about the world, naïvely transposing them to real-world settings is misguided. This is not to say that models are useless—quite the contrary. Indeed, “falsified” models can shed powerful light on economic behavior that would otherwise prove hard to understand.
Fears surrounding economic externalities are as old as modern economics. For example, in the 1950s, economists routinely cited bee pollination as a source of externalities and, ultimately, market failure.
The basic argument was straightforward: Bees and orchards provide each other with positive externalities. Bees cross-pollinate flowers and orchards contain vast amounts of nectar upon which bees feed, thus improving honey yields. Accordingly, several famous economists argued that there was a market failure; bees fly where they please and farmers cannot prevent bees from feeding on their blossoming flowers—allegedly causing underinvestment in both. This led James Meade to conclude:
[T]he apple-farmer provides to the beekeeper some of his factors free of charge. The apple-farmer is paid less than the value of his marginal social net product, and the beekeeper receives more than the value of his marginal social net product.
If, then, apple producers are unable to protect their equity in apple-nectar and markets do not impute to apple blossoms their correct shadow value, profit-maximizing decisions will fail correctly to allocate resources at the margin. There will be failure “by enforcement.” This is what I would call an ownership externality. It is essentially Meade’s “unpaid factor” case.
It took more than 20 years and painstaking research by Steven Cheung to conclusively debunk these assertions. So how did economic agents overcome this “insurmountable” market failure?
The answer, it turns out, was extremely simple. While bees do fly where they please, the relative placement of beehives and orchards has a tremendous impact on both fruit and honey yields. This is partly because bees have a very limited mean foraging range (roughly 2-3km). This left economic agents with ample scope to prevent free-riding.
Using these natural sources of excludability, they built a web of complex agreements that internalize the symbiotic virtues of beehives and fruit orchards. To cite Steven Cheung’s research:
Pollination contracts usually include stipulations regarding the number and strength of the colonies, the rental fee per hive, the time of delivery and removal of hives, the protection of bees from pesticide sprays, and the strategic placing of hives. Apiary lease contracts differ from pollination contracts in two essential aspects. One is, predictably, that the amount of apiary rent seldom depends on the number of colonies, since the farmer is interested only in obtaining the rent per apiary offered by the highest bidder. Second, the amount of apiary rent is not necessarily fixed. Paid mostly in honey, it may vary according to either the current honey yield or the honey yield of the preceding year.
But what of neighboring orchards? Wouldn’t these entail a more complex externality (i.e., could one orchard free-ride on agreements concluded between other orchards and neighboring apiaries)? Apparently not:
Acknowledging the complication, beekeepers and farmers are quick to point out that a social rule, or custom of the orchards, takes the place of explicit contracting: during the pollination period the owner of an orchard either keeps bees himself or hires as many hives per area as are employed in neighboring orchards of the same type. One failing to comply would be rated as a “bad neighbor,” it is said, and could expect a number of inconveniences imposed on him by other orchard owners. This customary matching of hive densities involves the exchange of gifts of the same kind, which apparently entails lower transaction costs than would be incurred under explicit contracting, where farmers would have to negotiate and make money payments to one another for the bee spillover.
In short, not only did the bee/orchard externality model fail, but it failed to account for extremely obvious counter-evidence. Even a rapid flip through the Yellow Pages (or, today, a search on Google) would have revealed a vibrant market for bee pollination. In short, the bee externalities, at least as presented in economic textbooks, were merely an economic “fable.” Unfortunately, they would not be the last.
Lighthouses provide another cautionary tale. Indeed, Henry Sidgwick, A.C. Pigou, John Stuart Mill, and Paul Samuelson all cited the externalities involved in the provision of lighthouse services as a source of market failure.
Here, too, the problem was allegedly straightforward. A lighthouse cannot prevent ships from free-riding on its services when they sail by it (i.e., it is mostly impossible to determine whether a ship has paid fees and to turn off the lighthouse if that is not the case). Hence there can be no efficient market for light dues (lighthouses were seen as a “public good”). As Paul Samuelson famously put it:
Take our earlier case of a lighthouse to warn against rocks. Its beam helps everyone in sight. A businessman could not build it for a profit, since he cannot claim a price from each user. This certainly is the kind of activity that governments would naturally undertake.
He added that:
[E]ven if the operators were able—say, by radar reconnaissance—to claim a toll from every nearby user, that fact would not necessarily make it socially optimal for this service to be provided like a private good at a market-determined individual price. Why not? Because it costs society zero extra cost to let one extra ship use the service; hence any ships discouraged from those waters by the requirement to pay a positive price will represent a social economic loss—even if the price charged to all is no more than enough to pay the long-run expenses of the lighthouse.
More than a century after it was first mentioned in economics textbooks, Ronald Coase finally laid the lighthouse myth to rest—rebutting Samuelson’s second claim in the process.
What piece of evidence had eluded economists for all those years? As Coase observed, contemporary economists had somehow overlooked the fact that large parts of the British lighthouse system were privately operated, and had been for centuries:
[T]he right to operate a lighthouse and to levy tolls was granted to individuals by Acts of Parliament. The tolls were collected at the ports by agents (who might act for several lighthouses), who might be private individuals but were commonly customs officials. The toll varied with the lighthouse and ships paid a toll, varying with the size of the vessel, for each lighthouse passed. It was normally a rate per ton (say 1/4d or 1/2d) for each voyage. Later, books were published setting out the lighthouses passed on different voyages and the charges that would be made.
In other words, lighthouses used a simple physical feature to create “excludability” and prevent free-riding. The main reason ships require lighthouses is to avoid hitting rocks when they make their way to a port. By tying port fees and light dues, lighthouse owners—aided by mild government-enforced property rights—could easily earn a return on their investments, thus disproving the lighthouse free-riding myth.
Ultimately, this meant that a large share of the British lighthouse system was privately operated throughout the 19th century, and this share would presumably have been more pronounced if government-run “Trinity House” lighthouses had not crowded out private investment:
The position in 1820 was that there were 24 lighthouses operated by Trinity House and 22 by private individuals or organizations. But many of the Trinity House lighthouses had not been built originally by them but had been acquired by purchase or as the result of the expiration of a lease.
Of course, this system was not perfect. Some ships (notably foreign ones that did not dock in the United Kingdom) might free-ride on this arrangement. It also entailed some level of market power: the ability to charge light dues meant that prices were higher than the “socially optimal” baseline of zero (the marginal cost of providing light is close to zero). It is also worth noting, though, that tying port fees and light dues might have decreased double marginalization, to the benefit of sailors.
Samuelson was particularly wary of this market power that went hand in hand with the private provision of public goods, including lighthouses:
Being able to limit a public good’s consumption does not make it a true-blue private good. For what, after all, are the true marginal costs of having one extra family tune in on the program? They are literally zero. Why then prevent any family which would receive positive pleasure from tuning in on the program from doing so?
However, as Coase explained, light dues represented only a tiny fraction of a ship’s costs. In practice, they were thus unlikely to affect market output meaningfully:
[W]hat is the gain which Samuelson sees as coming from this change in the way in which the lighthouse service is financed? It is that some ships which are now discouraged from making a voyage to Britain because of the light dues would in future do so. As it happens, the form of the toll and the exemptions mean that for most ships the number of voyages will not be affected by the fact that light dues are paid. There may be some ships somewhere which are laid up or broken up because of the light dues, but the number cannot be great, if indeed there are any ships in this category.
Samuelson’s critique also falls prey to the Nirvana Fallacy pointed out by Harold Demsetz: markets might not be perfect, but neither is government intervention. Market power and imperfect appropriability are the two (paradoxical) pitfalls of the former; “white elephants,” underinvestment, and a lack of competition (and of the information it generates) tend to stem from the latter.
Which of these solutions is superior, in each case, is an empirical question that early economists had simply failed to consider—assuming instead that market failure was systematic in markets that present prima facie externalities. In other words, models were taken as gospel without any circumspection about their relevance to real-world settings.
The Tragedy of the Commons
Externalities were also said to undermine the efficient use of “common pool resources,” such as grazing lands, common irrigation systems, and fisheries—resources where one agent’s use diminishes that of others, and where exclusion is either difficult or impossible.
The most famous formulation of this problem is Garrett Hardin’s highly influential (over 47,000 cites) “tragedy of the commons.” Hardin cited the example of multiple herdsmen occupying the same grazing ground:
The rational herdsman concludes that the only sensible course for him to pursue is to add another animal to his herd. And another; and another … But this is the conclusion reached by each and every rational herdsman sharing a commons. Therein is the tragedy. Each man is locked into a system that compels him to increase his herd without limit—in a world that is limited. Ruin is the destination toward which all men rush, each pursuing his own best interest in a society that believes in the freedom of the commons.
In more technical terms, each economic agent purportedly exerts an unpriced negative externality on the others, thus leading to the premature depletion of common pool resources. Hardin extended this reasoning to other problems, such as pollution and allegations of global overpopulation.
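The divergence between private and social payoffs that drives Hardin’s tragedy can be sketched in a few lines of code. This is a minimal toy model, not an empirical claim: the pasture capacity, degradation rate, and herd sizes below are hypothetical numbers chosen purely for illustration.

```python
# Toy commons model: a shared pasture supports 20 animals; beyond that,
# every animal's yield degrades. All parameters are hypothetical.
def pasture_value_per_animal(total_animals, capacity=20):
    """Value each animal yields; overgrazing degrades the pasture."""
    if total_animals <= capacity:
        return 1.0
    # Past capacity, every grazing animal is worth less.
    return max(0.0, 1.0 - 0.1 * (total_animals - capacity))

def marginal_gain_to_herdsman(my_animals, others_animals):
    """Change in ONE herdsman's payoff from adding one more animal."""
    before = my_animals * pasture_value_per_animal(my_animals + others_animals)
    after = (my_animals + 1) * pasture_value_per_animal(my_animals + 1 + others_animals)
    return after - before

# Four herdsmen with 5 animals each put the pasture exactly at capacity.
# Adding one more animal still pays for the individual herdsman ...
print(marginal_gain_to_herdsman(5, 15))    # positive: private gain
# ... while the group as a whole loses value: the unpriced externality.
total_before = 20 * pasture_value_per_animal(20)
total_after = 21 * pasture_value_per_animal(21)
print(total_after - total_before)          # negative: social loss
```

The point of the sketch is simply that each herdsman captures the full benefit of his extra animal while the degradation cost is spread across everyone, so individually rational additions continue past the socially optimal herd size.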
Although Hardin hardly documented any real-world occurrences of this so-called tragedy, his policy prescriptions were unequivocal:
The most important aspect of necessity that we must now recognize, is the necessity of abandoning the commons in breeding. No technical solution can rescue us from the misery of overpopulation. Freedom to breed will bring ruin to all.
As with many other theoretical externalities, empirical scrutiny revealed that these fears were greatly overblown. In her Nobel-winning work, Elinor Ostrom showed that economic agents often found ways to markedly mitigate these potential externalities. For example, mountain villages often implement rules and norms that limit the use of grazing grounds and wooded areas. Likewise, landowners across the world often set up “irrigation communities” that prevent agents from overusing water.
Along similar lines, Julian Morris and I conjecture that informal arrangements and reputational effects might mitigate opportunistic behavior in the standard-essential patent industry.
These bottom-up solutions are certainly not perfect. Many common institutions fail—for example, Elinor Ostrom documents several problematic fisheries, groundwater basins and forests, although it is worth noting that government intervention was sometimes behind these failures. To cite but one example:
Several scholars have documented what occurred when the Government of Nepal passed the “Private Forest Nationalization Act” […]. Whereas the law was officially proclaimed to “protect, manage and conserve the forest for the benefit of the entire country”, it actually disrupted previously established communal control over the local forests. Messerschmidt (1986, p.458) reports what happened immediately after the law came into effect:
Nepalese villagers began freeriding — systematically overexploiting their forest resources on a large scale.
In any case, the question is not so much whether private institutions fail, but whether they do so more often than government intervention, be it regulation or property rights. In short, the “tragedy of the commons” is ultimately an empirical question: what works better in each case—government intervention, propertization, or emergent rules and norms?
More broadly, the key lesson is that it is wrong to blindly apply models while ignoring real-world outcomes. As Elinor Ostrom herself put it:
The intellectual trap in relying entirely on models to provide the foundation for policy analysis is that scholars then presume that they are omniscient observers able to comprehend the essentials of how complex, dynamic systems work by creating stylized descriptions of some aspects of those systems.
The QWERTY Keyboard
In 1985, Paul David published an influential paper arguing that market failures undermined competition between the QWERTY and Dvorak keyboard layouts. This version of history became a dominant narrative in the field of network economics, including works by Joseph Farrell & Garth Saloner, and Jean Tirole.
The basic claim was that QWERTY users’ reluctance to switch toward the putatively superior Dvorak layout exerted a negative externality on the rest of the ecosystem (and a positive externality on other QWERTY users), thus preventing the adoption of a more efficient standard. As Paul David put it:
Although the initial lead acquired by QWERTY through its association with the Remington was quantitatively very slender, when magnified by expectations it may well have been quite sufficient to guarantee that the industry eventually would lock in to a de facto QWERTY standard. […]
Competition in the absence of perfect futures markets drove the industry prematurely into standardization on the wrong system — where decentralized decision making subsequently has sufficed to hold it.
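The lock-in mechanism David describes can be sketched as a toy adoption model: each new user picks the layout that maximizes standalone quality plus a network bonus proportional to the installed base, so an early lead can outweigh a quality edge. Every number below is a hypothetical illustration, not an estimate of the actual typewriter market.

```python
# Toy path-dependence model: users arrive one at a time and choose the
# layout with the higher (quality + network benefit) at that moment.
# All parameters are hypothetical illustrations, not market estimates.
def simulate_adoption(n_users, quality, network_weight, initial_base):
    base = dict(initial_base)  # copy, so the caller's dict is untouched
    for _ in range(n_users):
        utility = {k: quality[k] + network_weight * base[k] for k in base}
        choice = max(utility, key=utility.get)
        base[choice] += 1
    return base

quality = {"QWERTY": 1.0, "Dvorak": 1.2}   # assume Dvorak is slightly better
head_start = {"QWERTY": 10, "Dvorak": 0}   # QWERTY's early installed base

# With strong network effects, the head start dominates the quality gap
# and every subsequent user reinforces the incumbent standard:
print(simulate_adoption(100, quality, network_weight=0.1, initial_base=head_start))
# With no network effects, standalone quality decides instead:
print(simulate_adoption(100, quality, network_weight=0.0, initial_base=head_start))
```

In the first run the putatively inferior standard wins every adoption decision; in the second it wins none. The model is only a sketch of the theoretical claim—whether anything like it described the real keyboard market is precisely what Liebowitz and Margolis went on to test.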
Unfortunately, many of the above papers paid little to no attention to actual market conditions in the typewriter and keyboard-layout industries. Years later, Stan Liebowitz and Stephen Margolis undertook a detailed analysis of the keyboard-layout market. They almost entirely rejected any notion that QWERTY prevailed despite being the inferior standard:
Yet there are many aspects of the QWERTY-versus-Dvorak fable that do not survive scrutiny. First, the claim that Dvorak is a better keyboard is supported only by evidence that is both scant and suspect. Second, studies in the ergonomics literature find no significant advantage for Dvorak that can be deemed scientifically reliable. Third, the competition among producers of typewriters, out of which the standard emerged, was far more vigorous than is commonly reported. Fourth, there were far more typing contests than just the single Cincinnati contest. These contests provided ample opportunity to demonstrate the superiority of alternative keyboard arrangements. That QWERTY survived significant challenges early in the history of typewriting demonstrates that it is at least among the reasonably fit, even if not the fittest that can be imagined.
In short, there was little to no evidence supporting the view that QWERTY inefficiently prevailed because of network effects. The falsification of this narrative also weakens broader claims that network effects systematically lead to either excess momentum or excess inertia in standardization. Indeed, it is tempting to characterize all network industries with heavily skewed market shares as resulting from market failure. Yet the QWERTY/Dvorak story suggests that such a conclusion would be premature.
Killzones, Zoom, and TikTok
If you are still reading at this point, you might think that contemporary scholars would know better than to base calls for policy intervention on theoretical externalities. Alas, nothing could be further from the truth.
For instance, a recent paper by Sai Kamepalli, Raghuram Rajan and Luigi Zingales conjectures that the interplay between mergers and network externalities discourages the adoption of superior independent platforms:
If techies expect two platforms to merge, they will be reluctant to pay the switching costs and adopt the new platform early on, unless the new platform significantly outperforms the incumbent one. After all, they know that if the entering platform’s technology is a net improvement over the existing technology, it will be adopted by the incumbent after merger, with new features melded with old features so that the techies’ adjustment costs are minimized. Thus, the prospect of a merger will dissuade many techies from trying the new technology.
Although this key behavioral assumption drives the results of the theoretical model, the paper presents no evidence to support the contention that it occurs in real-world settings. Admittedly, the paper does present evidence of reduced venture capital investments after mergers involving large tech firms. But even on their own terms, this data simply does not support the authors’ behavioral assumption.
And this is no isolated example. Over the past couple of years, several scholars have called for more muscular antitrust intervention in networked industries. A common theme is that network externalities, switching costs, and data-related increasing returns to scale lead to inefficient consumer lock-in, thus raising barriers to entry for potential rivals (here, here, here).
But there are also countless counterexamples, where firms have easily overcome potential barriers to entry and network externalities, ultimately disrupting incumbents.
Zoom is one of the most salient instances. As I have written previously:
To get to where it is today, Zoom had to compete against long-established firms with vast client bases and far deeper pockets. These include the likes of Microsoft, Cisco, and Google. Further complicating matters, the video communications market exhibits some prima facie traits that are typically associated with the existence of network effects.
Along similar lines, Geoffrey Manne and Alec Stapp have put forward a multitude of other examples, including the demise of Yahoo, the disruption of early instant-messaging applications and websites, and MySpace’s rapid decline. In all these cases, outcomes do not match the predictions of theoretical models.
More recently, TikTok’s rapid rise offers perhaps the greatest example of a potentially superior social-networking platform taking significant market share away from incumbents. According to the Financial Times, TikTok’s video-sharing capabilities and its powerful algorithm are the most likely explanations for its success.
While these developments certainly do not disprove network effects theory, they eviscerate the common belief in antitrust circles that superior rivals are unable to overthrow incumbents in digital markets. Of course, this will not always be the case. As in the previous examples, the question is ultimately one of comparing institutions—i.e., do markets lead to more or fewer error costs than government intervention? Yet this question is systematically omitted from most policy discussions.
My argument is not that models are without value. To the contrary, framing problems in economic terms—and simplifying them in ways that make them cognizable—enables scholars and policymakers to better understand where market failures might arise, and how these problems can be anticipated and solved by private actors. In other words, models alone cannot tell us that markets will fail, but they can direct inquiries and help us to understand why firms behave the way they do, and why markets (including digital ones) are organized in a given way.
In that respect, both the theoretical and empirical research cited throughout this post offer valuable insights for today’s policymakers.
For a start, as Ronald Coase argued in what is perhaps his most famous work, externalities (and market failure more generally) are a function of transaction costs. When these are low (relative to the value of a good), market failures are unlikely. This is perhaps clearest in the “Fable of the Bees” example. Given bees’ short foraging range, there were ultimately few real-world obstacles to writing contracts that internalized the mutual benefits of bees and orchards.
Perhaps more importantly, economic research sheds light on behavior that might otherwise be seen as anticompetitive. The rules and norms that bind farming/beekeeping communities, as well as users of common pool resources, could easily be analyzed as a cartel by naïve antitrust authorities. Yet externality theory suggests they play a key role in preventing market failure.
Along similar lines, mergers and acquisitions (as well as vertical integration, more generally) can reduce opportunism and other externalities that might otherwise undermine collaboration between firms (here, here and here). And much the same is true for certain types of unilateral behavior. Tying video games to consoles (and pricing the console below cost) can help entrants overcome network externalities that might otherwise shield incumbents. Likewise, Google tying its proprietary apps to the open source Android operating system arguably enabled it to earn a return on its investments, thus overcoming the externality problem that plagues open source software.
All of this raises a tantalizing prospect that deserves far more attention than it is currently given in policy circles: authorities around the world are seeking to regulate the tech space. Draft legislation has notably been tabled in the United States, European Union and the United Kingdom. These draft bills would all make it harder for large tech firms to implement various economic hierarchies, including mergers and certain contractual arrangements.
This is highly paradoxical. If digital markets are indeed plagued by network externalities and high transaction costs, as critics allege, then preventing firms from adopting complex hierarchies—which have traditionally been seen as a way to solve externalities—is just as likely to exacerbate problems. In other words, like the economists of old cited above, today’s policymakers appear to be focusing too heavily on simple models that predict market failure, and far too little on the mechanisms that firms have put in place to thrive within this complex environment.
The bigger picture is that far more circumspection is required when using theoretical models in real-world policy settings. Indeed, as Harold Demsetz famously put it, the purpose of normative economics is not so much to identify market failures, but to help policymakers determine which of several alternative institutions will deliver the best outcomes for consumers:
This nirvana approach differs considerably from a comparative institution approach in which the relevant choice is between alternative real institutional arrangements. In practice, those who adopt the nirvana viewpoint seek to discover discrepancies between the ideal and the real and if discrepancies are found, they deduce that the real is inefficient. Users of the comparative institution approach attempt to assess which alternative real institutional arrangement seems best able to cope with the economic problem […].
[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.
Jerry Ellig was a research professor at The George Washington University Regulatory Studies Center and served as chief economist at the Federal Communications Commission from 2017 to 2018. Tragically, he passed away Jan. 20, 2021. TOTM is honored to publish his contribution to this symposium.]
One significant aspect of Chairman Ajit Pai’s legacy is not a policy change, but an organizational one: establishment of the Federal Communications Commission’s (FCC’s) Office of Economics and Analytics (OEA) in 2018.
Prior to OEA, most of the FCC’s economists were assigned to the various policy bureaus, such as Wireless, Wireline Competition, Public Safety, Media, and International. Each of these bureaus had its own chief economist, but the rank-and-file economists reported to the managers who ran the bureaus – usually attorneys who also developed policy and wrote regulations. In the words of former FCC Chief Economist Thomas Hazlett, the FCC had “no location anywhere in the organizational structure devoted primarily to economic analysis.”
Establishment of OEA involved four significant changes. First, most of the FCC’s economists (along with data strategists and auction specialists) are now grouped together into an organization separate from the policy bureaus, and they are managed by other economists. Second, the FCC rules establishing the new office tasked OEA with reviewing every rulemaking, reviewing every other item with economic content that comes before the commission for a vote, and preparing a full benefit-cost analysis for any regulation with $100 million or more in annual economic impact. Third, a joint memo from the FCC’s Office of General Counsel and OEA specifies that economists are to be involved in the early stages of all rulemakings. Fourth, the memo also indicates that FCC regulatory analysis should follow the principles articulated in Executive Order 12866 and Office of Management and Budget Circular A-4 (while specifying that the FCC, as an independent agency, is not bound by the executive order).
While this structure for managing economists was new for the FCC, it is hardly uncommon in federal regulatory agencies. Numerous independent agencies that deal with economic regulation house their economists in a separate bureau or office, including the Securities and Exchange Commission, the Commodity Futures Trading Commission, the Surface Transportation Board, the Office of Comptroller of the Currency, and the Federal Trade Commission. The SEC displays even more parallels with the FCC. A guidance memo adopted in 2012 by the SEC’s Office of General Counsel and Division of Risk, Strategy and Financial Innovation (the name of the division where economists and other analysts were located) specifies that economists are to be involved in the early stages of all rulemakings and articulates best analytical practices based on Executive Order 12866 and Circular A-4.
A separate economics office offers several advantages over the FCC’s prior approach. It gives the economists greater freedom to offer frank advice, enables them to conduct higher-quality analysis more consistent with the norms of their profession, and may ultimately make it easier to uphold FCC rules that are challenged in court.
Independence. When I served as chief economist at the FCC in 2017-2018, I gathered from conversations that the most common practice in the past was for attorneys who wrote rules to turn to economists for supporting analysis after key decisions had already been made. This was not always the process, but it often occurred. The internal working group of senior FCC career staff who drafted the plan for OEA reached similar conclusions. After the establishment of OEA, an FCC economist I interviewed noted how his role had changed: “My job used to be to support the policy decisions made in the chairman’s office. Now I’m much freer to speak my own mind.”
Ensuring economists’ independence is not a problem unique to the FCC. In a 2017 study, Stuart Shapiro found that most of the high-level economists he interviewed who worked on regulatory impact analyses in federal agencies perceive that economists can be more objective if they are located outside the program office that develops the regulations they are analyzing. As one put it, “It’s very difficult to conduct a BCA [benefit-cost analysis] if our boss wrote what you are analyzing.” Interviews with senior economists and non-economists who work on regulation that I conducted for an Administrative Conference of the United States project in 2019 revealed similar conclusions across federal agencies. Economists located in organizations separate from the program office said that structure gave them greater independence and ability to develop better analytical methodologies. On the other hand, economists located in program offices said they experienced or knew of instances where they were pressured or told to produce an analysis with the results decision-makers wanted.
The FTC provides an informative case study. From 1955-1961, many of the FTC’s economists reported to the attorneys who conducted antitrust cases; in 1961, they were moved into a separate Bureau of Economics. Fritz Mueller, the FTC chief economist responsible for moving the antitrust economists back into the Bureau of Economics, noted that they were originally placed under the antitrust attorneys because the attorneys wanted more control over the economic analysis. A 2015 evaluation by the FTC’s Inspector General concluded that the Bureau of Economics’ existence as a separate organization improves its ability to offer “unbiased and sound economic analysis to support decision-making.”
Higher-quality analysis. An issue closely related to economists’ independence is the quality of the economic analysis. Executive branch regulatory economists interviewed by Richard Williams expressed concern that the economic analysis was more likely to be changed to support decisions when the economists are located in the program office that writes the regulations. More generally, a study that Catherine Konieczny and I conducted while we were at the FCC found that executive branch agencies are more likely to produce higher-quality regulatory impact analyses if the economists responsible for the analysis are in an independent economics office rather than the program office.
Upholding regulations in court. In Michigan v. EPA, the Supreme Court held that it is unreasonable for agencies to refuse to consider regulatory costs if the authorizing statute does not prohibit them from doing so. This precedent will likely increase judicial expectations that agencies will consider economic issues when they issue regulations. The FCC’s OGC-OEA memo cites examples of cases where the quality of the FCC’s economic analysis either helped or harmed the commission’s ability to survive legal challenge under the Administrative Procedure Act’s “arbitrary and capricious” standard. More systematically, a recent Regulatory Studies Center working paper finds that a higher-quality economic analysis accompanying a regulation reduces the likelihood that courts will strike down the regulation, provided that the agency explains how it used the analysis in decisions.
Two potential disadvantages of a separate economics office are that it may make the economists easier to ignore (what former FCC Chief Economist Tim Brennan calls the “Siberia effect”) and may lead the economists to produce research that is less relevant to the practical policy concerns of the policymaking bureaus. The FCC’s reorganization plan took these disadvantages seriously.
To ensure that the ultimate decision-makers—the commissioners—have access to the economists’ analysis and recommendations, the rules establishing the office give OEA explicit responsibility for reviewing all items with economic content that come before the commission. Each item is accompanied by a cover memo that indicates whether OEA believes there are any significant issues, and whether they have been dealt with adequately. To ensure that economists and policy bureaus work together from the outset of regulatory initiatives, the OGC-OEA memo instructs:
Bureaus and Offices should, to the extent practicable, coordinate with OEA in the early stages of all Commission-level and major Bureau-level proceedings that are likely to draw scrutiny due to their economic impact. Such coordination will help promote productive communication and avoid delays from the need to incorporate additional analysis or other content late in the drafting process. In the earliest stages of the rulemaking process, economists and related staff will work with programmatic staff to help frame key questions, which may include drafting options memos with the lead Bureau or Office.
While presiding over his final commission meeting on Jan. 13, Pai commented, “It’s second nature now for all of us to ask, ‘What do the economists think?’” The real test of this institutional innovation will be whether that practice continues under a new chair in the next administration.
[TOTM: The following is part of a digital symposium by TOTM guests and authors on the legal and regulatory issues that arose during Ajit Pai’s tenure as chairman of the Federal Communications Commission. The entire series of posts is available here.
Joshua D. Wright is university professor and executive director of the Global Antitrust Institute at George Mason University’s Scalia Law School. He served as a commissioner of the Federal Trade Commission from 2013 through 2015.]
Much of this symposium celebrates Ajit’s contributions as chairman of the Federal Communications Commission and his accomplishments and leadership in that role. And rightly so. But Commissioner Pai, not just Chairman Pai, should also be recognized.
I first met Ajit when we were both minority commissioners at our respective agencies: the FCC and Federal Trade Commission. Ajit had started several months before I was confirmed. I watched his performance in the minority with great admiration. He reached new heights when he shifted from minority commissioner to chairman, and the accolades he will receive for that work are quite appropriate. But I want to touch on his time as a minority commissioner at the FCC and how that should inform the retrospective of his tenure.
Let me not bury the lede: Ajit Pai has been, in my view, the most successful, impactful minority commissioner in the history of the modern regulatory state. And it is that success that led him to become the most successful and impactful chairman, too.
I must admit all of this success makes me insanely jealous. My tenure as a minority commissioner ran in parallel with Ajit. We joked together about our fierce duel to be the reigning king of regulatory dissents. We worked together fighting against net neutrality. We compared notes on dissenting statements and opinions. I tried to win our friendly competition. I tried pretty hard. And I lost; worse than I care to admit. But we had fun. And I very much admired the combination of analytical rigor, clarity of exposition, and intellectual honesty in his work. Anyway, the jealousy would be all too much if he weren’t also a remarkable person and friend.
The life of a minority commissioner can be a frustrating one. Like Sisyphus, the minority commissioner often wakes up each day to roll the regulatory (well, in this case, deregulatory) boulder up the hill, only to watch it roll down. And then do it again. And again. At times, it is an exhausting series of jousting matches with the windmills of Washington bureaucracy. It is not often that a minority commissioner has as much success as Commissioner Pai did: dissenting opinions ultimately vindicated by judicial review; substantive victories on critical policy issues; paving the way for institutional and procedural reforms.
It is one thing to write a raging dissent about how the majority has lost all principles. Fire and brimstone come cheap when there aren’t too many consequences to what you have to say. Measure a man after he has been granted power and a chance to use it, and only then will you have a true test of character. Ajit passes that test like few in government ever have.
This is part of what makes Ajit Pai so impressive. I have seen his work firsthand. The multitude of successes Ajit achieved as Chairman Pai were predictable, precisely because Commissioner Pai told the world exactly where he stood on important telecommunications policy issues, the reasons why he stood there, and then, well, he did what he said he would. The Pai regime was much more like a Le’Veon Bell run, between the tackles, than a no-look pass from Patrick Mahomes to Tyreek Hill. Commissioner Pai shared his playbook with the world; he told us exactly where he was going to run the ball. And then Chairman Pai did exactly that. And neither bureaucratic red tape nor political pressure—or even physical threat—could stop him.
Here is a small sampling of his contributions, many of them building on groundwork he laid in the minority:
Focus on Economic Analysis
One of Chairman Pai’s most important contributions to the FCC is his work to systematically incorporate economic analysis into FCC decision-making. The triumph of this effort was establishing the Office of Economics and Analytics (OEA) in 2018. OEA’s focus on conducting economic analyses of the costs, benefits, and economic impacts of the commission’s proposed rules will be a critical part of agency decision-making from here on out. This achievement alone would form a legacy on which any agency head could comfortably rest. OEA’s work will shape the agency for decades and ensure that agency decisions are made with the oversight economics provides.
This is a hard thing to do; just hiring economists is not enough. Structure matters. How economists get information to decision-makers determines if it will be taken seriously. To this end, Ajit has taken all the lessons from what has made the economists at the FTC so successful—and the lessons from the structural failures at other agencies—and applied them at the FCC.
Structural independence looks like “involving economists on cross-functional teams at the outset and allowing the economics division to make its own, independent recommendations to decision-makers.” And it is necessary for economics to be taken seriously within an agency structure. Ajit has assured that FCC decision-making will benefit from economic analysis for years to come.
Narrowing the Digital Divide
Chairman Pai made narrowing the digital divide and connecting disadvantaged Americans to the internet his top priorities during his tenure. And Commissioner Pai was fighting for this long before the pandemic started.
As businesses, schools, work, and even health care have moved online, the need to get Americans connected with high-speed broadband has never been greater. Under Pai’s leadership, the FCC has removed bureaucratic barriers and provided billions in funding to facilitate rural broadband buildout. We are talking about connections to some 700,000 rural homes and businesses in 45 states, many of whom are gaining access to high-speed internet for the first time.
Ajit has also made sure to keep an eye out for the little guy, and communities that have been historically left behind. Tribal communities, particularly in the rural West, have been a keen focus of his, as he knows all-too-well the difficulties and increased costs associated with servicing those lands. He established programs to rebuild and expand networks in the Virgin Islands and Puerto Rico in an effort to bring the islands to parity with citizens living on the mainland.
You need not take my word for it; he really does talk about this all the time. As he said in a speech at the National Tribal Broadband Summit: “Since my first day in this job, I’ve said that closing the digital divide was my top priority. And as this audience knows all too well, nowhere is that divide more pronounced than on Tribal lands.” That work is not done; it is beyond any one person. But Ajit should be recognized for his work bridging the divide and laying the foundation for future gains.
And again, this work started as minority commissioner. Before he was chairman, Pai proposed projects for rural broadband development; he frequently toured underserved states and communities; and he proposed legislation to offer the 21st century promise to economically depressed areas of the country. Looking at Chairman Pai is only half the picture.
Keeping Americans Connected
One would not think that the head of the Federal Communications Commission would be a leader on important health-care issues, but Ajit has made a real difference here too. One of his major initiatives has been the development of telemedicine solutions to expand access to care in critical communities.
Beyond encouraging buildout of networks in less-connected areas, Pai’s FCC has also worked to allocate funding for health-care providers and educational institutions who were navigating the transition to remote services. He ensured that health-care providers’ telecommunications and information services were funded. He worked with the U.S. Department of Education to direct funds for education stabilization and allowed schools to purchase additional bandwidth. And he granted temporary additional spectrum usage to broadband providers to meet the increased demand upon our nation’s networks. Oh, and his Keep Americans Connected Pledge gathered commitments from more than 800 companies to ensure that Americans would not lose their connectivity due to pandemic-related circumstances. As if the list were not long enough, Congress’ January coronavirus relief package will ensure that these and other programs, like Rip and Replace, will remain funded for the foreseeable future.
I might sound like I am beating a dead horse here, but the seeds of this, too, were laid in his work in the minority. Here he is describing his work in a 2015 interview, as a minority commissioner:
My own father is a physician in rural Kansas, and I remember him heading out in his car to visit the small towns that lay 40 miles or more from home. When he was there, he could provide care for people who would otherwise never see a specialist at all. I sometimes wonder, back in the 1970s and 1980s, how much easier it would have been on patients, and him, if broadband had been available so he could provide healthcare online.
Agency Transparency and Democratization
Many minority commissioners like to harp on agency transparency. Some take a different view when they are in charge. But Ajit made good on his complaints about agency transparency when he became Chairman Pai. He did this by circulating draft items well in advance of monthly open meetings, giving the public the opportunity to know what the agency was voting on.
You used to need a direct connection with the FCC to even be aware of what orders were being discussed—the worst of the D.C. swamp—but now anyone can read about the working items, in clear language.
These moves toward a more transparent, accessible FCC dispel the impression that the agency is run by Washington insiders who are disconnected from the average person. The meetings may well be dry and technical—they really are—but Chairman Pai’s statements are not only good-natured and humorous, but informative and substantive. The public has been well-served by his efforts here.
Incentivizing Innovation and Next-Generation Technologies
Chairman Pai will be remembered for his encouragement of innovation. Under his chairmanship, the FCC discontinued rules that unnecessarily required carriers to maintain costly older, lower-speed networks and legacy voice services. It streamlined the discontinuance process for lower-speed services if the carrier is already providing higher-speed service or if no customers are using the service. It also okayed streamlined notice following force majeure events like hurricanes to encourage investment and deployment of newer, faster infrastructure and services following destruction of networks. The FCC also approved requests by companies to provide high-speed broadband through non-geostationary orbit satellite constellations and created a streamlined licensing process for small satellites to encourage faster deployment.
This is what happens when you get a tech nerd at the head of an agency he loves and cares for. A serious commitment to good policy with an eye toward the future.
Restoring Internet Freedom
This is a pretty sensitive one for me. You hear less about it now, other than some murmurs from the Biden administration about changing it, but the debate over net neutrality got nasty and apocalyptic.
It was everywhere: people said Chairman Pai would end the internet as we know it. The whole web blacked out for a day in protest. People mocked up memes showing a 25-cent-per-Google-search charge. And as a result of this over-the-top rhetoric, my friend and his family received death threats.
That is truly beyond the pale. One could not blame anyone for leaving public service in such an environment. I cannot begin to imagine what I would have done in Ajit’s place. But Ajit took the threats on his life with grace and dignity, never lost his sense of humor, and continued to serve the public dutifully with remarkable courage. I think that says a lot about him. And the American public is lucky to have benefited from his leadership.
Now, for the policy stuff. Though it should go without saying, the light-touch framework Chairman Pai returned us to—as opposed to the public-utility one—will ensure that the United States maintains its leading position on technological innovation in 5G networks and services. The fact that we have endured COVID—and the massive strain on the internet it has caused—with little to no noticeable impact on internet services is all the evidence you need that he made the right choice. Ajit has rightfully earned the title of the “5G Chairman.”
I cannot give Ajit all the praise he truly deserves without sounding sycophantic, or bribed. There are any number of windows into his character, but one rises above the rest for me. And I wanted to take the extra time to thank Ajit for it.
Every year, without question, no matter what was going on—even as chairman—Ajit would come to my classes and talk to my students. At length. In detail. And about any subject they wished. He stayed until he had answered all of their questions. If I hadn’t politely shoved him out of the class to let him go do his real job, I’m sure he would have stayed until the last student left. And if you know anything about how to judge a person’s character, that will tell you all you need to know.
Municipal broadband has been heavily promoted by its advocates as a potential source of competition against Internet service providers (“ISPs”) with market power. Jonathan Sallet argued in Broadband for America’s Future: A Vision for the 2020s, for instance, that municipal broadband has a huge role to play in boosting broadband competition, with attendant lower prices, faster speeds, and economic development.
Municipal broadband, of course, can mean more than one thing: From “direct consumer” government-run systems, to “open access” where government builds the back-end, but leaves it up to private firms to bring the connections to consumers, to “middle mile” where the government network reaches only some parts of the community but allows private firms to connect to serve other consumers. The focus of this blog post is on the “direct consumer” model.
There have been many economic studies on municipal broadband, both theoretical and empirical. The literature largely finds that municipal broadband poses serious risks to taxpayers, often relies heavily on cross-subsidies from government-owned electric utilities, crowds out private ISP investment in areas it operates, and largely fails the cost-benefit analysis. While advocates have defended municipal broadband on the grounds of its speed, price, and resulting attractiveness to consumers and businesses, others have noted that many of those benefits come at the expense of other parts of the country from which businesses move.
What this literature has not touched upon is a more fundamental problem: municipal broadband lacks the price signals necessary for economic calculation. The insights of the Austrian school of economics help explain why this model is incapable of providing efficient outcomes for society. Rather than creating a valuable source of competition, municipal broadband creates “islands of chaos” undisciplined by the market test of profit-and-loss. As a result, municipal broadband is a poor model for promoting competition and innovation in broadband markets.
The importance of profit-and-loss to economic calculation
One of the things often assumed away in economic analysis is the very thing the market process depends upon: the discovery of knowledge. Knowledge, in this context, is not the technical knowledge of how to build or maintain a broadband network, but the more fundamental knowledge which is discovered by those exercising entrepreneurial judgment in the marketplace.
This type of knowledge is dependent on prices throughout the market. In the market process, prices coordinate exchange between market participants without each knowing the full plan of anyone else. For consumers, prices allow for incremental choices between different options. For producers, prices in capital markets similarly allow for choices between different ways of producing their goods for the next stage of production. Interest rates, the price of credit, help coordinate present consumption, investment, and saving. And the price signal of profit-and-loss allows producers to know whether they have cost-effectively served consumer needs.
The broadband marketplace can’t be considered in isolation from the greater marketplace in which it is situated. But it can be analyzed under the framework of prices and the knowledge they convey.
For broadband consumers, prices are important for determining the relative importance of Internet access compared to other felt needs. The quality of broadband connection demanded by consumers depends on the price. All other things being equal, consumers demand faster connections with fewer latency issues. But many consumers may prefer slower, higher-latency connections if they are cheaper. Even choices between the importance of upload speeds and download speeds may be highly asymmetrical when left to consumers.
While “High Performance Broadband for All” may be a great goal from a social planner’s perspective, individuals acting in the marketplace may prioritize other needs with their scarce resources. Even if consumers do need Internet access of some kind, the benefits of 100 Mbps download speeds over 25 Mbps, or of 100 Mbps upload speeds versus 3 Mbps, may not be worth the costs.
For broadband ISPs, prices for capital goods are important for building out the network. The relative prices of fiber, copper, wireless, and all the other factors of production in building out a network help them choose in light of anticipated profit.
All the decisions of broadband ISPs are made through the lens of pursuing profit. If they are successful, it is because the revenues generated are greater than the costs of production, including the cost of money represented in interest rates. Just as importantly, loss shows that an ISP was unsuccessful in cost-effectively serving consumers. While broadband companies may sustain losses for some period of time, they must ultimately turn a profit, or they will exit the marketplace. Profit and loss each serve important functions.
Sallet misses the point when he states that the “full value of broadband lies not just in the number of jobs it directly creates or the profits it delivers to broadband providers but also in its importance as a mechanism that others use across the economy and society.” From an economic point of view, profits aren’t important because economists love it when broadband ISPs get rich. Profits are important as an incentive to build the networks we all benefit from, and a signal for greater competition and innovation.
Municipal broadband as islands of chaos
Sallet believes the lack of high-speed broadband (as he defines it) is due to the monopoly power of broadband ISPs. He sees the entry of municipal broadband as pro-competitive. But the entry of a government-run broadband company actually creates “islands of chaos” within the market economy, reducing the ability of prices to coordinate disparate plans of action among participants. This, ultimately, makes society poorer.
The case against municipal broadband doesn’t rely on greater knowledge of how to build or maintain a network being in the hands of private engineers. It relies instead on the different institutional frameworks within which the manager of the government-run broadband network works as compared to the private broadband ISP. The type of knowledge gained in the market process comes from prices, including profit-and-loss. The manager of the municipal broadband network simply doesn’t have access to this knowledge and can’t calculate the best course of action as a result.
This is because the government-run municipal broadband network is not reliant upon revenues generated by free choices of consumers alone. Rather than needing to ultimately demonstrate positive revenue in order to remain a going concern, government-run providers can instead base their ongoing operation on access to below-market loans backed by government power, cross-subsidies when it is run by a government electric utility, and/or public money in the form of public borrowing (i.e. bonds) or taxes.
Municipal broadband, in fact, does rely heavily on subsidies from the government. As a result, municipal broadband is not subject to the discipline of the market’s profit-and-loss test. This frees the enterprise to focus on other goals, including higher speeds—especially upload speeds—and lower prices than private ISPs often offer in the same market. This is why municipal broadband networks build symmetrical high-speed fiber networks at higher rates than the private sector.
But far from representing a superior source of “competition,” municipal broadband is actually an example of “predatory entry.” In areas where there is already private provision of broadband, municipal broadband can “out-compete” those providers due to subsidies from the rest of society. Eventually, this could lead to exit by the private ISPs, starting with the least cost-efficient and moving to the most. In areas where there is limited provision of Internet access, the entry of municipal broadband could reduce incentives for private entry altogether. In either case, there is little reason to believe municipal broadband actually increases consumer welfare in the long run.
Moreover, there are serious concerns in relying upon municipal broadband for the buildout of ISP networks. While Sallet describes fiber as “future-proof,” there is little reason to think that it is. The profit motive induces broadband ISPs to constantly innovate and improve their networks. Contrary to what you would expect from an alleged monopoly industry, broadband companies are consistently among the highest investors in the American economy. Similar incentives would not apply to municipal broadband, which lacks the profit motive to innovate.
There is a definite need to improve public policy to promote more competition in broadband markets. But municipal broadband is not the answer. The lack of profit-and-loss prevents the public manager of municipal broadband from having the price signal necessary to know it is serving the public cost-effectively. No amount of bureaucratic management can replace the institutional incentives of the marketplace.
One of the great scholars of law & economics turns 90 years old today. In his long and distinguished career, Thomas Sowell has written over 40 books and countless opinion columns. He has been a professor of economics and a long-time Senior Fellow at the Hoover Institution. He received a National Humanities Medal in 2002 for a lifetime of scholarship, which has only continued since then. His ability to look at issues with an international perspective, using the analytical tools of economics to better understand institutions, is an inspiration to us at the International Center for Law & Economics.
Here, as a long-time reader of his works, I want to offer something of a blog-post festschrift: a brief look at how Sowell’s voluminous writings on visions, law, race, and economics could be the basis for a positive agenda to achieve a greater measure of racial justice in the United States.
The Importance of Visions
One of the most important aspects of Sowell’s work is his ability to distill wide-ranging issues into debates between different mental models, or a “Conflict of Visions.” He calls one vision the “tragic” or “constrained” vision, which sees all humans as inherently limited in knowledge, wisdom, and virtue, and fundamentally self-interested even at their best. The other is the “utopian” or “unconstrained” vision, which sees human limitations as artifacts of social arrangements and cultures, and which holds that some people, by virtue of superior knowledge and morality, are capable of redesigning society to create a better world.
An implication of the constrained vision is that the difference in knowledge and virtue between the best and the worst in society is actually quite small. As a result, no one person or group of people can be trusted with redesigning institutions which have spontaneously evolved. The best we can hope for is institutions that reasonably deter bad conduct and allow people the freedom to solve their own problems.
An important implication of the unconstrained vision, on the other hand, is that some people, because of superior enlightenment (what Sowell calls the “Vision of the Anointed”), can redesign institutions to fundamentally change human nature, which is seen as malleable. Institutions are far more often seen as the result of deliberate human design and choice, and failures to change them to be more just or equal are attributed to immorality or lack of will.
The importance of visions to how we view things like justice and institutions makes all the difference. In the constrained view, institutions like language, culture, and even much of the law result from the “spontaneous ordering” that is the result of human action but not of human design. Limited government, markets, and tradition are all important in helping individuals coordinate action. Markets work because self-interested individuals benefit when they serve others. There are no solutions to difficult societal problems, including racism, only trade-offs.
But in the unconstrained view, limits on government power are seen as impediments to public-spirited experts creating a better society. Markets, traditions, and cultures are to be redesigned from the top down by those who are forward-looking, relying on their articulated reason. There is a belief that solutions could be imposed if only there is sufficient political will and the right people in charge. When it comes to an issue like racism, those who are sufficiently “woke” should be in charge of redesigning institutions to provide for a solution to things like systemic racism.
For Sowell, what he calls “traditional justice” is achieved by processes that hold people accountable for harms to others. Its focus is on flesh-and-blood human beings, not abstractions like all men or blacks versus whites. By this point of view, differences in outcomes are neither just nor unjust; what is important is that the processes themselves are just. These processes should focus on the institutional incentives of participants. Reforms should be careful not to upset important incentive structures which have evolved over time as the best way for limited human beings to coordinate behavior.
The “Quest for Cosmic Justice,” on the other hand, flows from the unconstrained vision. Cosmic justice sees disparities between abstract groups, like whites and blacks, as unjust and in need of correction. If results from impartial processes like markets or law result in disparities, those with an unconstrained vision often see those processes as themselves racist. The conclusion is that the law should intervene to create better outcomes. This presumes considerable knowledge and morality on behalf of those who are in charge of the interventions.
For Sowell, a large part of his research project has been showing that those with the unconstrained vision often harm those they are proclaiming the intention to help in their quest for cosmic justice.
A Constrained Vision of Racial Justice
Sowell has written quite a lot on race, culture, intellectuals, economics, and public policy. One of the main thrusts of his argument about race is that attempts at cosmic justice often harm living flesh-and-blood individuals in the name of intertemporal abstractions like “social justice” for black Americans. Sowell nowhere denies that racism is an important component of understanding the history of black Americans. But his constant challenge is that racism can’t be the only variable which explains disparities. Sowell points to the importance of culture and education in building human capital to be successful in market economies. Without taking those other variables into account, there is no way to determine the extent that racism is the cause of disparities.
This has important implications for achieving racial justice today. When it comes to policies pursued in the name of racial justice, Sowell has argued that many programs often harm not only members of disfavored groups, but the members of the favored groups.
For instance, Sowell has argued that affirmative action actually harms not only flesh-and-blood white and Asian-Americans who are passed over, but also harms those African-Americans who are “mismatched” in their educational endeavors and end up failing or dropping out of schools when they could have been much better served by attending schools where they would have been very successful. Another example Sowell often points to is minimum wage legislation, which is often justified in the name of helping the downtrodden, but has the effect of harming low-skilled workers by increasing unemployment, most especially young African-American males.
Any attempts at achieving racial justice, in terms of correcting historical injustices, must take into account how changes in processes could actually end up hurting flesh-and-blood human beings, especially when those harmed are black Americans.
A Positive Agenda for Policy Reform
In Sowell’s constrained vision, a large part of the equation for African-American improvement is going to be cultural change. However, white Americans should not think that this means they have no responsibility in working towards racial justice. A positive agenda must take into consideration real harms experienced by African-Americans due to government action (and inaction). Thus, traditional justice demands institutional reforms, and in some cases, recompense.
The policy part of this equation outlined below is motivated by traditional justice concerns that hold people accountable under the rule of law for violations of constitutional rights and promotes institutional reforms to more properly align incentives.
What follows below are policy proposals aimed at achieving a greater degree of racial justice for black Americans, but fundamentally informed by the constrained vision and the traditional justice concerns outlined by Sowell. Most of these proposals concern issues Sowell has not written much about. In fact, some he might not even support, but they are—in my opinion—consistent with the constrained vision and traditional justice.
Reparations for Historical Rights Violations
Sowell once wrote this in regards to reparations for black Americans:
Nevertheless, it remains painfully clear that those people who were torn from their homes in Africa in centuries past and forcibly brought across the Atlantic in chains suffered not only horribly, but unjustly. Were they and their captors still alive, the reparations and retribution owed would be staggering. Time and death, however, cheat us of such opportunities for justice, however galling that may be. We can, of course, create new injustices among our flesh-and-blood contemporaries for the sake of symbolic expiation, so that the son or daughter of a black doctor or executive can get into an elite college ahead of the son or daughter of a white factory worker or farmer, but only believers in the vision of cosmic justice are likely to take moral solace from that. We can only make our choices among alternatives actually available, and rectifying the past is not one of those options.
In other words, if the victims and perpetrators of injustice are no longer alive, it is not just to hold all members of their respective races accountable for crimes they did not commit. However, this would presumably leave open the possibility of applying traditional justice concepts in those cases where death has not cheated us.
For instance, there are still black Americans alive who suffered from Jim Crow, as well as children and family members of those lynched. While it is too little, too late, it seems consistent with traditional justice to still seek out and prosecute criminally perpetrators who committed heinous acts but a few generations ago against still living victims. This is not unprecedented. Old Nazis are still prosecuted for crimes against Jews. A similar thing could be done in the United States.
Similarly, civil rights lawsuits for the damages caused by Jim Crow could be another way to recompense those who were harmed. Alternatively, it could be done by legislation. The Civil Liberties Act of 1988 was passed under President Reagan and gave living Japanese Americans who were interned during World War II some limited reparations. A similar system could be set up for living victims of Jim Crow.
Statutes of limitations may need to be changed to facilitate these criminal prosecutions and civil rights lawsuits, but it is quite clearly consistent with the idea of holding flesh-and-blood persons accountable for their unlawful actions.
Holding flesh-and-blood perpetrators accountable for rights violations should not be confused with the cosmic justice idea—that Sowell consistently decries—that says intertemporal abstractions can be held accountable for crimes. In other words, this is not holding “whites” accountable for all historical injustices to “blacks.” This is specifically giving redress to victims and deterring future bad conduct.
End Qualified Immunity
Another way to promote racial justice consistent with the constrained vision is to end one of the Warren Court’s egregious examples of judicial activism: qualified immunity. Qualified immunity is nowhere mentioned in the statute for civil rights, 42 USC § 1983. As Sowell argues in his writings, judges in the constrained vision are supposed to declare what the law is, not what they believe it should be, unlike those in the unconstrained vision who—according to Sowell—believe they have the right to amend the laws through judicial edict. The introduction of qualified immunity into the law by the activist Warren Court should be overturned.
In a civil rights lawsuit, the goal is to make the victim (or their families) of a rights violation whole by monetary damages. From a legal perspective, this is necessary to give the victim justice. From an economic perspective this is necessary to deter future bad conduct and properly align ex ante incentives going forward. Under a well-functioning system, juries would, after hearing all the evidence, make a decision about whether constitutional rights were violated and the extent of damages. A functioning system of settlements would result as a common law develops determining what counts as reasonable or unreasonable uses of force. This doesn’t mean plaintiffs always win, either. Officers may be determined to be acting reasonably under the circumstances once all the evidence is presented to a jury.
However, one of the greatest obstacles to holding police officers accountable in misconduct cases is the doctrine of qualified immunity… courts have widely expanded its scope to the point that qualified immunity is now protecting officers even when their conduct violates the law, as long as the officers weren’t on clear notice from specific judicial precedent that what they did was illegal when they did it… This standard has predictably led to a situation where officer misconduct which judges and juries would likely find egregious never makes it to court. The Cato Institute’s website Unlawful Shield details many cases where federal courts found an officer’s conduct was illegal yet nonetheless protected by qualified immunity.
Immunity of this nature has profound consequences on the incentive structure facing police officers. Police officers, as well as the departments that employ them, are insufficiently accountable when gross misconduct does not get past a motion to dismiss for qualified immunity… The result is to encourage police officers to take insufficient care when making the choice about the level of force to use.
Those with a constrained vision focus on processes and incentives. In this case, it is police officers who have insufficient incentives to take reasonable care when they receive qualified immunity for their conduct.
End the Drug War
While not something he has written a lot on, Sowell has argued for the decriminalization of drugs, comparing the War on Drugs to the earlier attempts at Prohibition of alcohol. This is consistent with the constrained vision, which cares about the institutional incentives created by law.
Interestingly, work by Michelle Alexander in the second chapter of The New Jim Crow is largely consistent with Sowell’s point of view. There she argued the institutional incentives of police departments were systematically changed when the drug war was ramped up.
Alexander asks a question which is right in line with the constrained vision:
[I]t is fair to wonder why the police would choose to arrest such an astonishing percentage of the American public for minor drug crimes. The fact that police are legally allowed to engage in a wholesale roundup of nonviolent drug offenders does not answer the question why they would choose to do so, particularly when most police departments have far more serious crimes to prevent and solve. Why would police prioritize drug-law enforcement? Drug use and abuse is nothing new; in fact, it was on the decline, not on the rise, when the War on Drugs began.
Alexander locates the impetus for ramping up the drug war in federal subsidies:
In 1988, at the behest of the Reagan administration, Congress revised the program that provides federal aid to law enforcement, renaming it the Edward Byrne Memorial State and Local Law Enforcement Assistance Program after a New York City police officer who was shot to death while guarding the home of a drug-case witness. The Byrne program was designed to encourage every federal grant recipient to help fight the War on Drugs. Millions of dollars in federal aid have been offered to state and local law enforcement agencies willing to wage the war. By the late 1990s, the overwhelming majority of state and local police forces in the country had availed themselves of the newly available resources and added a significant military component to buttress their drug-war operations.
On top of that, police departments were benefited by civil asset forfeiture:
As if the free military equipment, training, and cash grants were not enough, the Reagan administration provided law enforcement with yet another financial incentive to devote extraordinary resources to drug law enforcement, rather than more serious crimes: state and local law enforcement agencies were granted the authority to keep, for their own use, the vast majority of cash and assets they seize when waging the drug war. This dramatic change in policy gave state and local police an enormous stake in the War on Drugs—not in its success, but in its perpetual existence. Suddenly, police departments were capable of increasing the size of their budgets, quite substantially, simply by taking the cash, cars, and homes of people suspected of drug use or sales. Because those who were targeted were typically poor or of moderate means, they often lacked the resources to hire an attorney or pay the considerable court costs. As a result, most people who had their cash or property seized did not challenge the government’s action, especially because the government could retaliate by filing criminal charges—baseless or not.
As Alexander notes, black Americans (and other minorities) were largely targeted in this ramped up War on Drugs, noting the drug war’s effects have been to disproportionately imprison black Americans even though drug usage and sales are relatively similar across races. Police officers have incredible discretion in determining who to investigate and bring charges against. When it comes to the drug war, this discretion is magnified because the activity is largely consensual, meaning officers can’t rely on victims to come to them to start an investigation. Alexander finds the reason the criminal justice system has targeted black Americans is because of implicit bias in police officers, prosecutors, and judges, which mirrors the bias shown in media coverage and in larger white American society.
Anyone inspired by Sowell would need to determine whether this is because of racism or some other variable. It is important to note here that Sowell never denies that racism exists or is a real problem in American society. But he does challenge us to determine whether this alone is the cause of disparities. Here, Alexander makes a strong case that it is implicit racism that causes the disparities in enforcement of the War on Drugs. A race-neutral explanation could be as follows, even though it still suggests ending the War on Drugs: the enforcement costs against those unable to afford to challenge the system are lower. And black Americans are disproportionately represented among the poor in this country. As will be discussed below in the section on reforming indigent criminal defense, most prosecutions are initiated against defendants who can’t afford a lawyer. The result could be racially disparate even without a racist motivation.
Regardless of whether racism is the variable that explains the disparate impact of the War on Drugs, it should be ended. This may be an area where traditional and cosmic justice concerns can be united in an effort to reform the criminal justice system.
Reform Indigent Criminal Defense
A related aspect of how the criminal justice system has created a real barrier for far too many black Americans is the often poor quality of indigent criminal defense. Indigent defense is a large part of criminal defense in this country, since roughly 80% of criminal prosecutions are initiated against defendants too poor to afford a lawyer. Since black Americans are disproportionately represented among the indigent and those in the criminal justice system, it should be no surprise that black Americans are disproportionately represented by public defenders in this country.
According to the constrained vision, it is important to look at the institutional incentives of public defenders. Considering the extremely high societal costs of false convictions, it is important to get these incentives right.
David Friedman and Stephen Schulhofer's seminal article exploring the law & economics of indigent criminal defense highlighted the conflict of interest inherent in the government choosing who represents criminal defendants when the government is also in charge of prosecuting them. They analyzed each of the models used in the United States for indigent defense from an economic point of view and found each wanting. On top of that, there is a calculation problem inherent in government-run public defenders' offices, whereby defendants may be systematically deprived of viable defense strategies because of a lack of price signals.
An interesting alternative proposed by Friedman and Schulhofer is a voucher system, similar to the voucher system Sowell has often touted for education. The idea is that indigent criminal defendants pick the lawyer of their choice from among those participating in the voucher program. The government would subsidize the provision of indigent defense in this model, but would not actually pick the lawyer or run the public defender organization. Incentives would be more closely aligned between defendant and counsel.
Much more could be said, consistent with the constrained vision, about policies that could help flesh-and-blood black Americans, including abolishing occupational licensing, ending wage controls, promoting school choice, and ending counterproductive welfare policies, but this is enough for now. Racial justice demands holding rights violators accountable and making victims whole. Racial justice also means reforming institutions to make sure incentives are right to deter conduct that harms black Americans. However, the growing desire to do something to promote racial justice in this country should not fall into the trap of cosmic justice thinking, which often ends up hurting flesh-and-blood people of all races in the present in the name of intertemporal abstractions.
Happy 90th birthday to one of the greatest law & economics scholars ever, Dr. Thomas Sowell.
[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.
This post is authored by Eric Fruits (Chief Economist, International Center for Law & Economics).]
Earlier this week, merger talks between Uber and food delivery service Grubhub surfaced. House Antitrust Subcommittee Chairman David N. Cicilline quickly reacted to the news:
Americans are struggling to put food on the table, and locally owned businesses are doing everything possible to keep serving people in our communities, even under great duress. Uber is a notoriously predatory company that has long denied its drivers a living wage. Its attempt to acquire Grubhub—which has a history of exploiting local restaurants through deceptive tactics and extortionate fees—marks a new low in pandemic profiteering. We cannot allow these corporations to monopolize food delivery, especially amid a crisis that is rendering American families and local restaurants more dependent than ever on these very services. This deal underscores the urgency for a merger moratorium, which I and several of my colleagues have been urging our caucus to support.
Pandemic profiteering rolls nicely off the tongue, and we’re sure to see that phrase much more over the next year or so.
Grubhub shares jumped 29% on Tuesday, the day the merger talks came to light, as shown in the figure below. The Wall Street Journal reports the companies are considering a deal that would value Grubhub stock at around 1.9 Uber shares, or $60-$65 a share, based on Thursday's price.
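The implied per-share value is just arithmetic on the reported exchange ratio; a quick sketch, assuming (for illustration only, not a quoted figure) that Uber traded in the low-to-mid $30s that week:

```python
# Implied Grubhub per-share value at the reported ratio of 1.9 Uber shares.
# The Uber prices below are illustrative assumptions, not quoted figures.
exchange_ratio = 1.9
implied = [round(exchange_ratio * price, 2) for price in (32, 34)]
print(implied)  # [60.8, 64.6] -- consistent with the reported $60-$65 range
```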
But is that “pandemic profiteering?”
After Amazon announced its intended acquisition of Whole Foods, the grocer’s stock price soared by 27%. Rep. Cicilline voiced some convoluted concerns about that merger, but said nothing about profiteering at the time. Different times, different messaging.
Rep. Cicilline and others have been calling for a merger moratorium during the pandemic and have used the Uber/Grubhub announcement as Exhibit A in their indictment of merger activity.
A moratorium would make things much easier for regulators. No more fighting over relevant markets, no HHI calculations, no experts debating SSNIPs or GUPPIs, no worries over consumer welfare, no failing-firm defenses. Just a clear, bright-line "NO!"
Even before the pandemic, it was well known that the food delivery industry was due for a shakeout. NPR reports that, even as the business grows, none of the top food-delivery apps is turning a profit, with one analyst concluding consolidation was "inevitable." Thus, even if a moratorium slowed or stopped the Uber/Grubhub merger, at some point a merger in the industry will happen, and the U.S. antitrust authorities will have to evaluate it.
First, we have to ask, "What's the relevant market?" The government has a history of defining relevant markets so narrowly that just about any merger can be challenged. For example, in its challenge to the Whole Foods/Wild Oats merger, the FTC famously narrowed the market to "premium natural and organic supermarkets." Surely, similar mental gymnastics will be used for any merger involving food delivery services.
While food delivery has grown in popularity over the past few years, delivery represents less than 10% of U.S. food service sales. While Rep. Cicilline may be correct that families and local restaurants are "more dependent than ever" on food delivery, delivery is only a small fraction of a large market. Even a monopoly in food delivery service would not confer market power in the broader restaurant and food service industry.
No reasonable person would claim an Uber/Grubhub merger would increase market power in the restaurant and food service industry. But it might confer market power in the food delivery market. Much attention is paid to the "Big Four": DoorDash, Grubhub, Uber Eats, and Postmates. But these platform delivery services are part of the larger food service delivery market, of which platforms account for about half of the industry's revenues. Pizza accounts for the largest share of restaurant-to-consumer delivery.
This raises the big question of what is the relevant market: Is it the entire food delivery sector, or just the platform-to-consumer sector?
Based on the information in the figure below, defining the market narrowly would place an Uber/Grubhub merger squarely in the “presumed to be likely to enhance market power” category.
2016 HHI: <3,175
2018 HHI: <1,474
2020 HHI: <2,249 pre-merger; <4,153 post-merger
Alternatively, defining the market to encompass all food delivery would cut the platforms’ shares roughly in half and the merger would be unlikely to harm competition, based on HHI. Choosing the relevant market is, well, relevant.
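The HHI mechanics behind these comparisons can be sketched directly. The shares below are illustrative placeholders, not the Second Measure figures behind the post's numbers:

```python
def hhi(shares):
    """Herfindahl-Hirschman Index: the sum of squared market shares (in percent)."""
    return sum(s ** 2 for s in shares)

# Illustrative shares for a four-firm platform market (placeholders only).
pre_merger = {"DoorDash": 35, "Uber Eats": 30, "Grubhub": 25, "Postmates": 10}
# A merger combines the merging firms' shares into one.
post_merger = {"DoorDash": 35, "Uber Eats/Grubhub": 30 + 25, "Postmates": 10}

pre = hhi(pre_merger.values())    # 2,850
post = hhi(post_merger.values())  # 4,350

# The merger's HHI increase is exactly 2 * s1 * s2 for the merging firms' shares.
print(pre, post, post - pre)      # 2850 4350 1500
```

Halving every share (as a broader market definition roughly would) cuts the HHI by a factor of four, which is why the market-definition question does so much work. Under the 2010 Horizontal Merger Guidelines, a post-merger HHI above 2,500 combined with an increase of more than 200 points is presumed likely to enhance market power.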
The Second Measure data suggest that concentration in the platform delivery sector decreased with the entry of Uber Eats but subsequently increased with DoorDash's rising share, which included the acquisition of Caviar from Square.
(NB: There seems to be a significant mismatch in the delivery revenue data. Statista reports platform delivery revenues increased by about 40% from 2018 to 2020, but Second Measure indicates revenues have more than doubled.)
Geoffrey Manne, in an earlier post, points out that "while national concentration does appear to be increasing in some sectors of the economy, it's not actually so clear that the same is true for local concentration — which is often the relevant antitrust market." That may be the case here.
The figure below is a sample of platform delivery shares by city. I added data from an earlier study of 2017 shares. In all but two metro areas, Uber and Grubhub’s combined market share declined from 2017 to 2020. In Boston, the combined shares did not change and in Los Angeles, the combined shares increased by 1%.
(NB: There are some serious problems with this data, notably that it leaves out the restaurant-to-consumer sector and assumes the entire platform-to-consumer sector is comprised of only the “Big Four.”)
Platform-to-consumer delivery is a complex two-sided market in which the platforms link, and compete for, both restaurants and consumers. Platforms compete for restaurants, drivers, and consumers. Restaurants have a choice of using multiple platforms or entering into exclusive arrangements. Many drivers work for multiple platforms, and many consumers use multiple platforms.
Fundamentally, the rise of platform-to-consumer delivery is an evolution in vertical integration. Restaurants can choose to offer no delivery, use their own in-house delivery drivers, or use a third-party delivery service. Every platform faces competition from in-house delivery, placing a limit on its ability to raise prices to restaurants and consumers.
The choice of delivery is not an either-or decision. For example, many pizza restaurants that have their own delivery drivers also use a platform delivery service. Their own drivers may serve a limited geographic area, but the platforms allow the restaurant to expand its geographic reach, thereby increasing its sales. Even so, the platforms face competition from in-house delivery.
Mergers or other forms of shake out in the food delivery industry are inevitable. Mergers will raise important questions about relevant product and geographic markets as well as competition in two-sided markets. While there is a real risk of harm to restaurants, drivers, and consumers, there is also a real possibility of welfare enhancing efficiencies. These questions will never be addressed with an across-the-board merger moratorium.
In the wake of the launch of Facebook's content oversight board, Republican Senator Josh Hawley and FCC Commissioner Brendan Carr, among others, have taken to Twitter to level criticisms at the firm and, in the process, demonstrate just how far the Right has strayed from its first principles on free speech and private property. For his part, Commissioner Carr's thread makes the case that the members of the board are highly partisan and mostly left-wing, and can't be trusted with the responsibility of oversight. Senator Hawley, meanwhile, took the position that the Board's very existence is just further evidence of the need to break Facebook up.
Both Hawley and Carr have been lauded in right-wing circles, but in reality their positions contradict conservative understandings of the free speech and private property protections embodied in the First Amendment.
I have noted in several places before that there is a conflict of visions when it comes to whether the First Amendment protects a negative or positive conception of free speech. For those unfamiliar with the distinction: it comes from philosopher Isaiah Berlin, who identified negative liberty as freedom from external interference, and positive liberty as freedom to do something, including having the power and resources necessary to do that thing. Discussions of the First Amendment's protection of free speech often elide this distinction.
With respect to speech, the negative conception of liberty recognizes that individual property owners can control what is said on their property, for example. To force property owners to allow speakers/speech on their property that they don’t desire would actually be a violation of their liberty — what the Supreme Court calls “compelled speech.” The First Amendment, consistent with this view, generally protects speech from government interference (with very few, narrow exceptions), while allowing private regulation of speech (again, with very few, narrow exceptions).
Commissioner Carr’s complaint and Senator Hawley’s antitrust approach of breaking up Facebook has much more in common with the views traditionally held by left-wing Democrats on the need for the government to regulate private actors in order to promote speech interests. Originalists and law & economics scholars, on the other hand, have consistently taken the opposite point of view that the First Amendment protects against government infringement of speech interests, including protecting the right to editorial discretion. While there is clearly a conflict of visions in First Amendment jurisprudence, the conservative (and, in my view, correct) point of view should not be jettisoned by Republicans to achieve short-term political gains.
The First Amendment restricts government action, not private action
The First Amendment, by its very text, only applies to government action: “Congress shall make no law . . . abridging the freedom of speech.” This applies to the “State[s]” through the Fourteenth Amendment. There is extreme difficulty in finding any textual hook to say the First Amendment protects against private action, like that of Facebook.
As the Supreme Court explained in Manhattan Community Access Corp. v. Halleck (2019):

Ratified in 1791, the First Amendment provides in relevant part that “Congress shall make no law . . . abridging the freedom of speech.” Ratified in 1868, the Fourteenth Amendment makes the First Amendment’s Free Speech Clause applicable against the States: “No State shall make or enforce any law which shall abridge the privileges or immunities of citizens of the United States; nor shall any State deprive any person of life, liberty, or property, without due process of law . . . .” §1. The text and original meaning of those Amendments, as well as this Court’s longstanding precedents, establish that the Free Speech Clause prohibits only governmental abridgment of speech. The Free Speech Clause does not prohibit private abridgment of speech… In accord with the text and structure of the Constitution, this Court’s state-action doctrine distinguishes the government from individuals and private entities. By enforcing that constitutional boundary between the governmental and the private, the state-action doctrine protects a robust sphere of individual liberty. (Emphasis added).
This was true at the adoption of the First Amendment and remains true today in a high-tech world. Federal district courts have consistently dismissed First Amendment lawsuits against Facebook on the grounds there is no state action.
For instance, in Nyabwa v. Facebook, the plaintiff initiated a civil rights lawsuit against Facebook for restricting his use of the platform. The U.S. District Court for the Southern District of Texas dismissed the case, noting:
Because the First Amendment governs only governmental restrictions on speech, Nyabwa has not stated a cause of action against FaceBook… Like his free speech claims, Nyabwa’s claims for violation of his right of association and violation of his due process rights are claims that may be vindicated against governmental actors pursuant to § 1983, but not a private entity such as FaceBook.
Similarly, in Young v. Facebook, the U.S. District Court for the Northern District of California rejected a claim that Facebook violated the First Amendment by deactivating the plaintiff’s Facebook page. The court declined to subject Facebook to the First Amendment analysis, stating that “because Young has not alleged any action under color of state law, she fails to state a claim under § 1983.”
The First Amendment restricts antitrust actions against Facebook, not Facebook’s editorial discretion over its platform
Far from restricting Facebook, the First Amendment actually restricts government actions aimed at platforms like Facebook when they engage in editorial discretion by moderating content. If an antitrust plaintiff were to act on the impulse to “break up” Facebook because of alleged political bias in its editorial discretion, the lawsuit would run headlong into the First Amendment’s protections.
There is no basis for concluding online platforms do not have editorial discretion under the law. In fact, the position of Facebook here is very similar to the newspaper in Miami Herald Publishing Co. v. Tornillo, in which the Supreme Court considered a state law giving candidates for public office a right to reply in newspapers to editorials written about them. The Florida Supreme Court upheld the statute, finding it furthered the “broad societal interest in the free flow of information to the public.” The U.S. Supreme Court, despite noting the level of concentration in the newspaper industry, nonetheless reversed. The Court explicitly found the newspaper had a First Amendment right to editorial discretion:
The choice of material to go into a newspaper, and the decisions made as to limitations on the size and content of the paper, and treatment of public issues and public officials — whether fair or unfair — constitute the exercise of editorial control and judgment. It has yet to be demonstrated how governmental regulation of this crucial process can be exercised consistent with First Amendment guarantees of a free press as they have evolved to this time.
Online platforms have the same First Amendment protections for editorial discretion. For instance, in both Search King v. Google and Langdon v. Google, two different federal district courts ruled Google’s search results are subject to First Amendment protections, both citing Tornillo.
In Zhang v. Baidu.com, another district court went so far as to grant a Chinese search engine the right to editorial discretion in limiting access to democracy movements in China. The court found that the search engine “inevitably make[s] editorial judgments about what information (or kinds of information) to include in the results and how and where to display that information.” Much like the search engine in Zhang, Facebook is clearly making editorial judgments about what information shows up in the newsfeed and where to display it.
None of this changes because the generally applicable law is antitrust rather than some other form of regulation. For instance, in Tornillo, the Supreme Court took pains to distinguish the case from an earlier antitrust case against newspapers, Associated Press v. United States, which found that there was no broad exemption from antitrust under the First Amendment.
The Court foresaw the problems relating to government-enforced access as early as its decision in Associated Press v. United States, supra. There it carefully contrasted the private “compulsion to print” called for by the Association’s bylaws with the provisions of the District Court decree against appellants which “does not compel AP or its members to permit publication of anything which their `reason’ tells them should not be published.”
In other words, Tornillo and Associated Press establish that the government may not compel speech through regulation, including an antitrust remedy.
Once it is conceded that there is a speech interest here, the government must justify the use of antitrust law to compel Facebook to display the speech of users in the newsfeeds of others under the strict scrutiny test of the First Amendment. In other words, the use of antitrust law must be narrowly tailored to a compelling government interest. Even taking for granted that there may be a compelling government interest in facilitating a free and open platform (which is by no means certain), it is clear that this would not be narrowly tailored action.
First, “breaking up” Facebook is clearly overbroad as compared to the goal of promoting free speech on the platform. There is no need to break the company up just because it has an Oversight Board that exercises editorial responsibilities. There are many less restrictive means, including market competition, which has greatly expanded consumer choice for communications and connections. Second, antitrust does not even really have a remedy for the free speech issues complained of here, as one would require courts to engage in long-term oversight and to compel speech in a way foreclosed by Associated Press.
Note that this makes good sense from a law & economics perspective. Platforms like Facebook should be free to regulate the speech on their platforms as they see fit, and consumers are free to decide which platforms they wish to use based upon that information. While there are certainly network effects in social media, the plethora of options currently available, with low switching costs, suggests there is no basis for antitrust action against Facebook on the theory that consumers are unable to speak. In other words, the least-restrictive-means test of the First Amendment is best satisfied by market competition in this case.
If there were a basis for antitrust intervention against Facebook, either through merger review or as a standalone monopoly claim, the underlying issue would be harm to competition. While this would have implications for speech concerns (which may be incorporated into an analysis through quality-adjusted price), it is inconceivable how an antitrust remedy could be formed on speech issues consistent with the First Amendment.
Despite now well-worn complaints by so-called conservatives in and out of the government about the baneful influence of Facebook and other Big Tech companies, the First Amendment forecloses government actions to violate the editorial discretion of these companies. Even if Commissioner Carr is right, this latest call for antitrust enforcement against Facebook by Senator Hawley should be rejected for principled conservative reasons.
[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.
This post is authored by Eline Chivot (Senior Policy Analyst, Center for Data Innovation, Information Technology and Innovation Foundation).]
As the COVID-19 outbreak led to the shutdown of many stores, e-commerce and brick-and-mortar shops have been stepping up efforts to facilitate online deliveries while ensuring their workers’ safety. Without online retail, lockdown conditions would have been less tolerable and confinement measures less sustainable. Yet a recent ruling by a French court against Amazon seems to justify making life more difficult for some of these businesses, and more inconvenient for people, by limiting consumer choice. In a context that calls for as much support for economic activity and consumer welfare as possible, that makes little sense. In fact, the court’s decision is symptomatic of how countries use industrial policy to treat certain companies with double standards.
On April 24, Amazon lost its appeal of a French court order requiring the platform to stop delivering “non-essential items” until it evaluates workers’ risk of coronavirus exposure in its six French warehouses. The online retailer is now facing penalties of about 100,000 euros (about $110,000) per delivery, and was given 48 hours to reduce its warehouse activities and operations.
But the complexity of logistics makes it difficult to limit deliveries to just “essential items.” Given the novelty of the situation, there were no official, precise, pre-determined lists in place, no clarity about who gets to decide, and no common understanding of what customers would consider essential goods or services. As a result, Amazon temporarily closed its six French distribution centers and is now shipping to its French customers from its warehouses in other European countries. If France wants to apply such a measure for worker safety in this time of crisis, that is clearly its right. But the requirement should apply to all online retailers equally, not just to the American company Amazon.
The court’s decision was made on the grounds that Amazon had not implemented sufficient safety measures for its workers. The turnaround last week by the trade unions (who had initiated the complaints against Amazon and called for the shutdown of its facilities), and their proposal to “gradually” resume operations, speaks volumes. Like many other companies, Amazon had invested in additional safety measures for its employees during the crisis: it distributed masks and gloves to its workers, took their temperatures before shifts, built testing capacity, and proactively decided to prioritize the delivery of essential goods. Like many other companies, Amazon had to cope rapidly with unprecedented circumstances it wasn’t prepared to handle, while juggling a surge in online orders during lockdowns and making do with some governments’ unclear guidance regarding safety measures.
But France has long prioritized worker welfare over broad economic welfare—which includes worker welfare, but also consumer welfare and economic growth. Yet, in this case, that prioritization seems to only apply to Amazon. French retailers like Fnac, Cdiscount, Spartoo, and La Redoute did not face the same degree of judicial scrutiny despite similar complaints about distribution centers. Nor did they have to restrict their deliveries to “essential goods.” But in France, it seems, what is good for French geese isn’t good for U.S. ganders. In fact, the real issue appears to be the French application of industrial policy. According to a union representative of Fnac, this is about “preventing Amazon from gaining market share over French retailers during lockdown,” so that the latter can reap the benefits. Using the crisis as an excuse to restructure the French retail sector is certainly one creative application of industrial policy.
Moreover, by applying these restrictions (either just to Amazon or across all retailers who engage in e-commerce), the French government is deepening the economic crisis. The restrictions it has imposed on Amazon are likely to accentuate the losses many French small- and medium-sized companies are already facing because of the COVID-19 crisis, while also having longer-term negative consequences for its logistics network in France. Many such firms rely on Amazon’s platform to sell, ship, and develop their business, and now have to turn to more expensive delivery services. In addition, the reduction in activity by its distribution centers could force Amazon to furlough many of its 9,300 French workers.
Finally, the French court’s decision is an inconvenience to the 22.2 million people in France who order via Amazon, depend on efficient home deliveries to cope with strict confinement measures, and are now being told what is essential or not. With Amazon relying on other European warehouses for deliveries and being forced to limit them to items such as IT products, health and nutrition items, food, and pet food, consumers will be faced with delayed deliveries and reduced access to product variety. The court’s decision also hurts many French merchants who use Amazon for warehousing and fulfillment, as they are effectively locked out of accessing their stock.
Non-discrimination is, or at least should be, a core principle of rule-of-law nations. It appears that, at least in this case, France does not think it should apply to non-French firms.
[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.
This post is authored by Eric Fruits (Chief Economist, International Center for Law & Economics).]
In an earlier TOTM post, we argued that, as the economy emerges from the COVID-19 crisis, perhaps the best policy would be to allow properly motivated firms and households to balance for themselves the benefits, costs, and risks of transitioning to “business as usual.”
Sometimes, however, well-meaning government policies disrupt that balance and realign motivations.
Our post contrasted firms who determined they could remain open by undertaking mitigation efforts with those who determined they could not safely remain open. One of these latter firms was Portland-based ChefStable, which operates more than 20 restaurants and bars. Kurt Huffman, the owner of ChefStable, shut down all the company’s properties one day before the Oregon governor issued her “Stay home, stay safe” order.
An unintended consequence
In a recent Wall Street Journal op-ed, Mr. Huffman reports his business was able to shift to carryout and delivery, which ended up being more successful than anticipated. So successful, in fact, that he needed to bring back some of the laid-off employees. That’s when he ran into one of the stimulus package’s unintended (but not unanticipated) consequences: federal-level payments on top of existing state-level unemployment benefits:
We started making the calls last week, just as our furloughed employees began receiving weekly Federal Pandemic Unemployment Compensation checks of $600 under the Cares Act. When we asked our employees to come back, almost all said, “No thanks.” If they return to work, they’ll have to take a pay cut.
But as of this week, that same employee receives $1,016 a week, or $376 more than he made as a full time employee. Why on earth would he want to come back to work?
Mr. Huffman’s not alone. NPR reports on a Kentucky coffee shop owner who faces the same difficulty keeping her employees at work:
“The very people we hired have now asked us to be laid off,” Marietta wrote in a blog post. “Not because they did not like their jobs or because they did not want to work, but because it would cost them literally hundreds of dollars per week to be employed.”
With the federal government now offering $600 a week on top of the state’s unemployment benefits, she recognized her former employees could make more money staying home than they did on the job.
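The arithmetic driving both stories can be backed out directly from the figures Mr. Huffman cites:

```python
# Backing out the weekly figures implied by Mr. Huffman's op-ed numbers.
federal_premium = 600   # weekly Federal Pandemic Unemployment Compensation
total_benefit = 1016    # total weekly benefit cited for the employee
pay_gap = 376           # amount by which benefits exceed his full-time pay

state_benefit = total_benefit - federal_premium  # implied state UI benefit
weekly_wage = total_benefit - pay_gap            # implied full-time weekly pay

print(state_benefit, weekly_wage)  # 416 640
```

On these numbers, any worker whose state benefit exceeds his wage minus $600 a week takes a pay cut by returning to work.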
Or, a fully intended consequence
The NPR piece indicates the Trump administration opted for the relatively straightforward (if not simplistic) unemployment payments as a way to get the money to unemployed workers as quickly as possible.
On the other hand, maybe the unemployment premium was not an unintended consequence. Perhaps, there was some intention.
If the purpose of the stay-at-home orders is to “flatten the curve” and slow the spread of the coronavirus, then it can be argued the purpose of the stimulus spending is to mitigate some of the economic costs.
If this is the case, it can also be argued that the unemployment premium paid by the federal government was designed to encourage people to stay at home and delay returning to work. In fact, it may be more effective than a raft of loophole-laden employment regulations that would require an army of enforcers.
Mr. Huffman seems confident his employees will be ready to return to work in August, when the premium runs out. John Cochrane, however, is not so confident, writing on his blog, “Hint to Mr. Huffman: I would not bet too much that this deadline is not extended.”
With the administration’s state-by-state phased re-opening of the economy, the unemployment premium payments could be tweaked so only residents in states in Phase 1 or 2 would be eligible to receive the premium payments.
Of course, this tweak would unleash its own unintended consequences. In particular, it would encourage some states to slow-walk the re-opening of their economies as a way to extract more federal money for their residents. My wild guess: the slow-walking states will be the same states that have been most affected by the state and local tax deductibility provisions in the Tax Cuts and Jobs Act.
As with all government policies, the unemployment provisions in the COVID-19 stimulus raise the age-old question: if a policy generates unintended consequences that are not unanticipated, can those consequences really be unintended?