Archives For Friedrich Hayek

One of the biggest names in economics, Daron Acemoglu, recently joined the mess that is Twitter. He wasted no time in throwing out big ideas for discussion and immediately getting tons of, let us say, spirited replies. 

One of Acemoglu’s threads involved a discussion of F.A. Hayek’s famous essay “The Use of Knowledge in Society,” wherein Hayek questions central planners’ ability to acquire and utilize the knowledge dispersed throughout society. Echoing many other commentators, Acemoglu asks: can supercomputers and artificial intelligence get around Hayek’s concerns?

Coming back to Hayek’s argument, there was another aspect of it that has always bothered me. What if computational power of central planners improved tremendously? Would Hayek then be happy with central planning?

While there are a few different layers to Hayek’s argument, at least one key aspect does not rest at all on computational power. Hayek argues that markets do not require users to have much information in order to make their decisions. 

To use Hayek’s example, when the price of tin increases: “All that the users of tin need to know is that some of the tin they used to consume is now more profitably employed elsewhere.” Knowing whether demand or supply shifted to cause the price increase would be redundant information for the tin user; the price provides all the information about market conditions that the user needs. 

To Hayek, this informational role of prices is what makes markets unique (compared to central planning):

The most significant fact about this [market] system is the economy of knowledge with which it operates, or how little the individual participants need to know in order to take the right action.

Good computers, bad computers—it doesn’t matter. Markets just require less information from their individual participants. This was made precise in the 1970s and 1980s in a series of papers on the “informational efficiency” of competitive markets.

This post will explain what the formal results say. From there, we can go back to debating their relevance to Acemoglu’s argument and the future of central planning with AI.

From Hayek to Hurwicz

First, let’s run through an oversimplified history of economic thought. Hayek developed his argument about information and markets during the socialist-calculation debate between Hayek and Ludwig von Mises on one side and Oskar Lange and Abba Lerner on the other. Lange and Lerner argued that a planned socialist economy could replicate a market economy. Mises and Hayek argued that it could not, because the socialist planner would not have the relevant information.

In response to the socialist-calculation debate, Leonid Hurwicz—who studied with Hayek at the London School of Economics, overlapped with Mises in Geneva, and would ultimately be awarded the Nobel Memorial Prize in 2007—developed the formal language in the 1960s and 1970s that became what we now call “mechanism design.”

Specifically, Hurwicz developed an abstract way to measure how much information a system needed. What does it mean for a system to require little information? What is the “efficient” (i.e., minimal) amount of information? Two later papers (Mount and Reiter (1974) and Jordan (1982)) used Hurwicz’s framework to prove that competitive markets are informationally efficient.

Understanding the Meaning of Informational Efficiency

How much information do people need to achieve a competitive outcome? This is where Hurwicz’s theory comes in. He gave us a formal way to discuss more and less information: the size of the message space. 

To understand the message space’s size, consider an economy with six people: three buyers and three sellers. Buyers of type B3 are willing to pay $3, type B2 is willing to pay $2, and type B1 is willing to pay $1. Sellers of type S0 are willing to sell for $0, S1 for $1, and S2 for $2. Each buyer knows their valuation for the good, and each seller knows their cost.

Here’s the weird exercise. Along comes an oracle who knows everything. The oracle decides to figure out a competitive price that will clear the market, so he draws out the supply curve (in orange) and the demand curve (in blue), and picks an equilibrium point where they cross (in red).

So the oracle knows a price of $1.50 and a quantity of 2 is an equilibrium.
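To make the oracle’s calculation concrete, here is a minimal Python sketch using the valuations above (buyers willing to pay $3, $2, and $1; sellers with costs of $0, $1, and $2). The function name and the midpoint pricing rule are illustrative choices of mine, not anything prescribed by the formal literature.

```python
# Minimal sketch: compute a market-clearing price and quantity from the
# valuations assumed above. Names and the midpoint price rule are illustrative.

buyer_values = [3.0, 2.0, 1.0]   # types B3, B2, B1
seller_costs = [0.0, 1.0, 2.0]   # types S0, S1, S2

def competitive_equilibrium(values, costs):
    """Return a market-clearing (price, quantity) pair.

    Quantity: the largest q such that the q highest-value buyers are each
    willing to pay at least what the q lowest-cost sellers require.
    Price: here, the midpoint between the marginal buyer's value and the
    marginal seller's cost (any price in that range clears the market).
    """
    values = sorted(values, reverse=True)
    costs = sorted(costs)
    q = 0
    while q < min(len(values), len(costs)) and values[q] >= costs[q]:
        q += 1
    if q == 0:
        return None, 0  # no mutually beneficial trades
    price = (values[q - 1] + costs[q - 1]) / 2
    return price, q

print(competitive_equilibrium(buyer_values, seller_costs))  # (1.5, 2)
```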

Now, we, the ignorant outsiders, come along and want to verify that the oracle is telling the truth and knows that it is an equilibrium. But we shouldn’t take the oracle’s word for it.

How can the oracle convince us that this is an equilibrium? We don’t know anyone’s valuation.

The oracle puts forward a game to the six players. The oracle says:

  • The price is $1.50, meaning that if you buy 1, you pay $1.50; if you sell 1, you receive $1.50.
  • If you say you’re B3 (which means you value the good at $3), you must buy 1.
  • If you say you’re B2, you must buy 1.
  • If you say you’re B1, you must buy 0.
  • If you say you’re S0, you must sell 1.
  • If you say you’re S1, you must sell 1.
  • If you say you’re S2, you must sell 0.

The oracle then asks everyone: do you accept the terms of this mechanism? Everyone says yes, because only the buyers who value the good at more than $1.50 buy and only the sellers with a cost below $1.50 sell. Because everyone agrees, we (the ignorant outsiders) can verify that the oracle did, in fact, know people’s valuations.
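Here is the outsiders’ side of the exercise as a small sketch: given only the announced price and trades, check that every type would accept the terms and that the market clears. The type labels follow the list above; the code is my illustration, not anything drawn from Mount and Reiter or Jordan.

```python
# Minimal sketch: verify the oracle's proposed mechanism from the outside.
# Each entry maps a type to (private valuation or cost, units bought (+) or sold (-)).

price = 1.50
proposal = {
    "B3": (3.0, +1),
    "B2": (2.0, +1),
    "B1": (1.0,  0),
    "S0": (0.0, -1),
    "S1": (1.0, -1),
    "S2": (2.0,  0),
}

def everyone_accepts(price, proposal):
    """No buyer is asked to pay above their value; no seller is asked to sell below cost."""
    for valuation, trade in proposal.values():
        if trade > 0 and valuation < price:
            return False
        if trade < 0 and valuation > price:
            return False
    return True

def market_clears(proposal):
    """Units bought must equal units sold."""
    return sum(trade for _, trade in proposal.values()) == 0

print(everyone_accepts(price, proposal) and market_clears(proposal))  # True
```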

Now, let’s count how much information the oracle needed to communicate. He needed to send a message that included the price and the trades for each type. Technically, he didn’t need to say S2 sells zero, because it is implied by the fact that the quantity bought must equal the quantity sold. In total, he needed to send six messages.

The formal exercise amounts to counting each message that needs to be sent. With a formally specified way of measuring how much information is required in competitive markets, we can now ask whether this is a lot. 

If you don’t care about efficiency, you can always save on information: say nothing, have no one trade, and use a message space of size zero. Just do nothing.

But in the context of the socialist-calculation debate, the argument was over how much information was needed to achieve “good” outcomes. Lange and Lerner argued that market socialism could be efficient, not that it would result in zero trade, so efficiency is the welfare benchmark we are aiming for.

If you restrict your attention to efficient outcomes, Mount and Reiter (1974) showed that you cannot use less information than competitive markets do. In a later paper, Jordan (1982) showed that no other mechanism can even match the competitive mechanism in terms of information: it is the unique mechanism that achieves efficient outcomes with a message space of minimal dimension.

Acemoglu reads Hayek as saying “central planning wouldn’t work because it would be impossible to collect and compute the right allocation of resources.” But the Jordan and Mount & Reiter papers don’t claim that computation is impossible for central planners. Take whatever computational abilities exist, from the first computer to the newest AI—competitive markets always require the least information possible. Supercomputers or AI do not, and cannot, change that relative comparison. 

Beyond Computational Issues

In terms of information costs, the best a central planner could hope for is to mimic exactly the market mechanism. But then, of what use is the planner? She’s just one more actor who could divert the system toward her own interest. As Acemoglu points out, “if the planner could collect all of that information, she could do lots of bad things with it.” 

The incentive problem is a separate problem, which is why Hayek tried to focus solely on information. Think about building a road. There is a concern that markets will not provide roads because people would be unwilling to pay for them without being coerced through taxes. You cannot simply ask people how much they are willing to pay for the road and charge them that price; people will lie and say they do not care about roads. No amount of computing power fixes incentives. Here again, computing power is tangential to the question of markets versus planning.

There’s a lot buried in Hayek and all of those ideas are important and worth considering. They are just further complications with which we should grapple. A handful of theory papers will never solve all of our questions about the nature of markets and central planning. Instead, the formal papers tell us, in a very stylized setting, what it would even mean to quantify the “amount of information.” And once we quantify it, we have an explicit way to ask: do markets use minimal information?

For several decades, we have known that the answer is yes. In recent work, Rafael Guthmann and I show that informational efficiency can extend to big platforms coordinating buyers and sellers—what we call market-makers.

The bigger problem with Acemoglu’s suggestion that computational abilities can solve Hayek’s challenge is that Hayek wasn’t merely thinking about computation and the communication of information. Instead, Hayek was concerned about our ability to even articulate our desires. In the example above, the buyers know exactly how much they are willing to pay and sellers know exactly how much they are willing to sell for. But in the real world, people have tacit knowledge that they cannot communicate to third parties. This is especially true when we think about a dynamic world of innovation. How do you communicate to a central planner a new product? 

The real issue is that market dynamics require entrepreneurs who imagine new futures with new products like the iPhone. Major innovations can never be fully articulated and communicated to a central planner in advance. All of these readings of Hayek and the market’s ability to communicate information—from formal informational efficiency to tacit knowledge—are independent of computational capabilities.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on Antitrust’s Uncertain Future: Visions of Competition in the New Regulatory Landscape. Information on the authors and the entire series of posts is available here.]

Things are heating up in the antitrust world. There is considerable pressure to pass the American Innovation and Choice Online Act (AICOA) before the congressional recess in August—a short legislative window before members of Congress shift their focus almost entirely to campaigning for the mid-term elections. While it would not be impossible to advance the bill after the August recess, it would be a steep uphill climb.

But whether it passes or not, some of the damage from AICOA may already be done. The bill has moved the antitrust dialogue in a direction that will harm innovation and consumers. In this post, I will first explain AICOA’s fundamental flaws. Next, I discuss the negative impact that the legislation is likely to have if passed, even if courts and agencies do not aggressively enforce its provisions. Finally, I show how AICOA has already provided an intellectual victory for the approach articulated in the European Union (EU)’s Digital Markets Act (DMA). It has built momentum for a dystopian regulatory framework to break up and break into U.S. superstar firms designated as “gatekeepers,” at the expense of innovation and consumers.

The Unseen of AICOA

AICOA’s drafters argue that, once passed, it will deliver numerous economic benefits. Sen. Amy Klobuchar (D-Minn.)—the bill’s main sponsor—has stated that it will “ensure small businesses and entrepreneurs still have the opportunity to succeed in the digital marketplace. This bill will do just that while also providing consumers with the benefit of greater choice online.”

Section 3 of the bill would provide “business users” of the designated “covered platforms” with a wide range of entitlements. This includes preventing the covered platform from offering any services or products that a business user could provide (the so-called “self-preferencing” prohibition); allowing a business user access to the covered platform’s proprietary data; and an entitlement for business users to have “preferred placement” on a covered platform without having to use any of that platform’s services.

These entitlements would provide non-platform businesses what are effectively claims on the platform’s proprietary assets, notwithstanding the covered platform’s own investments to collect data, create services, and invent products—in short, the platform’s innovative efforts. As such, AICOA is redistributive legislation that creates the conditions for unfair competition in the name of “fair” and “open” competition. It treats the behavior of “covered platforms” differently than identical behavior by their competitors, without considering the deterrent effect such a framework will have on consumers and innovation. Thus, AICOA offers rent-seeking rivals a formidable avenue to reap considerable benefits at the expense of the innovators thanks to the weaponization of antitrust to subvert, not improve, competition.

In mandating that covered platforms make their data and proprietary assets freely available to “business users” and rivals, AICOA undermines the underpinning of free markets to pursue the misguided goal of “open markets.” The inevitable result will be the tragedy of the commons. Absent the covered platforms having the ability to benefit from their entrepreneurial endeavors, the law no longer encourages innovation. As Joseph Schumpeter seminally predicted: “perfect competition implies free entry into every industry … But perfectly free entry into a new field may make it impossible to enter it at all.”

To illustrate, if business users can freely access, say, a special status on the covered platforms’ ancillary services without having to use any of the covered platform’s services (as required under Section 3(a)(5)), then platforms are disincentivized from inventing zero-priced services, since they cannot cross-monetize these services with existing services. Similarly, if, under Section 3(a)(1) of the bill, business users can stop covered platforms from pre-installing or preferencing an app whenever they happen to offer a similar app, then covered platforms will be discouraged from investing in or creating new apps. Thus, the bill would generate a considerable deterrent effect for covered platforms to invest, invent, and innovate.

AICOA’s most detrimental consequences may not be immediately apparent; they could instead manifest in larger and broader downstream impacts that will be difficult to undo. As the 19th-century French economist Frédéric Bastiat wrote: “a law gives birth not only to an effect but to a series of effects. Of these effects, the first only is immediate; it manifests itself simultaneously with its cause—it is seen. The others unfold in succession—they are not seen: it is well for us if they are foreseen … it follows that the bad economist pursues a small present good, which will be followed by a great evil to come, while the true economist pursues a great good to come, at the risk of a small present evil.”

To paraphrase Bastiat, AICOA offers ill-intentioned rivals a “small present good”–i.e., unconditional access to the platforms’ proprietary assets–while society suffers the loss of a greater good–i.e., incentives to innovate and welfare gains to consumers. The logic is akin to that of those who advocate the abolition of intellectual-property rights: the immediate (and seen) gain, wider dissemination of innovation and a lower price for it, is obvious, while the subsequent (and unseen) evil remains opaque, as the destruction of the institutional premises for innovation will generate considerable long-term costs.

Fundamentally, AICOA weakens the benefits of scale by pursuing vertical disintegration of the covered platforms to the benefit of short-term static competition. In the long term, however, the bill would dampen dynamic competition, ultimately harming consumer welfare and the capacity for innovation. The measure’s opportunity costs will prevent covered platforms’ innovations from benefiting other business users or consumers. They personify the “unseen,” as Bastiat put it: “[they are] always in the shadow, and who, personifying what is not seen, [are] an essential element of the problem. [They make] us understand how absurd it is to see a profit in destruction.”

The costs could well amount to hundreds of billions of dollars for the U.S. economy, even before accounting for the costs of deterred innovation. The unseen is costly, the seen is cheap.

A New Robinson-Patman Act?

Most antitrust laws are terse, vague, and old: The Sherman Act of 1890, the Federal Trade Commission Act, and the Clayton Act of 1914 deal largely in generalities, leaving considerable room for courts to elaborate, in the common-law tradition, on what “restraints of trade,” “monopolization,” or “unfair methods of competition” mean.

In 1936, Congress passed the Robinson-Patman Act, designed to protect competitors from the then-disruptive competition of large firms who—thanks to scale and practices such as price differentiation—upended traditional incumbents to the benefit of consumers. Passed after “Congress made no factual investigation of its own, and ignored evidence that conflicted with accepted rhetoric,” the law prohibits price differentials that would benefit buyers, and ultimately consumers, in the name of shielding rivals from the more vigorous competition of more efficient, more productive firms. Indeed, under the Robinson-Patman Act, manufacturers cannot give a bigger discount to a distributor who would pass these savings on to consumers, even if the distributor performs extra services relative to others.

Then-President Gerald Ford declared in 1975 that the Robinson-Patman Act “is a leading example of [a law] which restrain[s] competition and den[ies] buyers substantial savings… It discourages both large and small firms from cutting prices, making it harder for them to expand into new markets and pass on to customers the cost savings on large orders.” Despite this, calls to amend or repeal the Robinson-Patman Act—supported by, among others, competition scholars like Herbert Hovenkamp and Robert Bork—have failed.

In the 1983 Abbott decision, Justice Lewis Powell wrote: “The Robinson-Patman Act has been widely criticized, both for its effects and for the policies that it seeks to promote. Although Congress is aware of these criticisms, the Act has remained in effect for almost half a century.”

Nonetheless, the act’s enforcement dwindled, thanks to wise restraint from antitrust agencies and the courts. While it is seldom enforced today, the act continues to create considerable legal uncertainty, as it raises regulatory risks for companies who engage in behavior that may conflict with its provisions. Indeed, many of the same so-called “neo-Brandeisians” who support passage of AICOA also advocate reinvigorating Robinson-Patman. More specifically, the new FTC majority has signaled that it is eager to revitalize Robinson-Patman, even though the law protects less efficient competitors. In other words, the Robinson-Patman Act is a zombie law: dead, but still moving.

Even if the antitrust agencies and courts ultimately follow the same path of regulatory and judicial restraint on AICOA that they have on Robinson-Patman, the legal uncertainty its existence will engender will act as a powerful deterrent on disruptive competition that dynamically benefits consumers and innovation. In short, as with the Robinson-Patman Act, antitrust agencies and courts will either enforce AICOA–thus generating the law’s adverse effects on consumers and innovation–or refrain from enforcing it, in which case the legal uncertainty will lead to unseen, harmful effects on innovation and consumers.

For instance, the bill’s prohibition on “self-preferencing” in Section 3(a)(1) will prevent covered platforms from offering consumers new products and services that happen to compete with incumbents’ products and services. Self-preferencing often is a pro-competitive, pro-efficiency practice that companies widely adopt—a reality that AICOA seems to ignore.

Would AICOA prevent, e.g., Apple from offering a bundled subscription to Apple One, which includes Apple Music, so that the company can effectively compete with incumbents like Spotify? As with Robinson-Patman, antitrust agencies and courts will have to choose whether to enforce a productivity-decreasing law, or to ignore congressional intent but, in the process, generate significant legal uncertainties.

Judge Bork once wrote that Robinson-Patman was “antitrust’s least glorious hour” because, rather than improving competition and innovation, it reduced competition from firms that happened to be more productive, innovative, and efficient than their rivals. The law infamously protected inefficient competitors rather than competition. But from a legislative-history perspective, AICOA may be antitrust’s new “least glorious hour.” If adopted, it will adversely affect innovation and consumers, as opportunistic rivals will be able to prevent cost-saving practices by the covered platforms.

As with Robinson-Patman, calls to amend or repeal AICOA may follow its passage. But the Robinson-Patman Act illustrates the path dependency of bad antitrust laws. However costly and damaging, AICOA would likely stay in place, with regular calls for either stronger or weaker enforcement, depending on whether momentum shifts toward populist antitrust or toward antitrust more consistent with dynamic competition.

Victory of the Brussels Effect

The future of AICOA does not bode well for markets, either from a historical perspective or from a comparative-law perspective. The EU’s DMA similarly targets a few large tech platforms but it is broader, harsher, and swifter. In the competition between these two examples of self-inflicted techlash, AICOA will pale in comparison with the DMA. Covered platforms will be forced to align with the DMA’s obligations and prohibitions.

Consequently, AICOA is a victory of the DMA and of the Brussels effect in general. AICOA effectively crowns the DMA as the all-encompassing regulatory assault on digital gatekeepers. While members of Congress have introduced numerous antitrust bills aimed at targeting gatekeepers, the DMA is the one-stop-shop regulation that encompasses multiple antitrust bills and imposes broader prohibitions and stronger obligations on gatekeepers. In other words, the DMA outcompetes AICOA.

Commentators seldom lament the extraterritorial impact of European regulations. When it comes to regulating digital gatekeepers, U.S. officials should have pushed back against the innovation-stifling, welfare-decreasing effects of the DMA on U.S. tech companies, in particular, and on U.S. technological innovation, in general. To be fair, a few U.S. officials, such as Commerce Secretary Gina Raimondo, did voice opposition to the DMA. Indeed, well aware of the DMA’s protectionist intent and its potential to break up and break into tech platforms, Raimondo argued that antitrust should not be about protecting competitors and deterring innovation, but rather about protecting the process of competition, however disruptive it may be.

The influential neo-Brandeisians and radical antitrust reformers, however, lashed out at Raimondo and effectively shamed the Biden administration into embracing the DMA (and its sister regulation, AICOA). Brussels did not have to exert its regulatory overreach; the U.S. administration happily imports and emulates European overregulation. There is no better way for European officials to see their dreams come true: a techlash against U.S. digital platforms that enjoys the support of local officials.

In that regard, AICOA has already played a significant role in shaping the intellectual mood in Washington and in altering the course of U.S. antitrust. Members of Congress designed AICOA along the lines pioneered by the DMA. Sen. Klobuchar has argued that America should emulate European competition policy regarding tech platforms. Lina Khan, now chair of the FTC, co-authored the U.S. House Antitrust Subcommittee report, which recommended adopting the European concept of “abuse of dominant position” in U.S. antitrust. In her current position, Khan now praises the DMA. Tim Wu, competition counsel for the White House, has praised European competition policy and officials. Indeed, the neo-Brandeisians have not only praised the European Commission’s fines against U.S. tech platforms (despite early criticisms from former President Barack Obama) but have, more dramatically, called for the United States to imitate the European regulatory framework.

In this regulatory race to inefficiency, the standard is set in Brussels with the blessing of U.S. officials. Not even the precedent set by the EU’s General Data Protection Regulation (GDPR) fully captures the effects the DMA will have. Privacy laws passed by U.S. states have mostly reacted to the reality of the GDPR. With AICOA, Congress is proactively anticipating, emulating, and welcoming the DMA before it has even been adopted. The intellectual and policy shift is historic, and so is the policy error.

AICOA and the Boulevard of Broken Dreams

AICOA is a failure similar to the Robinson-Patman Act and a victory for the Brussels effect and the DMA. Consumers will be the collateral damage, and the unseen effects on innovation will take years to materialize. Calls to amend or repeal AICOA are likely to fail, so the inevitable costs will forever weigh on consumers and innovation dynamics.

AICOA illustrates the neo-Brandeisian opposition to large innovative companies. Joseph Schumpeter warned against such hostility, and its effect of discouraging entrepreneurs from innovating, when he wrote:

Faced by the increasing hostility of the environment and by the legislative, administrative, and judicial practice born of that hostility, entrepreneurs and capitalists—in fact the whole stratum that accepts the bourgeois scheme of life—will eventually cease to function. Their standard aims are rapidly becoming unattainable, their efforts futile.

President William Howard Taft once said, “the world is not going to be saved by legislation.” AICOA will not save antitrust, nor will it save consumers. To paraphrase Schumpeter, the bill’s drafters “walked into our future as we walked into the war, blindfolded.” AICOA’s intentions to deliver greater competition, a fairer marketplace, greater consumer choice, and more consumer benefits will ultimately scatter across the boulevard of broken dreams.

The Baron de Montesquieu once wrote that legislators should only change laws with a “trembling hand”:

It is sometimes necessary to change certain laws. But the case is rare, and when it happens, they should be touched only with a trembling hand: such solemnities should be observed, and such precautions are taken that the people will naturally conclude that the laws are indeed sacred since it takes so many formalities to abrogate them.

AICOA’s drafters had a clumsy hand, coupled with what Friedrich Hayek would call “a pretense of knowledge.” They were certain they were doing social good and incapable of imagining they might do social harm. The future will remember AICOA as the new antitrust’s least glorious hour, in which consumers and innovation were sacrificed on the altar of a revitalized populist view of antitrust.

This post is the first in a three-part series. The second installment can be found here and the third can be found here.

The interplay among political philosophy, competition, and competition law remains, with some notable exceptions, understudied in the literature. Indeed, while examinations of the intersection between economics and competition law have taught us much, relatively little has been said about the value frameworks within which different visions of competition and competition law operate.

As Ronald Coase reminds us, questions of economics and political philosophy are interrelated, so that “problems of welfare economics must ultimately dissolve into a study of aesthetics and morals.” When we talk about economics, we talk about political philosophy, and vice versa. Every political philosophy reproduces economic prescriptions that reflect its core tenets. And every economic arrangement, in turn, evokes the normative values that undergird it. This is as true for socialism and fascism as it is for liberalism and neoliberalism.

Many economists have understood this. Milton Friedman, for instance, who spent most of his career studying social welfare, not ethics, admitted in Free to Choose that he was ultimately concerned with the preservation of a value: the liberty of the individual. Similarly, the avowed purpose of Friedrich Hayek’s The Constitution of Liberty was to maximize the state of human freedom, with coercion—i.e., the opposite of freedom—described as evil. James Buchanan fought to preserve political philosophy within the economic discipline, particularly worrying that:

Political economy was becoming unmoored from the types of philosophic and institutional analysis which were previously central to the field. In its flight from reality, Buchanan feared economics was in danger of abandoning social-philosophic issues for exclusively technical questions.

— John Kroencke, “Three Essays in the History of Economics”

Against this background, I propose to look at competition and competition law from a perspective that explicitly recognizes this connection. The goal is not to substitute, but rather to complement, our comparatively broad understanding of competition economics with a better grasp of the deeper normative implications of regulating competition in a certain way. If we agree with Robert Bork that antitrust is a subcategory of ideology that reflects and reacts upon deeper tensions in our society, the exercise might also be relevant beyond the relatively narrow confines of antitrust scholarship (which, on the other hand, seem to be getting wider and wider).

The Classical Liberal Revolution and the Unshackling of Competition

Mercantilism

When Adam Smith’s The Wealth of Nations was published in 1776, heavy economic regulation of the market through laws, by-laws, tariffs, and special privileges was the norm. Restrictions on imports were seen as protecting national wealth by preventing money from flowing out of the country—a policy premised on the conflation of money with wealth. A morass of legally backed and enforceable monopoly rights, granted either by royal decree or government-sanctioned by-laws, marred competition. Guilds reigned over tradesmen by restricting entry into the professions and segregating markets along narrow geographic lines. At every turn, economic activity was shot through with rules, restrictions, and regulations.

The Revolution in Political Economy

Classical liberals like Smith departed from the then-dominant mercantilist paradigm by arguing that nations prospered through trade and competition, not protectionism and monopoly privileges. Smith demonstrated that both the seller and the buyer benefited from trade, and he theorized the market as an automatic mechanism that allocated resources efficiently through the spontaneous, self-interested interaction of individuals.

Undergirding this position was the notion of the natural order, which Smith carried over from his own Theory of Moral Sentiments and which elaborated on arguments previously espoused by the French physiocrats (from “physiocracy,” a neologism meaning “the rule of nature”), such as Anne Robert Jacques Turgot, François Quesnay, and Jacques Claude Marie Vincent de Gournay. The basic premise was that there existed a harmonious order of things, established and maintained by a subconscious balancing of the egoism of the individual against the greatest welfare for all.

The implications of this modest insight, which clashed directly with established mercantilist orthodoxy, were tremendous. If human freedom maximized social welfare, the justification for detailed government intervention in the economy was untenable. The principles of laissez-faire (a term probably coined by Gournay, who had been Turgot’s mentor) instead prescribed that the government should adopt a “night watchman” role, tending to modest tasks such as internal and external defense, the mediation of disputes, and certain public works that were not deemed profitable for the individual.

Freeing Competition from the Mercantilist Yoke

Smith’s general attitude also carried over to competition. Following the principles described above, classical liberals believed that price and product adjustments following market interactions among tradesmen (i.e., competition) would automatically maximize social utility. As Smith argued:

In general, if any branch of trade, or any division of labor, be advantageous to the public, the freer and more general the competition, it will always be the more so.

This did not mean that competition occurred in a legal void. Rather, Smith’s point was that there was no need to construct a comprehensive system of competition regulation, as markets would oversee themselves so long as a basic legal and institutional framework was in place and government refrained from actively abetting monopolies. Under this view, the only necessary “competition law” would be those individual laws that made competition possible, such as private property rights, contracts, unfair competition laws, and the laws against government and guild restrictions.

Liberal Political Philosophy: Utilitarian and Deontological Perspectives on Liberty and Individuality

Of course, this sort of volte face in political economy needed to be buttressed by a robust philosophical conception of the individual and the social order. Such ontological and moral theories were articulated in, among others, the Theory of Moral Sentiments and John Stuart Mill’s On Liberty. At the heart of the liberal position was the idea that undue restrictions on human freedom and individuality were not only intrinsically despotic, but also socially wasteful, as they precluded men from enjoying the fruits of the exercise of such freedoms. For instance, infringing the freedom to trade and to compete would rob the public of cheaper goods, while restrictions on freedom of expression would arrest the development of thoughts and ideas through open debate.

It is not clear whether the material or the ethical argument for freedom came first: that is, whether classical liberalism constituted an ex-post rationalization of a moral preference for individual liberty, or precisely the reverse. The question may be immaterial, as classical liberals generally believed that the deontological and the consequentialist cases for liberty—save in the most peripheral of cases (e.g., violence against others)—largely overlapped.

Conclusion

In sum, classical liberalism offered a holistic, integrated view of societies, markets, morals, and individuals that was revolutionary for the time. The notion of competition as a force to be unshackled—rather than actively constructed and chaperoned—flowed organically from that account and its underlying values and assumptions. These included such values as personal freedom and individualism, along with foundational metaphysical presuppositions, such as the existence of a harmonious natural order that seamlessly guided individual actions for the benefit of the whole.

Where such base values and presumptions are eroded, however, the notion of a largely spontaneous, self-sustaining competitive process loses much of its rational, ethical, and moral legitimacy. Competition thus ceases to be tenable on its “own two feet” and must either be actively engineered and protected, or abandoned altogether as a viable organizing principle. In this sense, the crisis of liberalism the West experienced in the late 19th and early 20th centuries—which attacked the very foundations of classical liberal doctrine—can also be read as a crisis of competition.

In my next post, I’ll discuss the collectivist backlash against liberalism.

[This post adapts elements of “Should ASEAN Antitrust Laws Emulate European Competition Policy?”, published in the Singapore Economic Review (2021). Open access working paper here.]

U.S. and European competition laws diverge in numerous ways that have important real-world effects. Understanding these differences is vital, particularly as lawmakers in the United States, and the rest of the world, consider adopting a more “European” approach to competition.

In broad terms, the European approach is more centralized and political. The European Commission’s Directorate General for Competition (DG Comp) has significant de facto discretion over how the law is enforced. This contrasts with the common law approach of the United States, in which courts elaborate upon open-ended statutes through an iterative process of case law. In other words, the European system was built from the top down, while U.S. antitrust relies on a bottom-up approach, derived from arguments made by litigants (including the government antitrust agencies) and defendants (usually businesses).

This procedural divergence has significant ramifications for substantive law. European competition law includes more provisions akin to de facto regulation. This is notably the case for the “abuse of dominance” standard, in which a “dominant” business can be prosecuted for “abusing” its position by charging high prices or refusing to deal with competitors. By contrast, the U.S. system places more emphasis on actual consumer outcomes, rather than the nature or “fairness” of an underlying practice.

The American system thus affords firms more leeway to exclude their rivals, so long as this entails superior benefits for consumers. This may make the U.S. system more hospitable to innovation, since there is no built-in regulation of conduct for innovators who acquire a successful market position fairly and through normal competition.

In this post, we discuss some key differences between the two systems—including in areas like predatory pricing and refusals to deal—as well as the discretionary power the European Commission enjoys under the European model.

Exploitative Abuses

U.S. antitrust is, by and large, unconcerned with companies charging what some might consider “excessive” prices. The late Associate Justice Antonin Scalia, writing for the Supreme Court majority in the 2004 case Verizon v. Trinko, observed that:

The mere possession of monopoly power, and the concomitant charging of monopoly prices, is not only not unlawful; it is an important element of the free-market system. The opportunity to charge monopoly prices—at least for a short period—is what attracts “business acumen” in the first place; it induces risk taking that produces innovation and economic growth.

This contrasts with European competition-law cases, where firms may be found to have infringed competition law because they charged excessive prices. As the European Court of Justice (ECJ) held in 1978’s United Brands case: “In this case charging a price which is excessive because it has no reasonable relation to the economic value of the product supplied would be such an abuse.”

While United Brands was the EU’s foundational case for excessive pricing, and the European Commission reiterated that such allegedly exploitative abuses were possible when it published its guidance paper on abuse-of-dominance cases in 2009, the commission had for some time shown an apparent lack of interest in bringing such cases. In recent years, however, both the European Commission and some national authorities have shown renewed interest in excessive-pricing cases, most notably in the pharmaceutical sector.

European competition law also penalizes so-called “margin squeeze” abuses, in which a dominant upstream supplier charges a price to distributors that is too high for them to compete effectively with that same dominant firm downstream:

[I]t is for the referring court to examine, in essence, whether the pricing practice introduced by TeliaSonera is unfair in so far as it squeezes the margins of its competitors on the retail market for broadband connection services to end users. (Konkurrensverket v TeliaSonera Sverige, 2011)

As Scalia observed in Trinko, forcing firms to charge prices that are below a market’s natural equilibrium affects firms’ incentives to enter markets, notably with innovative products and more efficient means of production. But the problem is not just one of market entry and innovation.  Also relevant is the degree to which competition authorities are competent to determine the “right” prices or margins.

As Friedrich Hayek demonstrated in his influential 1945 essay “The Use of Knowledge in Society,” economic agents use information gleaned from prices to guide their business decisions. It is this distributed activity of thousands or millions of economic actors that enables markets to put resources to their most valuable uses, thereby leading to more efficient societies. By comparison, the efforts of central regulators to set prices and margins are necessarily inferior; there is simply no reasonable way for competition regulators to make such judgments in a consistent and reliable manner.

Given the substantial risk that investigations into purportedly excessive prices will deter market entry, such investigations should be circumscribed. But the court’s precedents, with their myopic focus on ex post prices, do not impose such constraints on the commission. The temptation to “correct” high prices—especially in the politically contentious pharmaceutical industry—may thus induce economically unjustified and ultimately deleterious intervention.

Predatory Pricing

A second important area of divergence concerns predatory-pricing cases. U.S. antitrust law subjects allegations of predatory pricing to two strict conditions:

  1. Monopolists must charge prices that are below some measure of their incremental costs; and
  2. There must be a realistic prospect that they will be able to recoup these initial losses.

In laying out its approach to predatory pricing, the U.S. Supreme Court has identified the risk of false positives and the clear cost of such errors to consumers. It thus has particularly stressed the importance of the recoupment requirement. As the court found in 1993’s Brooke Group Ltd. v. Brown & Williamson Tobacco Corp., without recoupment, “predatory pricing produces lower aggregate prices in the market, and consumer welfare is enhanced.”

Accordingly, U.S. authorities must prove that there are constraints that prevent rival firms from entering the market after the predation scheme, or that the scheme itself would effectively foreclose rivals from entering the market in the first place. Otherwise, the predator would be undercut by competitors as soon as it attempts to recoup its losses by charging supra-competitive prices.

Without the strong likelihood that a monopolist will be able to recoup lost revenue from underpricing, the overwhelming weight of economic evidence (to say nothing of simple logic) is that predatory pricing is not a rational business strategy. Thus, apparent cases of predatory pricing are most likely not, in fact, predatory; deterring or punishing them would actually harm consumers.

By contrast, the EU employs a more expansive legal standard to define predatory pricing, and almost certainly risks injuring consumers as a result. Authorities must prove only that a company has charged a price below its average variable cost, in which case its behavior is presumed to be predatory. Even when a firm charges prices that are between its average variable and average total cost, it can be found guilty of predatory pricing if authorities show that its behavior was part of a plan to eliminate a competitor. Most significantly, in neither case is it necessary for authorities to show that the scheme would allow the monopolist to recoup its losses.

[I]t does not follow from the case‑law of the Court that proof of the possibility of recoupment of losses suffered by the application, by an undertaking in a dominant position, of prices lower than a certain level of costs constitutes a necessary precondition to establishing that such a pricing policy is abusive. (France Télécom v Commission, 2009).
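To make the doctrinal contrast concrete, here is a stylized sketch that compresses the U.S. and EU standards described above into toy decision rules. The function names, inputs, and numbers are my own illustrations, not statements of the law; real cases turn on evidence, cost measurement, and context that a toy rule cannot capture.

```python
# Stylized sketch only: toy decision rules for the two predatory-pricing
# standards discussed above. All names and numbers are illustrative.

def us_predation(price, incremental_cost, recoupment_likely):
    """Brooke Group-style test: below-cost pricing AND a realistic prospect of recoupment."""
    return price < incremental_cost and recoupment_likely

def eu_predation(price, avg_variable_cost, avg_total_cost, intent_to_eliminate):
    """EU-style test: below AVC is presumed predatory; between AVC and ATC it is
    predatory if an exclusionary plan is shown. No recoupment requirement."""
    if price < avg_variable_cost:
        return True
    return price < avg_total_cost and intent_to_eliminate

# Same stylized facts, different outcomes: a below-cost price with no prospect of recoupment.
print(us_predation(price=8, incremental_cost=10, recoupment_likely=False))   # False
print(eu_predation(price=8, avg_variable_cost=10, avg_total_cost=12,
                   intent_to_eliminate=False))                               # True
```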

This aspect of the legal standard has no basis in economic theory or evidence—not even in the “strategic” economic theory that arguably challenges the dominant Chicago School understanding of predatory pricing. Indeed, strategic predatory pricing still requires some form of recoupment, and the refutation of any convincing business justification offered in response. For example, in a 2017 piece for the Antitrust Law Journal, Steven Salop lays out the “raising rivals’ costs” analysis of predation and notes that recoupment still occurs, just at the same time as predation:

[T]he anticompetitive conditional pricing practice does not involve discrete predatory and recoupment periods, as in the case of classical predatory pricing. Instead, the recoupment occurs simultaneously with the conduct. This is because the monopolist is able to maintain its current monopoly power through the exclusionary conduct.

The case of predatory pricing illustrates a crucial distinction between European and American competition law. The recoupment requirement embodied in American antitrust law serves to differentiate aggressive pricing behavior that improves consumer welfare—because it leads to overall price decreases—from predatory pricing that reduces welfare with higher prices. It is, in other words, entirely focused on the welfare of consumers.

The European approach, by contrast, reflects structuralist considerations far removed from a concern for consumer welfare. Its underlying fear is that dominant companies could use aggressive pricing to engender more concentrated markets. It is simply presumed that these more concentrated markets are invariably detrimental to consumers. Both the Tetra Pak and France Télécom cases offer clear illustrations of the ECJ’s reasoning on this point:

[I]t would not be appropriate, in the circumstances of the present case, to require in addition proof that Tetra Pak had a realistic chance of recouping its losses. It must be possible to penalize predatory pricing whenever there is a risk that competitors will be eliminated… The aim pursued, which is to maintain undistorted competition, rules out waiting until such a strategy leads to the actual elimination of competitors. (Tetra Pak v Commission, 1996).

Similarly:

[T]he lack of any possibility of recoupment of losses is not sufficient to prevent the undertaking concerned reinforcing its dominant position, in particular, following the withdrawal from the market of one or a number of its competitors, so that the degree of competition existing on the market, already weakened precisely because of the presence of the undertaking concerned, is further reduced and customers suffer loss as a result of the limitation of the choices available to them.  (France Télécom v Commission, 2009).

In short, the European approach leaves less room to analyze the concrete effects of a given pricing scheme, leaving it more prone to false positives than the U.S. standard explicated in the Brooke Group decision. Worse still, the European approach ignores not only the benefits that consumers may derive from lower prices, but also the chilling effect that broad predatory pricing standards may exert on firms that would otherwise seek to use aggressive pricing schemes to attract consumers.

Refusals to Deal

U.S. and EU antitrust law also differ greatly when it comes to refusals to deal. While the United States has limited the ability of either enforcement authorities or rivals to bring such cases, EU competition law sets a far lower threshold for liability.

As Justice Scalia wrote in Trinko:

Aspen Skiing is at or near the outer boundary of §2 liability. The Court there found significance in the defendant’s decision to cease participation in a cooperative venture. The unilateral termination of a voluntary (and thus presumably profitable) course of dealing suggested a willingness to forsake short-term profits to achieve an anticompetitive end. (Verizon v Trinko, 2004).

This highlights two key features of American antitrust law with regard to refusals to deal. To start, U.S. antitrust law generally does not apply the “essential facilities” doctrine. Accordingly, in the absence of exceptional facts, upstream monopolists are rarely required to supply their product to downstream rivals, even if that supply is “essential” for effective competition in the downstream market. Moreover, as Justice Scalia observed in Trinko, the Aspen Skiing case appears to concern only those limited instances where a firm’s refusal to deal stems from the termination of a preexisting and profitable business relationship.

While even this is not likely the economically appropriate limitation on liability, its impetus—ensuring that liability is found only in situations where procompetitive explanations for the challenged conduct are unlikely—is completely appropriate for a regime concerned with minimizing the cost to consumers of erroneous enforcement decisions.

As in most areas of antitrust policy, EU competition law is much more interventionist. Refusals to deal are a central theme of EU enforcement efforts, and there is a relatively low threshold for liability.

In theory, for a refusal to deal to infringe EU competition law, it must meet a set of fairly stringent conditions: the input must be indispensable, the refusal must eliminate all competition in the downstream market, and there must not be objective reasons that justify the refusal. Moreover, if the refusal to deal involves intellectual property, it must also prevent the appearance of a new good.

In practice, however, all of these conditions have been relaxed significantly by EU courts and the commission’s decisional practice. This is best evidenced by the lower court’s Microsoft ruling where, as John Vickers notes:

[T]he Court found easily in favor of the Commission on the IMS Health criteria, which it interpreted surprisingly elastically, and without relying on the special factors emphasized by the Commission. For example, to meet the “new product” condition it was unnecessary to identify a particular new product… thwarted by the refusal to supply but sufficient merely to show limitation of technical development in terms of less incentive for competitors to innovate.

EU competition law thus shows far less concern for its potential chilling effect on firms’ investments than does U.S. antitrust law.

Vertical Restraints

There are vast differences between U.S. and EU competition law relating to vertical restraints—that is, contractual restraints between firms that operate at different levels of the production process.

On the one hand, since the Supreme Court’s Leegin ruling in 2007, even price-related vertical restraints (such as resale price maintenance (RPM), under which a manufacturer can stipulate the prices at which retailers must sell its products) are assessed under the rule of reason in the United States. Some commentators have gone so far as to say that, in practice, U.S. case law on RPM almost amounts to per se legality.

Conversely, EU competition law treats RPM as severely as it treats cartels. Both RPM and cartels are considered to be restrictions of competition “by object”—the EU’s equivalent of a per se prohibition. This severe treatment also applies to non-price vertical restraints that tend to partition the European internal market.

Furthermore, in the Consten and Grundig ruling, the ECJ rejected the consequentialist, and economically grounded, principle that inter-brand competition is the appropriate framework to assess vertical restraints:

Although competition between producers is generally more noticeable than that between distributors of products of the same make, it does not thereby follow that an agreement tending to restrict the latter kind of competition should escape the prohibition of Article 85(1) merely because it might increase the former. (Consten SARL & Grundig-Verkaufs-GMBH v. Commission of the European Economic Community, 1966).

This treatment of vertical restrictions flies in the face of longstanding mainstream economic analysis of the subject. As Patrick Rey and Jean Tirole conclude:

Another major contribution of the earlier literature on vertical restraints is to have shown that per se illegality of such restraints has no economic foundations.

Unlike the EU, the U.S. Supreme Court in Leegin took account of the weight of the economic literature, and changed its approach to RPM to ensure that the law no longer simply precluded its arguable consumer benefits, writing: “Though each side of the debate can find sources to support its position, it suffices to say here that economics literature is replete with procompetitive justifications for a manufacturer’s use of resale price maintenance.” Further, the court found that the prior approach to resale price maintenance restraints “hinders competition and consumer welfare because manufacturers are forced to engage in second-best alternatives and because consumers are required to shoulder the increased expense of the inferior practices.”

The EU’s continued per se treatment of RPM, by contrast, strongly reflects its “precautionary principle” approach to antitrust. European regulators and courts readily condemn conduct that could conceivably injure consumers, even where such injury is, according to the best economic understanding, exceedingly unlikely. The U.S. approach, which rests on likelihood rather than mere possibility, is far less likely to condemn beneficial conduct erroneously.

Political Discretion in European Competition Law

EU competition law lacks a coherent analytical framework like that found in U.S. law’s reliance on the consumer welfare standard. The EU process is driven by a number of laterally equivalent—and sometimes mutually exclusive—goals, including industrial policy and the perceived need to counteract foreign state ownership and subsidies. Such a wide array of conflicting aims produces a lack of clarity for firms seeking to conduct business. Moreover, the discretion that attends this fluid arrangement of goals yields an even larger problem.

The Microsoft case illustrates this problem well. In Microsoft, the commission could have chosen to base its decision on various potential objectives. It notably chose to base its findings on the fact that Microsoft’s behavior reduced “consumer choice.”

The commission, in fact, discounted arguments that economic efficiency may lead to consumer welfare gains, because it determined “consumer choice” among media players was more important:

Another argument relating to reduced transaction costs consists in saying that the economies made by a tied sale of two products saves resources otherwise spent for maintaining a separate distribution system for the second product. These economies would then be passed on to customers who could save costs related to a second purchasing act, including selection and installation of the product. Irrespective of the accuracy of the assumption that distributive efficiency gains are necessarily passed on to consumers, such savings cannot possibly outweigh the distortion of competition in this case. This is because distribution costs in software licensing are insignificant; a copy of a software programme can be duplicated and distributed at no substantial effort. In contrast, the importance of consumer choice and innovation regarding applications such as media players is high. (Commission Decision No. COMP. 37792 (Microsoft)).

It may be true that tying the products in question was unnecessary. But merely dismissing this decision because distribution costs are near-zero is hardly an analytically satisfactory response. There are many more costs involved in creating and distributing complementary software than those associated with hosting and downloading. The commission also simply asserts that consumer choice among some arbitrary number of competing products is necessarily a benefit. This, too, is not necessarily true, and the decision’s implication that any marginal increase in choice is more valuable than any gains from product design or innovation is analytically incoherent.

The Court of First Instance was only too happy to give the commission a pass in its breezy analysis; it saw no objection to these findings. With little substantive reasoning to support its findings, the court fully endorsed the commission’s assessment:

As the Commission correctly observes (see paragraph 1130 above), by such an argument Microsoft is in fact claiming that the integration of Windows Media Player in Windows and the marketing of Windows in that form alone lead to the de facto standardisation of the Windows Media Player platform, which has beneficial effects on the market. Although, generally, standardisation may effectively present certain advantages, it cannot be allowed to be imposed unilaterally by an undertaking in a dominant position by means of tying.

The Court further notes that it cannot be ruled out that third parties will not want the de facto standardisation advocated by Microsoft but will prefer it if different platforms continue to compete, on the ground that that will stimulate innovation between the various platforms. (Microsoft Corp. v Commission, 2007)

Pointing to these conflicting effects of Microsoft’s bundling decision, without weighing either, is a weak basis to uphold the commission’s decision that consumer choice outweighs the benefits of standardization. Moreover, actions undertaken by other firms to enhance consumer choice at the expense of standardization are, on these terms, potentially just as problematic. The dividing line becomes solely which theory the commission prefers to pursue.

What such a practice does is vest the commission with immense discretionary power. Any given case sets up a “heads, I win; tails, you lose” situation in which defendants are easily outflanked by a commission that can change the rules of its analysis as it sees fit. Defendants can play only the cards that they are dealt. Accordingly, Microsoft could not successfully challenge a conclusion that its behavior harmed consumers’ choice by arguing that it improved consumer welfare, on net.

By selecting, in this instance, “consumer choice” as the standard to be judged, the commission was able to evade the constraints that might have been imposed by a more robust welfare standard. Thus, the commission can essentially pick and choose the objectives that best serve its interests in each case. This vastly enlarges the scope of potential antitrust liability, while also substantially decreasing the ability of firms to predict when their behavior may be viewed as problematic. It leads to what, in U.S. courts, would be regarded as an untenable risk of false positives that chill innovative behavior and create nearly unwinnable battles for targeted firms.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Dirk Auer (Senior Fellow of Law & Economics, ICLE); Eric Fruits (Chief Economist, ICLE; Adjunct Professor of Economics, Portland State University); and Kristian Stout (Associate Director, ICLE).]

The COVID-19 pandemic is changing the way consumers shop and the way businesses sell. These shifts in behavior, designed to “flatten the curve” of infection through social distancing, are happening across many (if not all) markets. But in many cases, it’s impossible to know now whether these new habits are actually achieving the desired effect. 

Take a seemingly silly example from Oregon. The state is one of only two in the U.S. that prohibit self-serve gas. In response to COVID-19, the state fire marshal announced it would temporarily suspend its enforcement of the prohibition. Public opinion fell into two broad groups. Those who want the option to pump their own gas argue that self-serve reduces the interaction between station attendants and consumers, thereby potentially reducing the spread of coronavirus. On the other hand, those who support the prohibition on self-serve have blasted the fire marshal’s announcement, arguing that all those dirty fingers pressing keypads and all those grubby hands on fuel pumps will likely increase the spread of the virus.

Both groups may be right, but no one yet knows the net effect. We can only speculate. This picture becomes even more complex when considering other, alternative policies. For instance, would it be more effective for the state of Oregon to curtail gas station visits by forcing the closure of stations? Probably not. Would it be more effective to reduce visits through some form of rationing? Maybe. Maybe not. 

Policymakers will certainly struggle to efficiently decide how firms and consumers should minimize the spread of COVID-19. That struggle is an extension of Hayek’s knowledge problem: policymakers don’t have adequate knowledge of alternatives, preferences, and the associated risks. 

A Hayekian approach — relying on bottom-up rather than top-down solutions to the problem — may be the most appropriate solution. Allowing firms to experiment and iteratively find solutions that work for their consumers and employees (potentially adjusting prices and wages in the process) may be the best that policymakers can do.

The case of online retail platforms

One area where these complex tradeoffs are particularly acute is that of online retail. In response to the pandemic, many firms have significantly boosted their online retail capacity. 

These initiatives have been met with a mix of enthusiasm and disapproval. On the one hand, online retail enables consumers to purchase “essential” goods with a significantly reduced risk of COVID-19 contamination. It also allows “non-essential” goods to be sold despite the closure of the brick-and-mortar stores that normally carry them. At first blush, this seems like a win-win situation for both consumers and retailers of all sizes, with large retailers ramping up their online operations and independent retailers switching to online platforms such as Amazon.

But there is a potential downside. Even contactless deliveries do present some danger, notably for warehouse workers who run the risk of being infected and subsequently passing the virus on to others. This risk is amplified by the fact that many major retailers, including Walmart, Kroger, CVS, and Albertsons, are hiring more warehouse and delivery workers to meet an increase in online orders. 

This has led some to question whether sales of “non-essential” goods (though the term is almost impossible to define) should be halted. The reasoning is that continuing to supply such goods needlessly puts lives at risk and reduces overall efforts to slow the virus.

Once again, these are incredibly complex questions. It is hard to gauge the overall risk of infection that is produced by the online retail industry’s warehousing and distribution infrastructure. In particular, it is not clear how effective social distancing policies, widely imposed within these workplaces, will be at achieving distancing and, in turn, reducing infections. 

More fundamentally, whatever this risk turns out to be, it is almost impossible to weigh it against an appropriate counterfactual. 

Online retail is not the only area where this complex tradeoff arises. Analogous reasoning could, for instance, also be applied to food delivery platforms. Ordering a meal on UberEats does carry some risk, but so do repeated trips to the grocery store. And there are legitimate concerns about the safety of food handlers working in close proximity to each other. These considerations make it hard for policymakers to strike the appropriate balance.

The good news: at least some COVID-related risks are being internalized

But there is also some good news. Firms, consumers and employees all have some incentive to mitigate these risks. 

Consumers want to purchase goods without getting contaminated; employees want to work in safe environments; and firms need to attract both consumers and employees, while minimizing potential liability. These (partially) aligned incentives will almost certainly cause these economic agents to take at least some steps that mitigate the spread of COVID-19. This might notably explain why many firms imposed social distancing measures well before governments started to take notice (here, here, and here). 

For example, one first-order effect of COVID-19 is that it has become more expensive for firms to hire warehouse workers. Not only have firms moved up along the supply curve (by hiring more workers), but the curve itself has likely shifted upwards, reflecting the increased opportunity cost of warehouse work. Predictably, this has resulted in higher wages for workers: Amazon and Walmart recently increased the wages they pay warehouse workers, as have brick-and-mortar retailers such as Kroger.
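To make this supply-and-demand reasoning concrete, here is a minimal sketch of a linear labor market. All intercepts, slopes, and wage figures are invented for illustration; they are assumptions, not estimates of the actual warehouse-labor market. The sketch simply shows how an increase in labor demand combined with an upward shift in labor supply pushes the equilibrium wage higher.

```python
# Illustrative only: a toy linear labor market with made-up parameters,
# sketching why warehouse wages rise when labor demand increases and the
# labor supply curve shifts up (workers require more pay to bear infection risk).

def equilibrium(demand_intercept, demand_slope, supply_intercept, supply_slope):
    """Solve demand = supply for the wage, with quantities linear in the wage."""
    wage = (demand_intercept - supply_intercept) / (supply_slope - demand_slope)
    quantity = supply_intercept + supply_slope * wage
    return wage, quantity

# Pre-pandemic market: workers (in thousands) demanded/supplied at hourly wage w.
w0, q0 = equilibrium(demand_intercept=500, demand_slope=-10,
                     supply_intercept=100, supply_slope=15)

# Pandemic market: online orders raise labor demand; infection risk means fewer
# workers are willing to work at any given wage, so the supply intercept falls.
w1, q1 = equilibrium(demand_intercept=600, demand_slope=-10,
                     supply_intercept=60, supply_slope=15)

print(f"pre-pandemic wage: ${w0:.2f}/hr, employment: {q0:.0f}k")
print(f"pandemic wage:     ${w1:.2f}/hr, employment: {q1:.0f}k")
```

With these made-up numbers the equilibrium wage rises from $16.00 to $21.60 per hour, consistent with the direction of the wage increases described above.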

Along similar lines, firms and employees will predictably bargain — through various channels — over the appropriate level of protection for those workers who must continue to work in-person.

For example, some companies have found ways to reduce risk while continuing operations:

  • CNBC reports Tyson Foods is using walk-through infrared body temperature scanners to check employees’ temperatures as they enter three of the company’s meat processing plants. Other companies planning to use scanners include Goldman Sachs, UPS, Ford, and Carnival Cruise Lines.
  • Kroger’s Fred Meyer chain of supermarkets is limiting the number of customers in each of its stores to half the occupancy allowed under international building codes. Kroger will use infrared sensors and predictive analytics to monitor the new capacity limits. The company already uses the technology to estimate how many checkout lanes are needed at any given time.
  • Trader Joe’s limits occupancy in its stores. Customers waiting to enter are asked to stand six feet apart, using Trader Joe’s logos marked off on the sidewalk as guides. Shopping carts are separated into groups of “sanitized” and “to be cleaned.” Each cart is thoroughly sprayed with disinfectant and wiped down with a clean cloth.

In other cases, bargaining over the right level of risk-mitigation has been pursued through more coercive channels, such as litigation and lobbying:

  • A recently filed lawsuit alleges that managers at an Illinois Walmart store failed to alert workers after several employees began showing symptoms of COVID-19. The suit claims Walmart “had a duty to exercise reasonable care in keeping the store in a safe and healthy environment and, in particular, to protect employees, customers and other individuals within the store from contracting COVID-19 when it knew or should have known that individuals at the store were at a very high risk of infection and exposure.” 
  • According to CNBC, a group of legislators, unions and Amazon employees in New York wrote a letter to CEO Jeff Bezos calling on him to enact greater protections for warehouse employees who continue to work during the coronavirus outbreak. The Financial Times reports worker protests at Amazon warehouses in the US, France, and Italy. Worker protests have been reported at a Barnes & Noble warehouse. Several McDonald’s locations have been hit with strikes.
  • In many cases, worker concerns about health and safety have been conflated with long-simmering issues of unionization, minimum wage, flexible scheduling, and paid time-off. For example, several McDonald’s strikes were reported to have been organized by “Fight for $15.”

Sometimes, there is simply no mutually-advantageous solution. And businesses are thus left with no other option than temporarily suspending their activities: 

  • For instance, McDonald’s and Burger King have spontaneously closed their restaurants — including drive-thru and deliveries — in many European countries (here and here).
  • In Portland, Oregon, ChefStable, a restaurant group behind some of the city’s best-known restaurants, closed all 20 of its bars and restaurants for at least four weeks. In what he called a “crisis of conscience,” owner Kurt Huffman concluded it would be impossible to maintain safe social distancing for customers and staff.

This is certainly not to say that all is perfect. Employers, employees and consumers may have very strong disagreements about what constitutes the appropriate level of risk mitigation.

Moreover, the questions of balancing worker health and safety with that of consumers become all the more complex when we recognize that consumers and businesses are operating in a dynamic environment, making sometimes fundamental changes to reduce risk at many levels of the supply chain.

Likewise, not all businesses will be able to implement measures that mitigate the risk of COVID-19. For instance, “Big Business” might be in a better position to reduce risks to its workforce than smaller businesses. 

Larger firms tend to have the resources and economies of scale to make capital investments in temperature scanners or sensors. They have larger workforces where employees can, say, shift from stocking shelves to sanitizing shopping carts. Several large employers, including Amazon, Kroger, and CVS have offered higher wages to employees who are more likely to be exposed to the coronavirus. Smaller firms are less likely to have the resources to offer such wage premiums.

For example, Amazon recently announced that it would implement mandatory temperature checks, that it would provide employees with protective equipment, and that it would increase the frequency and intensity of cleaning for all its sites. And, as already mentioned above, Tyson Foods announced that it would install temperature scanners at a number of sites. It is not clear whether smaller businesses are in a position to implement similar measures. 

That’s not to say that small businesses can’t adjust. It’s just more difficult. For example, a small paint-your-own ceramics shop, Mimosa Studios, had to stop offering painting parties because of government mandated social distancing. One way it’s mitigating the loss of business is with a paint-at-home package. Customers place an order online, and the studio delivers the ceramic piece, paints, and loaner brushes. When the customer is finished painting, Mimosa picks up the piece, fires it, and delivers the finished product. The approach doesn’t solve the problem, but it helps mitigate the losses.

Conclusion

In all likelihood, we can’t actually avoid all bad outcomes. There is, of course, some risk associated with even well-resourced large businesses continuing to operate, even though some of them play a crucial role in coronavirus-related lockdowns. 

Currently, market actors are working within the broad outlines of lockdowns deemed necessary by policymakers. Given the intensely complicated risk calculation necessary to determine if any given individual truly needs an “essential” (or even a “nonessential”) good or service, the best thing that lawmakers can do for now is let properly motivated private actors continue to seek optimal outcomes together within the imposed constraints. 

So far, most individuals and the firms serving them are at least partially internalizing COVID-related risks. The right approach for lawmakers would be to watch this process and determine where it breaks down. Measures targeted to fix those breaches will almost inevitably outperform interventionist planning to determine exactly what is essential, what is nonessential, and who should be allowed to serve consumers in their time of need.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Ben Sperry (Associate Director, Legal Research, International Center for Law & Economics).]

The visceral reaction to the New York Times’ recent story on Matt Colvin, the man who had 17,700 bottles of hand sanitizer with nowhere to sell them, shows there is a fundamental misunderstanding of the importance of prices and the informational function they serve in the economy. Calls to enforce laws against “price gouging” may actually prove more harmful to consumers and society than allowing prices to rise (or fall, of course) in response to market conditions. 

Nobel Prize-winning economist Friedrich Hayek explained how price signals serve as information that allows for coordination in a market society:

We must look at the price system as such a mechanism for communicating information if we want to understand its real function… The most significant fact about this system is the economy of knowledge with which it operates, or how little the individual participants need to know in order to be able to take the right action. In abbreviated form, by a kind of symbol, only the most essential information is passed on and passed on only to those concerned. It is more than a metaphor to describe the price system as a kind of machinery for registering change, or a system of telecommunications which enables individual producers to watch merely the movement of a few pointers, as an engineer might watch the hands of a few dials, in order to adjust their activities to changes of which they may never know more than is reflected in the price movement.

Economic actors don’t need a PhD in economics or even to pay attention to the news about the coronavirus to change their behavior. Higher prices for goods or services alone give important information to individuals — whether consumers, producers, distributors, or entrepreneurs — to conserve scarce resources, produce more, and look for (or invest in creating!) alternatives.

Prices are fundamental to rationing scarce resources, especially during an emergency. Allowing prices to rapidly rise has three salutary effects (as explained by Professor Michael Munger in his terrific Twitter thread):

  1. Consumers ration how much they really need;
  2. Producers respond to the rising prices by ramping up supply and distributors make more available; and
  3. Entrepreneurs find new substitutes in order to innovate around bottlenecks in the supply chain. 

Despite the distaste with which the public often treats “price gouging,” officials should take care to ensure that they don’t prevent these three necessary responses from occurring. 

Rationing by consumers

During a crisis, if prices for goods that are in high demand but short supply are forced to stay at pre-crisis levels, the informational signal of a shortage isn’t given — at least by the market directly. This encourages consumers to buy more than is rationally justified under the circumstances. This stockpiling leads to shortages. 

Companies respond by rationing in various ways, like instituting shorter hours or placing limits on how much of certain high-demand goods can be bought by any one consumer. Lines (and unavailability), instead of price, become the primary cost borne by consumers trying to obtain the scarce but underpriced goods. 

If, instead, prices rise in light of the short supply and high demand, price-elastic consumers will buy less, freeing up supply for others. And, critically, price-inelastic consumers (i.e. those who most need the good) will be provided a better shot at purchase.
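A minimal numerical sketch may help illustrate this rationing effect. The prices, quantities, and elasticities below are invented for illustration only; the point is simply that, under a standard constant-elasticity demand form, a given price increase cuts purchases by price-elastic buyers far more than by price-inelastic (high-need) buyers, leaving more of a scarce good available for the latter.

```python
# Illustrative only: made-up numbers showing how a price increase rations a
# scarce good toward less price-sensitive (higher-need) buyers, using the
# standard constant-elasticity demand form q = q0 * (p / p0) ** elasticity.

def quantity_demanded(baseline_qty, baseline_price, new_price, elasticity):
    """Constant-elasticity demand: quantity scales with (p / p0) ** elasticity."""
    return baseline_qty * (new_price / baseline_price) ** elasticity

p0, p1 = 2.00, 6.00  # hypothetical price of a bottle of sanitizer before/after the spike

# Two stylized groups, each buying 100 bottles at the old price:
casual = quantity_demanded(100, p0, p1, elasticity=-1.5)     # price-elastic stockpilers
high_need = quantity_demanded(100, p0, p1, elasticity=-0.2)  # price-inelastic buyers

print(f"casual buyers:    {casual:.0f} bottles (was 100)")
print(f"high-need buyers: {high_need:.0f} bottles (was 100)")
```

In this toy example the elastic group’s purchases fall from 100 to roughly 19 bottles, while the inelastic group’s fall only to about 80, so the price increase does most of its rationing work on those who need the good least.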

According to the New York Times story on Mr. Colvin, he focused on buying out the hand sanitizer in rural areas of Tennessee and Kentucky, since the major metro areas were already cleaned out. His goal was to then sell these hand sanitizers (and other high-demand goods) online at market prices. He was essentially acting as a speculator and bringing information to the market (much like an insider trader). If successful, he would be coordinating supply and demand between geographical areas by successfully arbitraging. This often occurs when emergencies are localized, like post-Katrina New Orleans or post-Irma Florida. In those cases, higher prices induced suppliers to shift goods and services from around the country to the affected areas. Similarly, here Mr. Colvin was arguably providing a beneficial service, by shifting the supply of high-demand goods from low-demand rural areas to consumers facing localized shortages. 

For those who object to Mr. Colvin’s bulk purchasing-for-resale scheme, the answer is similar to those who object to ticket resellers: the retailer should raise the price. If the Walmarts, Targets, and Dollar Trees raised prices or rationed supply like the supermarket in Denmark, Mr. Colvin would not have been able to afford nearly as much hand sanitizer. (Of course, it’s also possible — had those outlets raised prices — that Mr. Colvin would not have been able to profitably re-route the excess local supply to those in other parts of the country most in need.)

The role of “price gouging” laws and social norms

A common retort, of course, is that Colvin was able to profit from the pandemic precisely because he was able to purchase a large amount of stock at normal retail prices, even after the pandemic began. Thus, he was not a producer who happened to have a restricted amount of supply in the face of new demand, but a mere reseller who exacerbated the supply shortage problems.

But such an observation truncates the analysis and misses the crucial role that social norms against “price gouging” and state “price gouging” laws play in facilitating shortages during a crisis.

Under these laws, retailers typically may raise prices by no more than 10% during a declared state of emergency. But even without such laws, brick-and-mortar businesses are tied to a location in which they are repeat players, and they may not want to take a reputational hit by raising prices during an emergency and violating the “price gouging” norm. By contrast, individual sellers, especially pseudonymous third-party sellers using online platforms, do not rely on repeat interactions to the same degree, and may be harder to track down for prosecution. 

Thus, the social norms and laws exacerbate the conditions that create the need for emergency pricing, and lead to outsized arbitrage opportunities for those willing to violate norms and the law. But, critically, this violation is only a symptom of the larger problem that social norms and laws stand in the way, in the first instance, of retailers using emergency pricing to ration scarce supplies.

Normally, third-party sales sites have much more dynamic pricing than brick-and-mortar outlets, which just tend to run out of underpriced goods for a period of time rather than raise prices. This explains why Mr. Colvin was able to sell hand sanitizer for prices much higher than retail on Amazon before the site suspended his ability to do so. On the other hand, in response to public criticism, Amazon, Walmart, eBay, and other platforms continue to crack down on third-party “price-gouging” on their sites.

But even PR-centric anti-gouging campaigns are not ultimately immune to the laws of supply and demand. Even Amazon.com, as a first party seller, ends up needing to raise prices, ostensibly as the pricing feedback mechanisms respond to cost increases up and down the supply chain. 

But without a willingness to allow retailers and producers to use the informational signal of higher prices, there will continue to be more extreme shortages as consumers rush to stockpile underpriced resources.

The desire to help the poor who cannot afford higher priced essentials is what drives the policy responses, but in reality no one benefits from shortages. Those who stockpile the in-demand goods are unlikely to be poor because doing so entails a significant upfront cost. And if they are poor, then the potential for resale at a higher price would be a benefit.

Increased production and distribution

During a crisis, it is imperative that spiking demand is met by increased production. Prices are feedback mechanisms that provide realistic estimates of demand to producers. Even if good-hearted producers forswearing the profit motive want to increase production as an act of charity, they still need to understand consumer demand in order to produce the correct amount. 

Of course, prices are not the only source of information. Producers reading the news that there is a shortage undoubtedly can ramp up their production. But even still, in order to optimize production (i.e., not just blindly increase output and hope they get it right), they need a feedback mechanism. Prices are the most efficient mechanism available for quickly translating the amount of social need (demand) for a given product, so that producers neither undersupply the product (leaving people who need the good without it) nor oversupply it (consuming more resources than necessary in a time of crisis). Prices, when allowed to adjust to actual demand, thus allow society to avoid exacerbating shortages and misallocating resources.

The opportunity to earn more profit incentivizes distributors all along the supply chain. Amazon is hiring 100,000 workers to help ship all the products that are being ordered right now. Grocers and retailers are doing their best to line the shelves with more in-demand food and supplies.

Distributors rely on more than just price signals alone, obviously, such as information about how quickly goods are selling out. But even as retail prices stay low for consumers for many goods, distributors often are paying more to producers in order to keep the shelves full, as in the case of eggs. These are the relevant price signals for producers to increase production to meet demand.

For instance, hand sanitizer companies like GOJO and EO Products are ramping up production in response to known demand (so much that the price of isopropyl alcohol is jumping sharply). Farmers are trying to produce as much as is necessary to meet the increased orders (and prices) they are receiving. Even previously low-demand goods like beans are facing a boom time. These instances are likely caused by a mix of anticipatory response based on general news, as well as the slightly laggier price signals flowing through the supply chain. But, even with an “early warning” from the media, the manufacturers still need to ultimately shape their behavior with more precise information. This comes in the form of orders from retailers at increased frequencies and prices, which are both rising because of insufficient supply. In search of the most important price signal, profits, manufacturers and farmers are increasing production.

These responses to higher prices have the salutary effect of making available more of the products consumers need the most during a crisis. 

Entrepreneurs innovate around bottlenecks 

But the most interesting thing that occurs when prices rise is that entrepreneurs create new substitutes for in-demand products. For instance, distillers have started creating their own hand sanitizers.

Unfortunately, however, government regulations on sales of distilled products and concerns about licensing have led distillers to give away those products rather than charge for them. Thus, beneficial as this may be, without the ability to efficiently price such products, not nearly as much will be produced as would otherwise be. The non-emergency price of zero effectively guarantees continued shortages because the demand for these free alternatives will far outstrip supply.

Another example: car companies in the US are now producing ventilators. The FDA waived regulations on the production of new ventilators after General Motors, Ford, and Tesla announced they would be willing to use idle production capacity for the production of ventilators.

As consumers demand more toilet paper, bottled water, and staple foods than can be produced quickly, entrepreneurs respond by refocusing current capabilities on these goods. Examples abound.

Without price signals, entrepreneurs would have far less incentive to shift production and distribution to the highest valued use. 

Conclusion

While stories like that of Mr. Colvin buying all of the hand sanitizer in Tennessee understandably bother people, government efforts to prevent prices from adjusting only impede the information sharing processes inherent in markets. 

If the concern is to help the poor, it would be better to pursue less distortionary public policy than arbitrarily capping prices. The US government, for instance, is currently considering a progressively tiered one-time payment to lower income individuals. 

Moves to create new and enforce existing “price-gouging” laws are likely to become more prevalent the longer shortages persist. Platforms will likely continue to receive pressure to remove “price-gougers,” as well. These policies should be resisted. Not only will these moves not prevent shortages, they will exacerbate them and push the sale of high-demand goods into grey markets where prices will likely be even higher. 

Prices are an important source of information not only for consumers, but for producers, distributors, and entrepreneurs. Short-circuiting this signal will only be to the detriment of society.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Corbin Barthold (Senior Litigation Counsel, Washington Legal Foundation).]

The pandemic is serious. COVID-19 will overwhelm our hospitals. It might break our entire healthcare system. To keep the number of deaths in the low hundreds of thousands, a study from Imperial College London finds, we will have to shutter much of our economy for months. Small wonder the markets have lost a third of their value in a relentless three-week plunge. Grievous and cruel will be the struggle to come.

“All men of sense will agree,” Hamilton wrote in Federalist No. 70, “in the necessity of an energetic Executive.” In an emergency, certainly, that is largely true. In the midst of this crisis even a staunch libertarian can applaud the government’s efforts to maintain liquidity, and can understand its urge to start dispersing helicopter money. By at least acting like it knows what it’s doing, the state can lessen many citizens’ sense of panic. Some of the emergency measures might even work.

Of course, many of them won’t. Even a trillion-dollar stimulus package might be too small, and too slowly dispersed, to do much good. What’s worse, that pernicious line, “Don’t let a crisis go to waste,” is in the air. Much as price gougers are trying to arbitrage Purell, political gougers, such as Senator Elizabeth Warren, are trying to cram woke diktats into disaster-relief bills. Even now, especially now, it is well to remember that government is not very good at what it does.

But dreams of dirigisme die hard, especially at the New York Times. “During the Great Depression,” Farhad Manjoo writes, “Franklin D. Roosevelt assembled a mighty apparatus to rebuild a broken economy.” Government was great at what it does, in Manjoo’s view, until neoliberalism arrived in the 1980s and ruined everything. “The incompetence we see now is by design. Over the last 40 years, America has been deliberately stripped of governmental expertise.” Manjoo implores us to restore the expansive state of yesteryear—“the sort of government that promised unprecedented achievement, and delivered.”

This is nonsense. Our government is not incompetent because Grover Norquist tried (and mostly failed) to strangle it. Our government is incompetent because, generally speaking, government is incompetent. The keystone of the New Deal, the National Industrial Recovery Act of 1933, was an incoherent mess. Its stated goals were at once to “reduce and relieve unemployment,” “improve standards of labor,” “avoid undue restriction of production,” “induce and maintain united action of labor and management,” “organiz[e] . . . co-operative action among trade groups,” and “otherwise rehabilitate industry.” The law empowered trade groups to create their own “codes of unfair competition,” a privilege they quite predictably used to form anticompetitive cartels.

At no point in American history has the state, with all its “governmental expertise,” been adept at spending money, stimulus or otherwise. A law supplying funds for the Transcontinental Railroad offered to pay builders more for track laid in the mountains, but failed to specify where those mountains begin. Leland Stanford commissioned a study finding that, lo and behold, the Sierra Nevada begins deep in the Sacramento Valley. When “the federal Interior Department initially challenged [his] innovative geology,” reports the historian H.W. Brands, Stanford sent an agent directly to President Lincoln, a politician who “didn’t know much geology” but “preferred to keep his allies happy.” “My pertinacity and Abraham’s faith moved mountains,” the triumphant lobbyist quipped after the meeting.

The supposed golden age of expert government, the time between the rise of FDR and the fall of LBJ, was no better. At the height of the Apollo program, it occurred to a physics professor at Princeton that if there were a small glass reflector on the Moon, scientists could use lasers to calculate the distance between it and Earth with great accuracy. The professor built the reflector for $5,000 and approached the government. NASA loved the idea, but insisted on building the reflector itself. This it proceeded to do, through its standard contracting process, for $3 million.

When the pandemic at last subsides, the government will still be incapable of setting prices, predicting industry trends, or adjusting to changed circumstances. What F.A. Hayek called the knowledge problem—the fact that useful information is dispersed throughout society—will be as entrenched and insurmountable as ever. Innovation will still have to come, if it is to come at all, overwhelmingly from extensive, vigorous, undirected trial and error in the private sector.

When New York Times columnists are not pining for the great government of the past, they are surmising that widespread trauma will bring about the great government of the future. “The outbreak,” Jamelle Bouie proposes in an article entitled “The Era of Small Government is Over,” has “made our mutual interdependence clear. This, in turn, has made it a powerful, real-life argument for the broadest forms of social insurance.” The pandemic is “an opportunity,” Bouie declares, to “embrace direct state action as a powerful tool.”

It’s a bit rich for someone to write about the coming sense of “mutual interdependence” in the pages of a publication so devoted to sowing grievance and discord. The New York Times is a totem of our divisions. When one of its progressive columnists uses the word “unity,” what he means is “submission to my goals.”

In any event, disunity in America is not a new, or even necessarily a bad, thing. We are a fractious, almost ungovernable people. The colonists rebelled against the British government because they didn’t want to pay it back for defending them from the French during the Seven Years’ War. When Hamilton, champion of the “energetic Executive,” pushed through a duty on liquor, the frontier settlers of western Pennsylvania tarred and feathered the tax collectors. In the Astor Place Riot of 1849, dozens of New Yorkers died in a brawl over which of two men was the better Shakespearean actor. Americans are not housetrained.

True enough, if the virus takes us to the kind of depths not seen in these parts since the Great Depression, all bets are off. Short of that, however, no one should lightly assume that Americans will long tolerate a statist revolution imposed on their fears. And thank goodness for that. Our unruliness, our unwillingness to do what we’re told, is part of what makes our society so dynamic and prosperous.

COVID-19 will shake the world. When it has gone, a new scene will open. We can say very little now about what is going to change. But we can hope that Americans will remain a creative, opinionated, fiercely independent lot. And we can be confident that, come what may, planned administration will remain a source of problems, while unplanned free enterprise will remain the surest source of solutions.


[TOTM: The following is the first in a series of posts by TOTM guests and authors on the politicization of antitrust. The entire series of posts is available here.]

This post is authored by Steven J. Cernak, Partner at Bona Law and Adjunct Professor, University of Michigan Law School and Western Michigan University Thomas M. Cooley Law School. This paper represents the current views of the author alone and not necessarily the views of any past, present or future employer or client.

When some antitrust practitioners hear “the politicization of antitrust,” they cringe while imagining, say, merger approval hanging on the size of the bribe or closeness of the connection with the right politician.  Even a more benign interpretation of the phrase “politicization of antitrust” might drive some antitrust technocrats up the wall:  “Why must the mainstream media and, heaven forbid, politicians start weighing in on what antitrust interpretations, policy and law should be?  Don’t they know that we have it all figured out and, if we decide it needs any tweaks, we’ll make those over drinks at the ABA Antitrust Section Spring Meeting?”

While I agree with the reaction to the cringe-worthy interpretation of “politicization,” I think members of the antitrust community should not be surprised or hostile to the second interpretation, that is, all the new attention from new people.  Such attention is not unusual historically; more importantly, it provides an opportunity to explain the benefits and limits of antitrust enforcement and the competitive process it is meant to protect. 

The Sherman Act itself, along with its state-level predecessors, was the product of a political reaction to perceived problems of the late 19th Century – hence all of today’s references to a “new gilded age” as echoes of the political arguments of 1890.  Since then, the Sherman Act has not been immutable.  The U.S. antitrust laws have changed – and new antitrust enforcers have even been added – when the political debates convinced enough people that change was necessary.  Today’s political discussion could be surprising to so many members of the antitrust community because they were not even alive when the last major change was debated and passed.

More generally, the U.S. political position on other government regulation of – or intervention or participation in – free markets has varied considerably over the years.  While controversial when they were passed, we now take Medicare and Medicaid for granted and debate “Medicare for all” – why shouldn’t an overhaul of the Sherman Act also be a legitimate political discussion?  The Interstate Commerce Commission might be gone and forgotten but at one time it garnered political support to regulate the most powerful industries of the late 19th and early 20th Century – why should a debate on new ways to regulate today’s powerful industries be out of the question? 

So today’s antitrust practitioners should avoid the temptation to proclaim an “end of history” and that all antitrust policy questions have been asked and answered and instead, as some of us have been suggesting since at least the last election cycle, join the political debate.  But now, for those of us who are generally supportive of the U.S. antitrust status quo, the question is how? 

Some have been pushing back on the supposed evidence that a change in antitrust or other governmental policies is necessary.  For instance, in late 2015 the White House Council of Economic Advisers published a paper on increased concentration in many industries which others have used as evidence of a failure of antitrust law to protect competition.  Josh Wright has used several platforms to point out that the industry measurement was too broad and the concentration level too low to be useful in these discussions.  Also, he reminded readers that concentration and levels of competition are different concepts that are not necessarily linked.  On questions surrounding inequality and stagnation of standards of living, Russ Roberts has produced a series of videos that try to explain why any such questions are difficult to answer with the easy numbers available and why, perhaps, it is not correct that “the rich got all the gains.” 

Others, like Dan Crane, have advanced the debate by trying to get those commentators who are unhappy with the status quo to explain what they see as the problems and the proposed fixes.  While it might be too much to ask for unanimity among a diverse group of commentators, the debate might be more productive now that some more specific complaints and solutions have begun to emerge.

Even if the problems are properly identified, we should not allow anyone to blithely assume that any – or any particular – increase in government oversight will solve them without creating different issues.  The Federal Trade Commission tackled this issue in its final hearing on Competition and Consumer Protection in the 21st Century with a panel on Frank Easterbrook’s seminal “Limits of Antitrust” paper.  I was fortunate enough to be on that panel and tried to summarize the ongoing importance of “Limits,” and advance the broader debate, by encouraging those who would change antitrust policy and increase supervision of the market to have appropriate “regulatory humility” (a term borrowed from former FTC Chairman Maureen Ohlhausen) about what can be accomplished.

I identified three varieties of humility present in “Limits” and pertinent here.  First, there is the humility to recognize that mastering anything as complex as an economy or any significant industry will require knowledge of innumerable items, some unseen or poorly understood, and so could be impossible.  Here, Easterbrook echoes Friedrich Hayek’s “Pretense of Knowledge” Nobel acceptance speech. 

Second, there is the humility to recognize that any judge or enforcer, like any other human being, is subject to her own biases and predilections, whether based on experience or the institutional framework within which she works.  While market participants might not be perfect, great thinkers from Madison to Kovacic have recognized that “men (or any agency leaders) are not angels” either.  As Thibault Schrepel has explained, it would be “romantic” to assume that any newly-empowered government enforcer will always act in the best interest of her constituents. 

Finally, there is the humility to recognize that humanity has been around a long time and faced a number of issues and that we might learn something from how our predecessors reacted to what appear to be similar issues in history.  Given my personal history and current interests, I have focused on events from the automotive industry; however, the story of the unassailable power (until it wasn’t) of A&P and how it spawned the Robinson-Patman Act, ably told by Tim Muris and Jonathan Nuechterlein, might be more pertinent here.  So challenging those advocating for big changes to explain why they are so confident this time around can be useful. 

But while all those avenues of argument can be effective in explaining why greater government intervention in the form of new antitrust policies might be worse than the status quo, we also must do a better job at explaining why antitrust and the market forces it protects are actually good for society.  If democratic capitalism really has “lengthened the life span, made the elimination of poverty and famine thinkable, enlarged the range of human choice” as claimed by Michael Novak in The Spirit of Democratic Capitalism, we should do more to spread that good news. 

Maybe we need to spend more time telling and retelling the “I, Pencil” or “It’s a Wonderful Loaf” stories about how well markets can and do work at coordinating the self-interested behavior of many to the benefit of even more.  Then we can illustrate the limited role of antitrust in that complex effort – say, punishing any collusion among the mills or bakers in those two stories to ensure the process works as beautifully and simply displayed.  For the first time in decades, politicians and real people, like the consumers whose welfare we are supposed to be protecting, are paying attention to our wonderful world of antitrust.  We should seize the opportunity to explain what we do and why it matters and discuss if any improvements can be made.

A recently published book, “Kochland – The Secret History of Koch Industries and Corporate Power in America” by Christopher Leonard, presents a gripping account of relentless innovation and the power of the entrepreneur to overcome adversity in pursuit of delivering superior goods and services to the market while also reaping impressive profits. It’s truly an inspirational American story.

Now, I should note that I don’t believe Mr. Leonard actually intended his book to be quite so complimentary to the Koch brothers and the vast commercial empire they built up over the past several decades. He includes plenty of material detailing, for example, their employees playing fast and loose with environmental protection rules, or their labor lawyers aggressively bargaining with unions, sometimes to the detriment of workers. And all of the stories he presents are supported by sympathetic emotional appeals through personal anecdotes. 

But, even then, many of the negative claims are part of a larger theme of Koch Industries progressively improving its business practices. One prominent example is how Koch Industries learned from its environmentally unfriendly past and implemented vigorous programs to ensure “10,000% compliance” with all federal and state environmental laws. 

What really stands out across most or all of the stories Leonard has to tell, however, is the deep appreciation that Charles Koch and his entrepreneurially-minded employees have for the fundamental nature of the market as an information discovery process. Indeed, Koch Industries has much in common with modern technology firms like Amazon in this respect — but decades before the information technology revolution made the full power of “Big Data” gathering and processing as obvious as it is today.

The impressive information operation of Koch Industries

Much of Kochland is devoted to stories in which Koch Industries’ ability to gather and analyze data from across its various units led to the production of superior results for the economy and consumers. For example,  

Koch… discovered that the National Parks Service published data showing the snow pack in the California mountains, data that Koch could analyze to determine how much water would be flowing in future months to generate power at California’s hydroelectric plants. This helped Koch predict with great accuracy the future supply of electricity and the resulting demand for natural gas.

Koch Industries was able to use this information to anticipate the amount of power (megawatt hours) it needed to deliver to the California power grid (admittedly, in a way that was somewhat controversial because of poorly drafted legislation relating to the new regulatory regime governing power distribution and resale in the state).
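To give a flavor of what such an analysis might look like in miniature, here is a purely hypothetical sketch. The snowpack readings, generation figures, and load numbers are all invented, and the single-variable linear fit is a drastic simplification of whatever Koch’s analysts actually did; the sketch only illustrates the logic of turning a public snowpack series into a forecast of hydro output and, by subtraction, the gas-fired generation (and hence gas demand) needed to fill the gap.

```python
# Purely hypothetical sketch with invented numbers: fit hydro generation to
# published snowpack readings, then use the fit to estimate how much of the
# coming season's load will be left for gas-fired plants to cover.
import numpy as np

# (snowpack index, hydro generation in TWh) for past seasons -- all made up.
snowpack = np.array([18, 25, 31, 22, 35, 28, 15, 40])
hydro_twh = np.array([9.5, 12.0, 14.2, 11.1, 15.8, 13.0, 8.3, 17.5])

slope, intercept = np.polyfit(snowpack, hydro_twh, 1)  # simple linear fit

this_years_snowpack = 20   # hypothetical published reading for the coming season
expected_load_twh = 55.0   # assumed total demand the grid must serve

hydro_forecast = intercept + slope * this_years_snowpack
gas_fired_gap = expected_load_twh - hydro_forecast  # what gas plants must supply

print(f"forecast hydro output: {hydro_forecast:.1f} TWh")
print(f"implied gas-fired generation (and hence gas demand): {gas_fired_gap:.1f} TWh")
```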

And, in 2000, while many firms in the economy were still riding the natural gas boom of the 90s, 

two Koch analysts and a reservoir engineer… accurately predicted a coming disaster that would contribute to blackouts along the West Coast, the bankruptcy of major utilities, and skyrocketing costs for many consumers.

This insight enabled Koch Industries to reap huge profits in derivatives trading, and it also enabled it to enter — and essentially rescue — a market segment crucial for domestic farmers: nitrogen fertilizer.

The market volatility in natural gas from the late 90s through early 00s wreaked havoc on the nitrogen fertilizer industry, for which natural gas is the primary input. Farmland — a struggling fertilizer producer — had progressively mismanaged its business over the preceding two decades by focusing on developing lines of business outside of its core competencies, including blithely exposing itself to the volatile natural gas market in pursuit of short-term profits. By the time it was staring bankruptcy in the face, there were no other companies interested in acquiring it. 

Koch’s analysts, however, noticed that many of Farmland’s key fertilizer plants were located in prime locations for reaching local farmers. Once the market improved, whoever controlled those key locations would be in a superior position for selling into the nitrogen fertilizer market. So, by utilizing the data it derived from its natural gas operations (both operating pipelines and storage facilities, as well as understanding the volatility of gas prices and availability through its derivatives trading operations), Koch Industries was able to infer that it could make substantial profits by rescuing this bankrupt nitrogen fertilizer business. 

Emblematic of Koch’s philosophy of only making long-term investments, 

[o]ver the next ten years, [Koch Industries] spent roughly $500 million to outfit the plants with new technology while streamlining production… Koch installed a team of fertilizer traders in the office… [t]he traders bought and sold supplies around the globe, learning more about fertilizer markets each day. Within a few years, Koch Fertilizer built a global distribution network. Koch founded a new company, called Koch Energy Services, which bought and sold natural gas supplies to keep the fertilizer plants stocked.

Thus, Koch Industries not only rescued midwest farmers from shortages that would have decimated their businesses, it invested heavily to ensure that production would continue to increase to meet future demand. 

As noted, this acquisition was consistent with the ethos of Koch Industries, which stressed thinking about investments as part of long-term strategies, in contrast to their “counterparties in the market [who] were obsessed with the near-term horizon.” This led Koch Industries to look at investments over a period measured in years or decades, an approach that allowed the company to execute very intricate investment strategies: 

If Koch thought there was going to be an oversupply of oil in the Gulf Coast region, for example, it might snap up leases on giant oil barges, knowing that when the oversupply hit, companies would be scrambling for extra storage space and willing to pay a premium for the leases that Koch bought on the cheap. This was a much safer way to execute the trade than simply shorting the price of oil—even if Koch was wrong about the supply glut, the downside was limited because Koch could still sell or use the barge leases and almost certainly break even.
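The asymmetry described in that passage is easy to see with a small, entirely invented payoff comparison (the lease costs, resale values, prices, and volumes below are hypothetical, not figures from the book): if the predicted glut arrives, both trades pay off, but if it does not, the outright short loses heavily while the barge-lease trade roughly breaks even.

```python
# Illustrative only (all numbers invented): comparing an outright short on oil
# with buying cheap barge-storage leases, to show the lease trade's limited downside.

lease_cost = 1.0               # $M paid up front for the barge leases
lease_resale_if_glut = 4.0     # $M the leases fetch when storage becomes scarce
lease_resale_if_no_glut = 0.9  # $M recovered by re-letting or using the barges

short_entry_price = 60.0       # $/bbl at which oil is shorted
barrels_short = 100_000

def short_pnl(final_price):
    """Profit (in $M) on the outright short position."""
    return (short_entry_price - final_price) * barrels_short / 1e6

def lease_pnl(glut_happens):
    """Profit (in $M) on the barge-lease trade."""
    resale = lease_resale_if_glut if glut_happens else lease_resale_if_no_glut
    return resale - lease_cost

for scenario, final_price, glut in [("glut (price falls to $40)", 40.0, True),
                                    ("no glut (price rises to $80)", 80.0, False)]:
    print(f"{scenario}: short P&L = {short_pnl(final_price):+.1f}M, "
          f"lease P&L = {lease_pnl(glut):+.1f}M")
```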

Entrepreneurs, regulators, and the problem of incentives

All of these accounts and more in Kochland brilliantly demonstrate a principal salutary role of entrepreneurs in the market, which is to discover slack or scarce resources in the system and manage them so that they will be available when demand increases. Guaranteeing the presence of oil barges in the face of market turbulence, or making sure that nitrogen fertilizer is available when needed, is precisely the sort of result sound public policy seeks to encourage from firms in the economy. 

Government, by contrast — and despite its best intentions — is institutionally incapable of performing the same sorts of entrepreneurial activities as even very large private organizations like Koch Industries. The stories recounted in Kochland demonstrate this repeatedly. 

For example, in the oil tanker episode, Koch’s analysts relied on “huge amounts of data from outside sources” – including “publicly available data…like the federal reports that tracked the volume of crude oil being stored in the United States.” Yet, because that data was “often stale” owing to a rigid, periodic publication schedule, it lacked the specificity necessary for making precise interventions in markets. 

Koch’s analysts therefore built on that data using additional public sources, such as manifests from the Customs Service which kept track of the oil tanker traffic in US waters. Leveraging all of this publicly available data, Koch analysts were able to develop “a picture of oil shipments and flows that was granular in its specificity.”
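As a purely hypothetical illustration of this kind of data layering (the report figures, manifests, and refinery-draw estimate below are invented, and the real analysis was surely far richer), one can think of the manifests as a way of nowcasting inventories between the stale periodic reports:

```python
# Purely hypothetical sketch (invented data): layering tanker manifests on top
# of a stale periodic storage report to estimate current crude inventories.

last_periodic_report = {"period": "week 12", "crude_stocks_mbbl": 455.4}

# Tanker movements observed since the report date -- made-up entries.
manifests_since_report = [
    {"vessel": "Tanker A", "direction": "import", "volume_mbbl": 1.8},
    {"vessel": "Tanker B", "direction": "export", "volume_mbbl": 0.6},
    {"vessel": "Tanker C", "direction": "import", "volume_mbbl": 2.1},
]

estimated_refinery_draw_mbbl = 2.4  # assumed consumption since the report

net_flows = sum(m["volume_mbbl"] if m["direction"] == "import"
                else -m["volume_mbbl"]
                for m in manifests_since_report)

current_estimate = (last_periodic_report["crude_stocks_mbbl"]
                    + net_flows - estimated_refinery_draw_mbbl)

print(f"stale report:           {last_periodic_report['crude_stocks_mbbl']} mbbl")
print(f"nowcast with manifests: {current_estimate:.1f} mbbl")
```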

Similarly, when trying to predict snowfall in the western US, and how that would affect hydroelectric power production, Koch’s analysts relied on publicly available weather data — but extended it with their own analytical insights to make it more suitable to fine-grained predictions. 

By contrast, despite decades of altering the regulatory scheme around natural gas production, transport and sales, and being highly involved in regulating all aspects of the process, the federal government could not even provide the data necessary to adequately facilitate markets. Koch’s energy analysts would therefore engage in various deals that sometimes would only break even — if it meant they could develop a better overall picture of the relevant markets: 

As was often the case at Koch, the company… was more interested in the real-time window that origination deals could provide into the natural gas markets. Just as in the early days of the crude oil markets, information about prices was both scarce and incredibly valuable. There were not yet electronic exchanges that showed a visible price of natural gas, and government data on sales were irregular and relatively slow to come. Every origination deal provided fresh and precise information about prices, supply, and demand.

In most, if not all, of the deals detailed in Kochland, government regulators had every opportunity to find the same trends in the publicly available data — or see the same deficiencies in the data and correct them. Given their access to the same data, government regulators could, in some imagined world, have developed policies to mitigate the effects of natural gas market collapses, handle upcoming power shortages, or develop a reliable supply of fertilizer to midwest farmers. But they did not. Indeed, because of the different sets of incentives they face (among other factors), in the real world, they cannot do so, despite their best intentions.

The incentive to innovate

This gets to the core problem that Hayek described concerning how best to facilitate efficient use of dispersed knowledge in such a way as to achieve the most efficient allocation and distribution of resources: 

The various ways in which the knowledge on which people base their plans is communicated to them is the crucial problem for any theory explaining the economic process, and the problem of what is the best way of utilizing knowledge initially dispersed among all the people is at least one of the main problems of economic policy—or of designing an efficient economic system.

The question of how best to utilize dispersed knowledge in society can only be answered by considering who is best positioned to gather and deploy that knowledge. There is no fundamental objection to “planning”  per se, as Hayek notes. Indeed, in a complex society filled with transaction costs, there will need to be entities capable of internalizing those costs  — corporations or governments — in order to make use of the latent information in the system. The question is about what set of institutions, and what set of incentives governing those institutions, results in the best use of that latent information (and the optimal allocation and distribution of resources that follows from that). 

Armen Alchian captured the different incentive structures between private firms and government agencies well: 

The extent to which various costs and effects are discerned, measured and heeded depends on the institutional system of incentive-punishment for the deciders. One system of rewards-punishment may increase the extent to which some objectives are heeded, whereas another may make other goals more influential. Thus procedures for making or controlling decisions in one rewards-incentive system are not necessarily the “best” for some other system…

In the competitive, private, open-market economy, the wealth-survival prospects are not as strong for firms (or their employees) who do not heed the market’s test of cost effectiveness as for firms who do… as a result the market’s criterion is more likely to be heeded and anticipated by business people. They have personal wealth incentives to make more thorough cost-effectiveness calculations about the products they could produce …

In the government sector, two things are less effective. (1) The full cost and value consequences of decisions do not have as direct and severe a feedback impact on government employees as on people in the private sector. The costs of actions under their consideration are incomplete simply because the consequences of ignoring parts of the full span of costs are less likely to be imposed on them… (2) The effectiveness, in the sense of benefits, of their decisions has a different reward-incentive or feedback system … it is fallacious to assume that government officials are superhumans, who act solely with the national interest in mind and are never influenced by the consequences to their own personal position.

In short, incentives matter — and they are a function of the institutional arrangement of the system. Given the same data about the same scarce resources, over the long run the private sector generally has stronger incentives to manage those resources efficiently than does government. As Ludwig von Mises showed, moving those decisions into political hands creates a system of political preferences that is inherently inferior in terms of the production and distribution of goods and services.

Koch Industries: A model of entrepreneurial success

The market is not perfect, but no human institution is perfect. Despite its imperfections, the market provides the best system yet devised for fairly and efficiently managing the practically unlimited demands we place on our scarce resources. 

Kochland provides a valuable insight into the virtues of the market and entrepreneurs, made all the stronger by Mr. Leonard’s implied project of “exposing” the dark underbelly of Koch Industries. The book tells the bad tales, which I’m willing to believe are largely true. I would, frankly, be shocked if any large entity — corporation or government — never ran into problems with rogue employees, internal corporate dynamics gone awry, or a failure to properly understand some facet of the market or society that led to bad investments or policy. 

The story of Koch Industries — presented even as it is through the lens of a “secret history” — is deeply admirable. It’s the story of a firm that not only learns from its own mistakes, as all firms must do if they are to survive, but has a drive to learn built into its DNA. Koch Industries relentlessly gathers information from the market, sometimes even to the exclusion of short-term profit. It eschews complex bureaucratic structures and processes, which encourages local managers to find opportunities and respond nimbly.

Kochland is a quick read that presents a gripping account of one of America’s corporate success stories. There is, of course, a healthy amount of material in the book covering the Koch brothers’ often controversial political activities. Nonetheless, even those who hate the Koch brothers on account of politics would do well to learn from the model of entrepreneurial success that Kochland cannot help but describe in its pages. 

[TOTM: The following is the eighth in a series of posts by TOTM guests and authors on the FTC v. Qualcomm case recently decided by Judge Lucy Koh in the Northern District of California. Other posts in this series are here. The blog post is based on a forthcoming paper regarding patent holdup, co-authored by Dirk Auer and Julian Morris.]


In his latest book, Tyler Cowen calls big business an “American anti-hero”. Cowen argues that the growing animosity towards successful technology firms is to a large extent unwarranted. After all, these companies have generated tremendous prosperity and jobs.

Though it is less known to the public than its Silicon Valley counterparts, Qualcomm perfectly fits the anti-hero mold. Despite being a key contributor to the communications standards that enabled the proliferation of smartphones around the globe – an estimated 5 billion people currently own a device – Qualcomm has been on the receiving end of considerable regulatory scrutiny on both sides of the Atlantic (including two enforcement actions in the EU; see here and here).

In the US, Judge Lucy Koh recently ruled that a combination of anticompetitive practices had enabled Qualcomm to charge “unreasonably high royalty rates” for its CDMA and LTE cellular communications technology. Chief among these practices was Qualcomm’s so-called “no license, no chips” policy, whereby the firm refuses to sell baseband processors to implementers that have not taken out a license for its communications technology. Other grievances included Qualcomm’s purported refusal to license its patents to rival chipmakers, and allegations that it attempted to extract exclusivity obligations from large handset manufacturers, such as Apple. According to Judge Koh, these practices resulted in “unreasonably high” royalty rates that failed to comply with Qualcomm’s FRAND obligations.

Judge Koh’s ruling offers an unfortunate example of the numerous pitfalls that decisionmakers face when they second-guess the distributional outcomes achieved through market forces. This is particularly true in the complex standardization space.

The elephant in the room

The first striking feature of Judge Koh’s ruling is what it omits. Throughout a document that runs to more than two hundred pages, there is not a single reference to the concepts of holdup or holdout (crucial terms of art for a ruling that grapples with the prices charged by an SEP holder).

At first sight, this might seem like a semantic quibble. But words are important. Patent holdup (along with the “unreasonable” royalties to which it arguably gives rise) is possible only when a number of cumulative conditions are met. Most importantly, the foundational literature on economic opportunism (here and here) shows that holdup (and holdout) mostly occur when parties have made asset-specific sunk investments. This focus on asset-specific investments is echoed by even the staunchest critics of the standardization status quo (here).

Though such investments may well have been present in the case at hand, there is no evidence that they played any part in the court’s decision. This is not without consequences. If parties did not make sunk relationship-specific investments, then the antitrust case against Qualcomm should have turned upon the alleged exclusion of competitors, not the level of Qualcomm’s royalties. The DOJ said as much in its statement of interest concerning Qualcomm’s motion for partial stay of injunction pending appeal. Conversely, if these investments existed, then patent holdout (whereby implementers refuse to take licenses to key pieces of intellectual property) was just as much of a risk as patent holdup (here and here). And yet the court completely overlooked this possibility.

The misguided push for component level pricing

The court also erred by objecting to Qualcomm’s practice of basing license fees on the value of handsets, rather than that of modem chips. In simplified terms, implementers paid Qualcomm a percentage of their devices’ resale price. The court found that this was contrary to Federal Circuit law. Instead, it held that royalties should be based on the value of the smallest salable patent-practicing component (in this case, baseband chips). This conclusion is dubious both as a matter of law and of policy.

From a legal standpoint, the question of the appropriate royalty base seems far less clear-cut than Judge Koh’s ruling might suggest. For instance, Gregory Sidak observes that in TCL v. Ericsson, Judge Selna used a device’s net selling price as a basis upon which to calculate FRAND royalties. Likewise, in CSIRO v. Cisco, the court also declined to use the “smallest saleable practicing component” as a royalty base. And finally, as Jonathan Barnett observes, the Federal Circuit’s LaserDynamics case law cited by Judge Koh relates to the calculation of damages in patent infringement suits. There is no legal reason to believe that its findings should hold any sway outside of that narrow context. It is one thing for courts to decide upon the methodology that they will use to calculate damages in infringement cases – even if it is a contested one. It is a whole other matter to shoehorn private parties into adopting this narrow methodology in their private dealings.

More importantly, from a policy standpoint, there are important advantages to basing royalty rates on the price of an end-product, rather than that of an intermediate component. This type of pricing notably enables parties to better allocate the risk that is inherent in launching a new product. In simplified terms: implementers want to avoid paying large (fixed) license fees for failed devices; and patent holders want to share in the benefits of successful devices that rely on their inventions. The solution, as Alain Bousquet and his co-authors explain, is to agree on royalty payments that are contingent on success in the market:

Because the demand for a new product is uncertain and/or the potential cost reduction of a new technology is not perfectly known, both seller and buyer may be better off if the payment for the right to use an innovation includes a state-contingent royalty (rather than consisting of just a fixed fee). The inventor wants to benefit from a growing demand for a new product, and the licensee wishes to avoid high payments in case of disappointing sales.

While this explains why parties might opt for royalty-based payments over fixed fees, it does not entirely elucidate the practice of basing royalties on the price of an end device. One explanation is that a technology’s value will often stem from its combination with other goods or technologies. Basing royalties on the value of an end-device enables patent holders to more effectively capture the social benefits that flow from these complementarities.

Imagine the price of the smallest saleable component is identical across all industries, despite it being incorporated into highly heterogeneous devices. For instance, the same modem chip could be incorporated into smartphones (of various price ranges), tablets, vehicles, and other connected devices. The Bousquet line of reasoning (above) suggests that it is efficient for the patent holder to earn higher royalties (from the IP that underpins the modem chips) in those segments where market demand is strongest (i.e. where there are stronger complementarities between the modem chip and the end device).

One way to make royalties more contingent on market success is to use the price of the modem (which is presumably identical across all segments) as a royalty base and negotiate a separate royalty rate for each end device (charging a higher rate for devices that will presumably benefit from stronger consumer demand). But this has important drawbacks. For a start, identifying those segments (or devices) that are most likely to be successful is informationally cumbersome for the inventor. Moreover, this practice could land the patent holder in hot water. Antitrust authorities might naïvely conclude that these varying royalty rates violate the “non-discriminatory” part of FRAND.

A much simpler solution is to apply a single royalty rate (or at least attempt to do so) but use the price of the end device as a royalty base. This ensures that the patent holder’s rewards are not just contingent on the number of devices sold, but also on their value. Royalties will thus more closely track the end-device’s success in the marketplace.   
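To make the intuition concrete, here is a minimal numerical sketch (with entirely hypothetical prices, volumes, and rates, not drawn from the ruling or from any actual licensing agreement) of how a single rate applied to the end-device price tracks market success in a way that a component-based royalty cannot:

```python
# Hypothetical figures only: a rough sketch of the two royalty structures
# discussed above, not a description of any actual license.

devices = {
    "budget_phone":   {"resale_price": 150, "units_sold": 1_000_000},
    "flagship_phone": {"resale_price": 900, "units_sold": 1_000_000},
    "tablet":         {"resale_price": 400, "units_sold": 300_000},
}

CHIP_PRICE = 20             # the component price, identical across segments
CHIP_ROYALTY_RATE = 0.10    # structure (a): rate applied to the chip price
DEVICE_ROYALTY_RATE = 0.03  # structure (b): single rate applied to the device price

for name, d in devices.items():
    # (a) Component-based royalty: the same per-unit payment everywhere,
    #     regardless of how valuable the end device turns out to be.
    component_based = CHIP_PRICE * CHIP_ROYALTY_RATE * d["units_sold"]

    # (b) Device-based royalty: one rate, but the per-unit payment scales
    #     with the value of the device the chip ends up in.
    device_based = d["resale_price"] * DEVICE_ROYALTY_RATE * d["units_sold"]

    print(f"{name:15s} component base: ${component_based:>11,.0f}"
          f"   device base: ${device_based:>11,.0f}")
```

On these made-up numbers, structure (a) earns the inventor the same $2 per unit in every segment, while under structure (b) the per-unit payment rises and falls with the device’s resale price, so the inventor’s reward is tied to market success without it having to identify the high-value segments in advance.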

In short, basing royalties on the value of an end-device is an informationally light way for the inventor to capture some of the unforeseen value that might stem from the inclusion of its technology in an end device. Mandating that royalty rates be based on the value of the smallest saleable component ignores this complex reality.

Prices are almost impossible to reconstruct

Judge Koh was similarly imperceptive when assessing Qualcomm’s contribution to the value of key standards, such as LTE and CDMA. 

For a start, she reasoned that Qualcomm’s royalties were large compared to the number of patents it had contributed to these technologies:

Moreover, Qualcomm’s own documents also show that Qualcomm is not the top standards contributor, which confirms Qualcomm’s own statements that QCT’s monopoly chip market share rather than the value of QTL’s patents sustain QTL’s unreasonably high royalty rates.

Given the tremendous heterogeneity that usually exists between the different technologies that make up a standard, simply counting each firm’s contributions is a crude and misleading way to gauge the value of their patent portfolios. Accordingly, Qualcomm argued that it had made pioneering contributions to technologies such as CDMA and 4G/5G. Though the value of Qualcomm’s technologies is ultimately an empirical question, the court’s crude patent counting was unlikely to provide a satisfying answer.

Just as problematically, the court also concluded that Qualcomm’s royalties were unreasonably high because “modem chips do not drive handset value.” In its own words:

Qualcomm’s intellectual property is for communication, and Qualcomm does not own intellectual property on color TFT LCD panel, mega-pixel DSC module, user storage memory, decoration, and mechanical parts. The costs of these non-communication-related components have become more expensive and now contribute 60-70% of the phone value. The phone is not just for communication, but also for computing, movie-playing, video-taking, and data storage.

As Luke Froeb and his co-authors have also observed, the court’s reasoning on this point is particularly unfortunate. Though it is clearly true that superior LCD panels, cameras, and storage increase a handset’s value – regardless of the modem chip that is associated with them – it is equally obvious that improvements to these components are far more valuable to consumers when they are also associated with high-performance communications technology.

For example, though there is undoubtedly standalone value in being able to take better pictures on a smartphone, this value is multiplied by the ability to instantly share those pictures with friends and automatically back them up to the cloud. Likewise, improving a smartphone’s LCD panel is more valuable if the device is also equipped with a cutting-edge modem (both are necessary for consumers to enjoy high-definition media online).

In more technical terms, the court fails to acknowledge that, in the presence of perfect complements, each good makes an incremental contribution of 100% to the value of the whole. A smartphone’s components would be far less valuable to consumers if they were not associated with a high-performance modem, and vice versa. The fallacy to which the court falls prey is perfectly encapsulated by a quote it cites from Apple’s COO:

Apple invests heavily in the handset’s physical design and enclosures to add value, and those physical handset features clearly have nothing to do with Qualcomm’s cellular patents, it is unfair for Qualcomm to receive royalty revenue on that added value.

The question the court should be asking, however, is whether Apple would have gone to the same lengths to improve its devices were it not for Qualcomm’s complementary communications technology. By ignoring this question, Judge Koh all but guaranteed that her assessment of Qualcomm’s royalty rates would be wide of the mark.
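To see why the 60-70% cost-share logic misleads, consider a stylized illustration. The figures below are entirely hypothetical and are only meant to show how incremental contributions behave when components are strong complements; they are not estimates of any actual handset’s value:

```python
# Hypothetical figures only: a stylized illustration of incremental value when
# components are complements. Not an estimate of any actual handset or ruling.

def handset_value(has_fast_modem: bool, has_premium_parts: bool) -> float:
    """Consumer value (in dollars) of a handset configuration; made-up numbers."""
    if has_fast_modem and has_premium_parts:
        return 800.0   # great camera, display, storage AND fast connectivity
    if has_premium_parts:
        return 150.0   # nice hardware, but crippled connectivity
    if has_fast_modem:
        return 200.0   # well connected, but bare-bones hardware
    return 50.0

full = handset_value(True, True)

# Incremental contribution of each side, holding the other side fixed.
modem_increment = full - handset_value(False, True)   # 650
parts_increment = full - handset_value(True, False)   # 600

# The increments sum to 1,250 on an 800-dollar handset: because the two sides
# are complements, most of the value exists only when both are present, so the
# contributions cannot simply be split by cost share. In the limiting case of
# perfect complements (value is zero unless both are present), each side's
# increment equals 100% of the whole.
print(modem_increment, parts_increment, full)
```

On these made-up numbers, attributing only 30-40% of the handset’s value to the communications technology because the other components account for 60-70% of the bill of materials would plainly understate the modem’s incremental contribution.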

Concluding remarks

In short, the FTC v. Qualcomm case shows that courts will often struggle when they try to act as makeshift price regulators. It thus lends further credence to Gregory Werden and Luke Froeb’s conclusion that:

Nothing is more alien to antitrust than enquiring into the reasonableness of prices. 

This is especially true in complex industries, such as the standardization space. The parameters that affect the price of a technology are so numerous that they are almost impossible to reproduce in a top-down fashion, as the court attempted to do in the Qualcomm case. As a result, courts will routinely draw poor inferences from factors such as the royalty base agreed upon by the parties, the number of patents contributed by a firm, and the complex manner in which an individual technology may contribute to the value of an end-product. Antitrust authorities and courts would thus do well to recall the wise words of Friedrich Hayek:

If we can agree that the economic problem of society is mainly one of rapid adaptation to changes in the particular circumstances of time and place, it would seem to follow that the ultimate decisions must be left to the people who are familiar with these circumstances, who know directly of the relevant changes and of the resources immediately available to meet them. We cannot expect that this problem will be solved by first communicating all this knowledge to a central board which, after integrating all knowledge, issues its orders. We must solve it by some form of decentralization.

[This post is the sixth in an ongoing symposium on “Should We Break Up Big Tech?” that features analysis and opinion from various perspectives.]

[This post is authored by Thibault Schrepel, Faculty Associate at the Berkman Center at Harvard University and Assistant Professor in European Economic Law at Utrecht University School of Law.]

The pretense of ignorance

Over the last few years, I have published a series of antitrust conversations with Nobel laureates in economics. I have discussed big tech dominance with most of them, and although they have different perspectives, all of them agreed on one thing: they do not know what the effect of breaking up big tech would be. In fact, I have never spoken with any economist who was able to show me convincing empirical evidence that breaking up big tech would on net be good for consumers. The same goes for political scientists; I have never read any article that, taking everything into consideration, proves empirically that breaking up tech companies would be good for protecting democracies, if that is the objective (please note that I am not even discussing the fact that using antitrust law to do that would violate the rule of law; for more on the subject, click here).

This reminds me of Friedrich Hayek’s Nobel memorial lecture, in which he discussed the “pretense of knowledge.” He argued that some issues will always remain too complex for humans (even helped by quantum computers and the most advanced AI; that’s right!). Breaking up big tech is one such issue; it is simply impossible to consider simultaneously the micro- and macroeconomic impacts of such an enormous undertaking, which would affect, literally, billions of people. Not to mention the political, sociological, and legal issues, all of which combined are beyond human understanding.

Ignorance + fear = fame

In the absence of clear-cut conclusions, here is why (I think) some officials are arguing for breaking up big tech. First, it may be that some of them actually believe it would be great. But I am sure we agree that beliefs should not be a valid basis for such actions. More realistically, the answer can be found in the work of another Nobel laureate, James Buchanan, and in particular his 1978 lecture in Vienna entitled “Politics Without Romance.”

In his lecture and the paper that emerged from it, Buchanan argued that while markets fail, so do governments. The latter is especially relevant insofar as top officials entrusted with public power may, occasionally at least, use that power to benefit their personal interests rather than the public interest. Thus, the presumption that government-imposed corrections for market failures always accomplish the desired objectives must be rejected. Taking that into consideration, it follows that the expected effectiveness of public action should always be established as precisely and scientifically as possible before taking action. Integrating these insights from Hayek and Buchanan, we must conclude that it is not possible to know whether the effects of breaking up big tech would on net be positive.

The question, then, is why, in the absence of positive empirical evidence, some officials are arguing for breaking up tech giants. Well, because defending such actions may help them achieve their personal goals. Often, it is more important for public officials to show their muscle and take action than to show great care about reaching a positive net result for society. This is especially true when it is practically impossible to evaluate the outcome due to the scale and complexity of the changes that would ensue. That enables these officials to take credit for being bold while avoiding blame for the harms.

But for such a call to be profitable for public officials, they first must legitimize the potential action in the eyes of the majority of the public. So far, most consumers evidently like the services of tech giants, which is why it is crucial for the officials engaged in such a strategy to demonize those companies and to explain to consumers why they are wrong to enjoy them. Only then does defending the breakup of tech giants become politically valuable.

Some data, one trend

In a recent paper entitled “Antitrust Without Romance,” I have analyzed the speeches of the five current FTC commissioners, as well as the speeches of the current and three previous EU Competition Commissioners. What I found is an increasing trend to demonize big tech companies. In other words, public officials increasingly seek to prepare the general public for the idea that breaking up tech giants would be great.

In Europe, current Competition Commissioner Margrethe Vestager has sought to establish an opposition between the people (referred to with the pronoun “us”) and tech companies (referred to with the pronoun “them”) in more than 80% of her speeches. She further describes these companies as engaging in manipulation of the public and unleashing violence. She says they “distort or fabricate information, manipulate people’s views and degrade public debate” and help “harmful, untrue information spread faster than ever, unleashing violence and undermining democracy.” Furthermore, she says they cause “danger of death.” On this basis, she mentions the possibility of breaking them up (for more data about her speeches, see this link).

In the US, we did not observe a similar trend. Assistant Attorney General Makan Delrahim, who has responsibility for antitrust enforcement at the Department of Justice, describes the relationship between people and companies as being in opposition in fewer than 10% of his speeches. The same goes for most of the FTC commissioners (to see all the data about their speeches, see this link). The exceptions are FTC Chairman Joseph J. Simons, who describes companies’ behavior as “bad” from time to time (and underlines that consumers “deserve” better), and Commissioner Rohit Chopra, who describes the relationship between companies and the people as being in opposition to one another in 30% of his speeches. Chopra also frequently labels companies as “bad.” These are minor signs of big tech demonization compared to what is currently done by European officials. But, unfortunately, part of US legal scholarship (which does not hide its political objectives) pushes for demonizing big tech companies. One may have reason to fear that such a trend will grow in the US as it has in Europe, especially considering the upcoming presidential campaign, in which far-right and far-left politicians seem to agree about the need to break up big tech.

And yet, let’s remember that no one has any documented, tangible, and reproducible evidence that breaking up tech giants would be good for consumers, or societies at large, or, in fact, for anyone (even dolphins, okay). It might be a good idea; it might be a bad idea. Who knows? But the lack of evidence either way militates against taking such action. Meanwhile, there is strong evidence that these discussions are fueled by a handful of individuals wishing to benefit from such a call for action. They do so, first, by depicting tech giants as the new elite standing in opposition to the people, and then by portraying themselves as the only saviors capable of taking action.

Epilogue: who knows, life is not a Tarantino movie

For the last 30 years, antitrust law has been largely immune to strategic takeover by political interests. It may now be returning to a previous era in which it was the instrument of a few. This transformation is already happening in Europe (it is expected to hit case law there quite soon) and is getting real in the US, where groups display political goals and make antitrust law a Trojan horse for their personal interests. The only semblance of evidence they bring is a few allegedly harmful micro-practices (see Amazon’s Antitrust Paradox), which they use as a basis for defending the urgent need for macro, structural measures, such as breaking up tech companies. This is disproportionate, but most of all, in the absence of better knowledge, purely opportunistic and potentially foolish. Who knows at this point whether antitrust law will come out of this populist and moralist episode intact? And who knows what the next idea of those who want to use antitrust law for purely political purposes will be? Life is not a Tarantino movie; it may end badly.

After spending a few years away from ICLE and directly engaging in the day-to-day grind of indigent criminal defense as a public defender, I now have a new appreciation for the ways economic tools can explain behavior I had not previously studied. For instance, I think the law and economics tradition, specifically the insights of Ludwig von Mises and Friedrich von Hayek on the importance of price signals, can explain one of the major problems for public defenders and their clients: without price signals, there is no rational way to determine the best way to spend one’s time.

I believe the most common complaints about how public defenders represent their clients are better understood not primarily as the result of a lack of funding, a lack of effort or care, or even simply a lack of time for overburdened lawyers, but as the result of an allocation problem. In the absence of price signals, there is no rational way to determine the best way to spend one’s time as a public defender. (Note: Many jurisdictions use the model of indigent defense described here, in which lawyers are paid a salary to work for the public defender’s office. However, others use models like contracting lawyers for particular cases, appointing lawyers for a flat fee, relying on non-profit agencies, or combining approaches in some type of hybrid. These models all have their own advantages and disadvantages, but this blog post is only about the issue of price signals for lawyers who work within a public defender’s office.)

As Mises and Hayek taught us, price signals carry a great deal of information; indeed, they make economic calculation possible. Their critique of socialism was built around this idea: that the person in charge of making economic choices without prices and the profit-and-loss mechanism is “groping in the dark.”

This isn’t to say that people haven’t tried to find ways to figure out how best to spend their time in the absence of the profit-and-loss mechanism. In such environments, bureaucratic rules often replace price signals in directing human action. For instance, lawyers have rules of professional conduct. These rules, along with concerns about reputation and other institutional checks, may guide lawyers on how best to spend their time as a general matter. But even these things are no match for price signals in determining the most efficient way to allocate the scarcest resource of all: time.

Imagine two lawyers: one works for a public defender’s office and receives a salary that does not depend on caseload or billable hours; the other is a private defense lawyer who charges his client for the work he puts in.

In either case, the lawyer who is handed a file for a case scheduled for trial months in advance has a choice to make: do I start working on this now, or do I put it on the back burner because of cases with much closer deadlines? A cursory review of the file shows there may be a possible suppression issue that will require further investigation. A successful suppression motion would likely lead to a resolution of the case that does not result in a conviction, but it would take considerable time – time that could be spent working on numerous client files with closer trial dates. For the sake of this hypothetical, assume there is a strong legal basis to file the suppression motion (i.e., it is not frivolous).

The private defense lawyer has a mechanism beyond what is available to public defenders to determine how to handle this case: price signals. He can bring the suppression issue to his client’s attention, explain the likelihood of success, and then offer to file and argue the suppression motion for some agreed upon price. The client would then have the ability to determine with counsel whether this is worthwhile.

The public defender, on the other hand, does not have price signals to determine where to put this suppression motion among his other workload. He could spend the time necessary to develop the facts and research the law for the suppression motion, but unless there is a quickly approaching deadline for the motion to be filed, there will be many other cases in the queue with closer deadlines begging for his attention. Clients, who have no rationing principle based in personal monetary costs, would obviously prefer their public defender file any and all motions which have any chance whatsoever to help them, regardless of merit.

What this hypothetical shows is that public defenders do not face the same incentive structure as private lawyers when it comes to the allocation of time. But neither do criminal defendants. Indigent defendants who qualify for public defender representation often complain about their “public pretender” for “not doing anything for them.” But the simple truth is that the public defender is making choices about how to spend his time more or less by his own determination of where he can be most useful. Deadlines often drive the review of cases, along with who sends the most letters and/or calls. The actual evaluation of which cases have the most merit can fall through the cracks. Oftentimes, this means cases are worked on in chronological order, but insufficient time and effort is spent on particular cases that would have merited more investment, because of quickly approaching deadlines on other cases. Sometimes this means that the most annoying clients get the most time spent on their behalf, irrespective of the merits of their case. At best, public defenders act like battlefield medics, attempting to perform triage by spending their time where they believe they can help the most.
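As a purely illustrative sketch (the cases, numbers, and decision rules below are hypothetical, and real triage is obviously messier), the difference between the two allocation mechanisms might be caricatured like this:

```python
# Hypothetical caseload: a caricature of the allocation problem, not a model
# of any real office. Fields: days until deadline, client letters/calls, and
# what the client would be willing to pay for the work (the price signal).

cases = [
    ("suppression_motion_case", 90, 1, 5_000),  # strong motion, distant deadline
    ("routine_plea_case_A",      7, 0,   500),
    ("routine_plea_case_B",     10, 6,   400),  # client writes and calls constantly
]

# Private lawyer: the agreed fee reveals roughly how much the work is worth to
# the client, so high-value work surfaces even when its deadline is distant.
by_price_signal = sorted(cases, key=lambda c: c[3], reverse=True)

# Public defender: with no price signal, proxies such as looming deadlines and
# the volume of letters/calls end up driving the ordering.
by_heuristics = sorted(cases, key=lambda c: (c[1], -c[2]))

print([name for name, *_ in by_price_signal])  # suppression motion ranked first
print([name for name, *_ in by_heuristics])    # deadlines and squeaky wheels first
```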

Unlike private criminal defense lawyers, public defenders typically can’t reject cases because their caseload has grown too big, or charge a higher price in order to take on a particularly difficult and time-consuming case. The public defender is therefore stuck guessing at the best use of his time with the heuristics described above and doing the very best he can under the circumstances. Unfortunately, those heuristics simply can’t replace price signals in determining the best use of one’s time.

As criminal justice reform becomes a policy issue for both left and right, law and economics analysis should have a place in the conversation. Any reform of indigent defense that is part of this broader effort should take into consideration the calculation problem inherent to the public defender’s office. Other institutional arrangements that do not suffer from this particular problem, such as a well-designed voucher system, may be preferable.