Archives For Friedrich Hayek

[Image: Statue of Adam Smith in front of St. Giles Cathedral at Parliament Square, Edinburgh.]

The interplay among political philosophy, competition, and competition law remains, with some notable exceptions, understudied in the literature. Indeed, while examinations of the intersection between economics and competition law have taught us much, relatively little has been said about the value frameworks within which different visions of competition and competition law operate.

As Ronald Coase reminds us, questions of economics and political philosophy are interrelated, so that “problems of welfare economics must ultimately dissolve into a study of aesthetics and morals.” When we talk about economics, we talk about political philosophy, and vice versa. Every political philosophy produces economic prescriptions that reflect its core tenets. And every economic arrangement, in turn, evokes the normative values that undergird it. This is as true for socialism and fascism as it is for liberalism and neoliberalism.

Many economists have understood this. Milton Friedman, for instance, who spent most of his career studying social welfare, not ethics, admitted in Free to Choose that he was ultimately concerned with the preservation of a value: the liberty of the individual. Similarly, the avowed purpose of Friedrich Hayek’s The Constitution of Liberty was to maximize the state of human freedom, with coercion—i.e., the opposite of freedom—described as evil. James Buchanan fought to preserve political philosophy within the economic discipline, particularly worrying that:

Political economy was becoming unmoored from the types of philosophic and institutional analysis which were previously central to the field. In its flight from reality, Buchanan feared economics was in danger of abandoning social-philosophic issues for exclusively technical questions.

— John Kroencke, “Three Essays in the History of Economics”

Against this background, I propose to look at competition and competition law from a perspective that explicitly recognizes this connection. The goal is not to substitute, but rather to complement, our comparatively broad understanding of competition economics with a better grasp of the deeper normative implications of regulating competition in a certain way. If we agree with Robert Bork that antitrust is a subcategory of ideology that reflects and reacts upon deeper tensions in our society, the exercise might also be relevant beyond the relatively narrow confines of antitrust scholarship (which, on the other hand, seem to be getting wider and wider).

The Classical Liberal Revolution and the Unshackling of Competition

Mercantilism

When Adam Smith’s The Wealth of Nations was published in 1776, heavy economic regulation of the market through laws, by-laws, tariffs, and special privileges was the norm. Restrictions on imports were seen as protecting national wealth by preventing money from flowing out of the country—a policy premised on the conflation of money with wealth. A morass of legally backed and enforceable monopoly rights, granted either by royal decree or government-sanctioned by-laws, marred competition. Guilds reigned over tradesmen by restricting entry into the professions and segregating markets along narrow geographic lines. At every turn, economic activity was shot through with rules, restrictions, and regulations.

The Revolution in Political Economy

Classical liberals like Smith departed from the then-dominant mercantilist paradigm by arguing that nations prospered through trade and competition, not protectionism and monopoly privileges. Smith demonstrated that both the seller and the buyer benefit from trade, and he theorized the market as an automatic mechanism that allocates resources efficiently through the spontaneous, self-interested interaction of individuals.

Undergirding this position was the notion of the natural order, which Smith carried over from his own Theory of Moral Sentiments and which elaborated on arguments previously espoused by the French physiocrats (a neologism meaning “the rule of nature”), such as Anne Robert Jacques Turgot, François Quesnay, and Jacques Claude Marie Vincent de Gournay. The basic premise was that there existed a harmonious order of things, established and maintained through a spontaneous balancing of individual self-interest with the greatest welfare of all.

The implications of this modest insight, which clashed directly with established mercantilist orthodoxy, were tremendous. If human freedom maximized social welfare, the justification for detailed government intervention in the economy was untenable. The principles of laissez-faire (a term probably coined by Gournay, who had been Turgot’s mentor) instead prescribed that the government should adopt a “night watchman” role, tending to modest tasks such as internal and external defense, the mediation of disputes, and certain public works that were not deemed profitable for the individual.

Freeing Competition from the Mercantilist Yoke

Smith’s general attitude also carried over to competition. Following the principles described above, classical liberals believed that price and product adjustments following market interactions among tradesmen (i.e., competition) would automatically maximize social utility. As Smith argued:

In general, if any branch of trade, or any division of labor, be advantageous to the public, the freer and more general the competition, it will always be the more so.

This did not mean that competition occurred in a legal void. Rather, Smith’s point was that there was no need to construct a comprehensive system of competition regulation, as markets would oversee themselves so long as a basic legal and institutional framework was in place and government refrained from actively abetting monopolies. Under this view, the only necessary “competition law” would be those individual laws that made competition possible, such as private property rights, contracts, unfair competition laws, and the laws against government and guild restrictions.

Liberal Political Philosophy: Utilitarian and Deontological Perspectives on Liberty and Individuality

Of course, this sort of volte-face in political economy needed to be buttressed by a robust philosophical conception of the individual and the social order. Such ontological and moral theories were articulated in, among others, the Theory of Moral Sentiments and John Stuart Mill’s On Liberty. At the heart of the liberal position was the idea that undue restrictions on human freedom and individuality were not only intrinsically despotic, but also socially wasteful, as they precluded men from enjoying the fruits of the exercise of such freedoms. For instance, infringing the freedom to trade and to compete would rob the public of cheaper goods, while restrictions on freedom of expression would arrest the development of thoughts and ideas through open debate.

It is not clear whether the material or the ethical argument for freedom came first: that is, whether classical liberalism constituted an ex-post rationalization of a moral preference for individual liberty, or precisely the reverse. The question may be immaterial, as classical liberals generally believed that the deontological and the consequentialist cases for liberty—save in the most peripheral of cases (e.g., violence against others)—largely overlapped.

Conclusion

In sum, classical liberalism offered a holistic, integrated view of societies, markets, morals, and individuals that was revolutionary for the time. The notion of competition as a force to be unshackled—rather than actively constructed and chaperoned—flowed organically from that account and its underlying values and assumptions. These included such values as personal freedom and individualism, along with foundational metaphysical presuppositions, such as the existence of a harmonious natural order that seamlessly guided individual actions for the benefit of the whole.

Where such base values and presumptions are eroded, however, the notion of a largely spontaneous, self-sustaining competitive process loses much of its rational, ethical, and moral legitimacy. Competition thus ceases to be tenable on its “own two feet” and must either be actively engineered and protected, or abandoned altogether as a viable organizing principle. In this sense, the crisis of liberalism the West experienced in the late 19th and early 20th centuries—which attacked the very foundations of classical liberal doctrine—can also be read as a crisis of competition.

In my next post, I’ll discuss the collectivist backlash against liberalism.

[This post adapts elements of “Should ASEAN Antitrust Laws Emulate European Competition Policy?”, published in the Singapore Economic Review (2021). Open access working paper here.]

U.S. and European competition laws diverge in numerous ways that have important real-world effects. Understanding these differences is vital, particularly as lawmakers in the United States, and the rest of the world, consider adopting a more “European” approach to competition.

In broad terms, the European approach is more centralized and political. The European Commission’s Directorate General for Competition (DG Comp) has significant de facto discretion over how the law is enforced. This contrasts with the common law approach of the United States, in which courts elaborate upon open-ended statutes through an iterative process of case law. In other words, the European system was built from the top down, while U.S. antitrust relies on a bottom-up approach, derived from arguments made by plaintiffs (including the government antitrust agencies) and defendants (usually businesses).

This procedural divergence has significant ramifications for substantive law. European competition law includes more provisions akin to de facto regulation. This is notably the case for the “abuse of dominance” standard, in which a “dominant” business can be prosecuted for “abusing” its position by charging high prices or refusing to deal with competitors. By contrast, the U.S. system places more emphasis on actual consumer outcomes, rather than the nature or “fairness” of an underlying practice.

The American system thus affords firms more leeway to exclude their rivals, so long as this entails superior benefits for consumers. This may make the U.S. system more hospitable to innovation, since there is no built-in regulation of conduct for innovators who acquire a successful market position fairly and through normal competition.

In this post, we discuss some key differences between the two systems—including in areas like predatory pricing and refusals to deal—as well as the discretionary power the European Commission enjoys under the European model.

Exploitative Abuses

U.S. antitrust is, by and large, unconcerned with companies charging what some might consider “excessive” prices. The late Associate Justice Antonin Scalia, writing for the Supreme Court majority in the 2004 case Verizon v. Trinko, observed that:

The mere possession of monopoly power, and the concomitant charging of monopoly prices, is not only not unlawful; it is an important element of the free-market system. The opportunity to charge monopoly prices—at least for a short period—is what attracts “business acumen” in the first place; it induces risk taking that produces innovation and economic growth.

This contrasts with European competition-law cases, where firms may be found to have infringed competition law because they charged excessive prices. As the European Court of Justice (ECJ) held in 1978’s United Brands case: “In this case charging a price which is excessive because it has no reasonable relation to the economic value of the product supplied would be such an abuse.”

While United Brands was the EU’s foundational case on excessive pricing, and the European Commission reiterated in its 2009 guidance paper on abuse-of-dominance cases that such exploitative abuses remained actionable, the commission had for some time shown little interest in actually bringing them. In recent years, however, both the European Commission and some national authorities have shown renewed interest in excessive-pricing cases, most notably in the pharmaceutical sector.

European competition law also penalizes so-called “margin squeeze” abuses, in which a dominant upstream supplier charges a price to distributors that is too high for them to compete effectively with that same dominant firm downstream:

[I]t is for the referring court to examine, in essence, whether the pricing practice introduced by TeliaSonera is unfair in so far as it squeezes the margins of its competitors on the retail market for broadband connection services to end users. (Konkurrensverket v TeliaSonera Sverige, 2011)

As Scalia observed in Trinko, forcing firms to charge prices that are below a market’s natural equilibrium affects firms’ incentives to enter markets, notably with innovative products and more efficient means of production. But the problem is not just one of market entry and innovation.  Also relevant is the degree to which competition authorities are competent to determine the “right” prices or margins.

As Friedrich Hayek demonstrated in his influential 1945 essay The Use of Knowledge in Society, economic agents use information gleaned from prices to guide their business decisions. It is this distributed activity of thousands or millions of economic actors that enables markets to put resources to their most valuable uses, thereby leading to more efficient societies. By comparison, the efforts of central regulators to set prices and margins are necessarily inferior; there is simply no reasonable way for competition regulators to make such judgments in a consistent and reliable manner.
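To give a feel for the decentralized mechanism Hayek describes, here is a minimal, purely illustrative sketch (the demand and supply functions and every parameter are assumptions invented for this example): buyers and sellers each react only to the posted price, and a simple adjustment rule moves the price toward the level that clears the market, without any central authority ever observing anyone’s costs or preferences.

```python
# Purely illustrative tatonnement sketch; the demand/supply functions and all
# parameters are invented assumptions, not data.

def quantity_demanded(price):
    # Buyers' aggregate response to the price alone (their preferences stay private).
    return max(0.0, 100.0 - 4.0 * price)

def quantity_supplied(price):
    # Sellers' aggregate response to the price alone (their costs stay private).
    return 10.0 * price

price = 2.0   # arbitrary starting price
step = 0.01   # how strongly the price reacts to excess demand

for _ in range(1000):
    excess_demand = quantity_demanded(price) - quantity_supplied(price)
    price += step * excess_demand  # price rises when the good is scarce, falls when abundant

print(f"price settles near {price:.2f}")                     # ~7.14, where demand equals supply
print(f"quantity traded:   {quantity_demanded(price):.1f}")  # ~71.4
```

The particular numbers are beside the point; what matters is that no participant, and no regulator, needs to know the underlying curves for the price to do its coordinating work.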

Given the substantial risk that investigations into purportedly excessive prices will deter market entry, such investigations should be circumscribed. But the court’s precedents, with their myopic focus on ex post prices, do not impose such constraints on the commission. The temptation to “correct” high prices—especially in the politically contentious pharmaceutical industry—may thus induce economically unjustified and ultimately deleterious intervention.

Predatory Pricing

A second important area of divergence concerns predatory-pricing cases. U.S. antitrust law subjects allegations of predatory pricing to two strict conditions:

  1. Monopolists must charge prices that are below some measure of their incremental costs; and
  2. There must be a realistic prospect that they will be able to recoup these initial losses.

In laying out its approach to predatory pricing, the U.S. Supreme Court has identified the risk of false positives and the clear cost of such errors to consumers. It thus has particularly stressed the importance of the recoupment requirement. As the court found in 1993’s Brooke Group Ltd. v. Brown & Williamson Tobacco Corp., without recoupment, “predatory pricing produces lower aggregate prices in the market, and consumer welfare is enhanced.”

Accordingly, U.S. authorities must prove that there are constraints that prevent rival firms from entering the market after the predation scheme, or that the scheme itself would effectively foreclose rivals from entering the market in the first place. Otherwise, the predator would be undercut by competitors as soon as it attempts to recoup its losses by charging supra-competitive prices.

Without the strong likelihood that a monopolist will be able to recoup lost revenue from underpricing, the overwhelming weight of economic evidence (to say nothing of simple logic) is that predatory pricing is not a rational business strategy. Thus, apparent cases of predatory pricing are most likely not, in fact, predatory; deterring or punishing them would actually harm consumers.
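To see why recoupment does the analytical work here, consider a minimal back-of-the-envelope sketch (all figures are made-up assumptions, not drawn from any case): a predation campaign only pays off, in present-value terms, if a period of monopoly pricing follows; if rivals can simply re-enter, the predator has burned money and consumers have pocketed the low prices.

```python
# Toy net-present-value comparison; every figure is an invented assumption.

def npv(cash_flows, discount_rate=0.05):
    """Net present value of a sequence of per-period cash flows."""
    return sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cash_flows))

predation_losses = [-100, -100, -100]        # three periods of below-cost pricing

# Scenario 1: entry barriers let the predator charge monopoly prices afterward.
recoupment_profits = [80, 80, 80, 80, 80]
print(f"NPV with recoupment:    {npv(predation_losses + recoupment_profits):7.1f}")  # positive

# Scenario 2: rivals re-enter as soon as prices rise, so there is nothing to recoup.
print(f"NPV without recoupment: {npv(predation_losses):7.1f}")                       # negative
```

On these assumed figures, the scheme has positive value only in the first scenario, which is the intuition behind Brooke Group’s insistence on a realistic prospect of recoupment.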

By contrast, the EU employs a more expansive legal standard to define predatory pricing, and almost certainly risks injuring consumers as a result. Authorities must prove only that a company has charged a price below its average variable cost, in which case its behavior is presumed to be predatory. Even when a firm charges prices that are between its average variable and average total cost, it can be found guilty of predatory pricing if authorities show that its behavior was part of a plan to eliminate a competitor. Most significantly, in neither case is it necessary for authorities to show that the scheme would allow the monopolist to recoup its losses.

[I]t does not follow from the case‑law of the Court that proof of the possibility of recoupment of losses suffered by the application, by an undertaking in a dominant position, of prices lower than a certain level of costs constitutes a necessary precondition to establishing that such a pricing policy is abusive. (France Télécom v Commission, 2009).

This aspect of the legal standard has no basis in economic theory or evidence—not even in the “strategic” economic theory that arguably challenges the dominant Chicago School understanding of predatory pricing. Indeed, strategic predatory pricing still requires some form of recoupment, and the refutation of any convincing business justification offered in response. For example, in a 2017 piece for the Antitrust Law Journal, Steven Salop lays out the “raising rivals’ costs” analysis of predation and notes that recoupment still occurs, just at the same time as predation:

[T]he anticompetitive conditional pricing practice does not involve discrete predatory and recoupment periods, as in the case of classical predatory pricing. Instead, the recoupment occurs simultaneously with the conduct. This is because the monopolist is able to maintain its current monopoly power through the exclusionary conduct.

The case of predatory pricing illustrates a crucial distinction between European and American competition law. The recoupment requirement embodied in American antitrust law serves to differentiate aggressive pricing behavior that improves consumer welfare—because it leads to overall price decreases—from predatory pricing that reduces welfare with higher prices. It is, in other words, entirely focused on the welfare of consumers.

The European approach, by contrast, reflects structuralist considerations far removed from a concern for consumer welfare. Its underlying fear is that dominant companies could use aggressive pricing to engender more concentrated markets. It is simply presumed that these more concentrated markets are invariably detrimental to consumers. Both the Tetra Pak and France Télécom cases offer clear illustrations of the ECJ’s reasoning on this point:

[I]t would not be appropriate, in the circumstances of the present case, to require in addition proof that Tetra Pak had a realistic chance of recouping its losses. It must be possible to penalize predatory pricing whenever there is a risk that competitors will be eliminated… The aim pursued, which is to maintain undistorted competition, rules out waiting until such a strategy leads to the actual elimination of competitors. (Tetra Pak v Commission, 1996).

Similarly:

[T]he lack of any possibility of recoupment of losses is not sufficient to prevent the undertaking concerned reinforcing its dominant position, in particular, following the withdrawal from the market of one or a number of its competitors, so that the degree of competition existing on the market, already weakened precisely because of the presence of the undertaking concerned, is further reduced and customers suffer loss as a result of the limitation of the choices available to them.  (France Télécom v Commission, 2009).

In short, the European approach leaves less room to analyze the concrete effects of a given pricing scheme, leaving it more prone to false positives than the U.S. standard explicated in the Brooke Group decision. Worse still, the European approach ignores not only the benefits that consumers may derive from lower prices, but also the chilling effect that broad predatory pricing standards may exert on firms that would otherwise seek to use aggressive pricing schemes to attract consumers.

Refusals to Deal

U.S. and EU antitrust law also differ greatly when it comes to refusals to deal. While the United States has limited the ability of either enforcement authorities or rivals to bring such cases, EU competition law sets a far lower threshold for liability.

As Justice Scalia wrote in Trinko:

Aspen Skiing is at or near the outer boundary of §2 liability. The Court there found significance in the defendant’s decision to cease participation in a cooperative venture. The unilateral termination of a voluntary (and thus presumably profitable) course of dealing suggested a willingness to forsake short-term profits to achieve an anticompetitive end. (Verizon v Trinko, 2004.)

This highlights two key features of American antitrust law with regard to refusals to deal. To start, U.S. antitrust law generally does not apply the “essential facilities” doctrine. Accordingly, in the absence of exceptional facts, upstream monopolists are rarely required to supply their product to downstream rivals, even if that supply is “essential” for effective competition in the downstream market. Moreover, as Justice Scalia observed in Trinko, the Aspen Skiing case appears to concern only those limited instances where a firm’s refusal to deal stems from the termination of a preexisting and profitable business relationship.

While even this is not likely the economically appropriate limitation on liability, its impetus—ensuring that liability is found only in situations where procompetitive explanations for the challenged conduct are unlikely—is completely appropriate for a regime concerned with minimizing the cost to consumers of erroneous enforcement decisions.

As in most areas of antitrust policy, EU competition law is much more interventionist. Refusals to deal are a central theme of EU enforcement efforts, and there is a relatively low threshold for liability.

In theory, for a refusal to deal to infringe EU competition law, it must meet a set of fairly stringent conditions: the input must be indispensable, the refusal must eliminate all competition in the downstream market, and there must not be objective reasons that justify the refusal. Moreover, if the refusal to deal involves intellectual property, it must also prevent the appearance of a new good.

In practice, however, all of these conditions have been relaxed significantly by EU courts and the commission’s decisional practice. This is best evidenced by the lower court’s Microsoft ruling where, as John Vickers notes:

[T]he Court found easily in favor of the Commission on the IMS Health criteria, which it interpreted surprisingly elastically, and without relying on the special factors emphasized by the Commission. For example, to meet the “new product” condition it was unnecessary to identify a particular new product… thwarted by the refusal to supply but sufficient merely to show limitation of technical development in terms of less incentive for competitors to innovate.

EU competition law thus shows far less concern for its potential chilling effect on firms’ investments than does U.S. antitrust law.

Vertical Restraints

There are vast differences between U.S. and EU competition law relating to vertical restraints—that is, contractual restraints between firms that operate at different levels of the production process.

On the one hand, since the Supreme Court’s Leegin ruling in 2007, even price-related vertical restraints (such as resale price maintenance (RPM), under which a manufacturer can stipulate the prices at which retailers must sell its products) are assessed under the rule of reason in the United States. Some commentators have gone so far as to say that, in practice, U.S. case law on RPM almost amounts to per se legality.

Conversely, EU competition law treats RPM as severely as it treats cartels. Both RPM and cartels are considered to be restrictions of competition “by object”—the EU’s equivalent of a per se prohibition. This severe treatment also applies to non-price vertical restraints that tend to partition the European internal market.

Furthermore, in the Consten and Grundig ruling, the ECJ rejected the consequentialist, and economically grounded, principle that inter-brand competition is the appropriate framework to assess vertical restraints:

Although competition between producers is generally more noticeable than that between distributors of products of the same make, it does not thereby follow that an agreement tending to restrict the latter kind of competition should escape the prohibition of Article 85(1) merely because it might increase the former. (Consten SARL & Grundig-Verkaufs-GMBH v. Commission of the European Economic Community, 1966).

This treatment of vertical restrictions flies in the face of longstanding mainstream economic analysis of the subject. As Patrick Rey and Jean Tirole conclude:

Another major contribution of the earlier literature on vertical restraints is to have shown that per se illegality of such restraints has no economic foundations.

Unlike the EU, the U.S. Supreme Court in Leegin took account of the weight of the economic literature, and changed its approach to RPM to ensure that the law no longer simply precluded its arguable consumer benefits, writing: “Though each side of the debate can find sources to support its position, it suffices to say here that economics literature is replete with procompetitive justifications for a manufacturer’s use of resale price maintenance.” Further, the court found that the prior approach to resale price maintenance restraints “hinders competition and consumer welfare because manufacturers are forced to engage in second-best alternatives and because consumers are required to shoulder the increased expense of the inferior practices.”

The EU’s continued per se treatment of RPM, by contrast, strongly reflects its “precautionary principle” approach to antitrust. European regulators and courts readily condemn conduct that could conceivably injure consumers, even where such injury is, according to the best economic understanding, exceedingly unlikely. The U.S. approach, which rests on likelihood rather than mere possibility, is far less likely to condemn beneficial conduct erroneously.

Political Discretion in European Competition Law

EU competition law lacks the coherent analytical framework that U.S. law derives from its reliance on the consumer welfare standard. The EU process is driven by a number of laterally equivalent—and sometimes mutually exclusive—goals, including industrial policy and the perceived need to counteract foreign state ownership and subsidies. Such a wide array of conflicting aims produces a lack of clarity for firms seeking to conduct business. Moreover, the discretion that attends this fluid arrangement of goals yields an even larger problem.

The Microsoft case illustrates this problem well. In Microsoft, the commission could have based its decision on any of several objectives. It notably chose to rest its findings on the claim that Microsoft’s behavior reduced “consumer choice.”

The commission, in fact, discounted arguments that economic efficiency may lead to consumer welfare gains, because it determined “consumer choice” among media players was more important:

Another argument relating to reduced transaction costs consists in saying that the economies made by a tied sale of two products saves resources otherwise spent for maintaining a separate distribution system for the second product. These economies would then be passed on to customers who could save costs related to a second purchasing act, including selection and installation of the product. Irrespective of the accuracy of the assumption that distributive efficiency gains are necessarily passed on to consumers, such savings cannot possibly outweigh the distortion of competition in this case. This is because distribution costs in software licensing are insignificant; a copy of a software programme can be duplicated and distributed at no substantial effort. In contrast, the importance of consumer choice and innovation regarding applications such as media players is high. (Commission Decision No. COMP. 37792 (Microsoft)).

It may be true that tying the products in question was unnecessary. But merely dismissing this decision because distribution costs are near-zero is hardly an analytically satisfactory response. There are many more costs involved in creating and distributing complementary software than those associated with hosting and downloading. The commission also simply asserts that consumer choice among some arbitrary number of competing products is necessarily a benefit. This, too, is not necessarily true, and the decision’s implication that any marginal increase in choice is more valuable than any gains from product design or innovation is analytically incoherent.

The Court of First Instance was only too happy to give the commission a pass on this breezy analysis; it saw no objection to these findings and, with little substantive reasoning of its own, fully endorsed the commission’s assessment:

As the Commission correctly observes (see paragraph 1130 above), by such an argument Microsoft is in fact claiming that the integration of Windows Media Player in Windows and the marketing of Windows in that form alone lead to the de facto standardisation of the Windows Media Player platform, which has beneficial effects on the market. Although, generally, standardisation may effectively present certain advantages, it cannot be allowed to be imposed unilaterally by an undertaking in a dominant position by means of tying.

The Court further notes that it cannot be ruled out that third parties will not want the de facto standardisation advocated by Microsoft but will prefer it if different platforms continue to compete, on the ground that that will stimulate innovation between the various platforms. (Microsoft Corp. v Commission, 2007)

Pointing to these conflicting effects of Microsoft’s bundling decision, without weighing either, is a weak basis to uphold the commission’s decision that consumer choice outweighs the benefits of standardization. Moreover, actions undertaken by other firms to enhance consumer choice at the expense of standardization are, on these terms, potentially just as problematic. The dividing line becomes solely which theory the commission prefers to pursue.

What such a practice does is vest the commission with immense discretionary power. Any given case sets up a “heads, I win; tails, you lose” situation in which defendants are easily outflanked by a commission that can change the rules of its analysis as it sees fit. Defendants can play only the cards that they are dealt. Accordingly, Microsoft could not successfully challenge a conclusion that its behavior harmed consumers’ choice by arguing that it improved consumer welfare, on net.

By selecting, in this instance, “consumer choice” as the standard to be judged, the commission was able to evade the constraints that might have been imposed by a more robust welfare standard. Thus, the commission can essentially pick and choose the objectives that best serve its interests in each case. This vastly enlarges the scope of potential antitrust liability, while also substantially decreasing the ability of firms to predict when their behavior may be viewed as problematic. It leads to what, in U.S. courts, would be regarded as an untenable risk of false positives that chill innovative behavior and create nearly unwinnable battles for targeted firms.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Dirk Auer (Senior Fellow of Law & Economics, ICLE); Eric Fruits (Chief Economist, ICLE; Adjunct Professor of Economics, Portland State University); and Kristian Stout (Associate Director, ICLE).]

The COVID-19 pandemic is changing the way consumers shop and the way businesses sell. These shifts in behavior, designed to “flatten the curve” of infection through social distancing, are happening across many (if not all) markets. But in many cases, it’s impossible to know now whether these new habits are actually achieving the desired effect. 

Take a seemingly silly example from Oregon. The state is one of only two in the U.S. that prohibits self-serve gas. In response to COVID-19, the state fire marshal announced it would temporarily suspend its enforcement of the prohibition. Public opinion fell into two broad groups. Those who want the option to pump their own gas argue that self-serve reduces the interaction between station attendants and consumers, thereby potentially reducing the spread of coronavirus. On the other hand, those who support the prohibition on self-serve have blasted the fire marshal’s announcement, arguing that all those dirty fingers pressing keypads and all those grubby hands on fuel pumps will likely increase the spread of the virus.

Both groups may be right, but no one yet knows the net effect. We can only speculate. This picture becomes even more complex when considering other, alternative policies. For instance, would it be more effective for the state of Oregon to curtail gas station visits by forcing the closure of stations? Probably not. Would it be more effective to reduce visits through some form of rationing? Maybe. Maybe not. 

Policymakers will certainly struggle to efficiently decide how firms and consumers should minimize the spread of COVID-19. That struggle is an extension of Hayek’s knowledge problem: policymakers don’t have adequate knowledge of alternatives, preferences, and the associated risks. 

A Hayekian approach — relying on bottom-up rather than top-down solutions to the problem — may be the most appropriate response. Allowing firms to experiment and iteratively find solutions that work for their consumers and employees (potentially adjusting prices and wages in the process) may be the best that policymakers can do.

The case of online retail platforms

One area where these complex tradeoffs are particularly acute is that of online retail. In response to the pandemic, many firms have significantly boosted their online retail capacity. 

These initiatives have been met with a mix of enthusiasm and disapproval. On the one hand, online retail enables consumers to purchase “essential” goods with a significantly reduced risk of COVID-19 contamination. It also allows “non-essential” goods to be sold, despite the closure of the brick-and-mortar stores that would normally carry them. At first blush, this seems like a win-win situation for both consumers and retailers of all sizes, with large retailers ramping up their online operations and independent retailers switching to online platforms such as Amazon.

But there is a potential downside. Even contactless deliveries do present some danger, notably for warehouse workers who run the risk of being infected and subsequently passing the virus on to others. This risk is amplified by the fact that many major retailers, including Walmart, Kroger, CVS, and Albertsons, are hiring more warehouse and delivery workers to meet an increase in online orders. 

This has led some to question whether sales of “non-essential” goods (though the term is almost impossible to define) should be halted. The reasoning is that continuing to supply such goods needlessly puts lives at risk and reduces overall efforts to slow the virus.

Once again, these are incredibly complex questions. It is hard to gauge the overall risk of infection that is produced by the online retail industry’s warehousing and distribution infrastructure. In particular, it is not clear how effective social distancing policies, widely imposed within these workplaces, will be at achieving distancing and, in turn, reducing infections. 

More fundamentally, whatever this risk turns out to be, it is almost impossible to weigh it against an appropriate counterfactual. 

Online retail is not the only area where this complex tradeoff arises. An analogous reasoning could, for instance, also be applied to food delivery platforms. Ordering a meal on UberEats does carry some risk, but so do repeated trips to the grocery store. And there are legitimate concerns about the safety of food handlers working in close proximity to each other. These considerations make it hard for policymakers to strike the appropriate balance.

The good news: at least some COVID-related risks are being internalized

But there is also some good news. Firms, consumers and employees all have some incentive to mitigate these risks. 

Consumers want to purchase goods without getting contaminated; employees want to work in safe environments; and firms need to attract both consumers and employees, while minimizing potential liability. These (partially) aligned incentives will almost certainly cause these economic agents to take at least some steps that mitigate the spread of COVID-19. This might notably explain why many firms imposed social distancing measures well before governments started to take notice (here, here, and here). 

For example, one first-order effect of COVID-19 is that it has become more expensive for firms to hire warehouse workers. Not only have firms moved up along the supply curve (by hiring more workers), but the curve itself has likely shifted upwards reflecting the increased opportunity cost of warehouse work. Predictably, this has resulted in higher wages for workers. For example, Amazon and Walmart recently increased the wages they were paying warehouse workers, as have brick and mortar retailers, such as Kroger, who have implemented similar policies.
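As a rough illustration of that mechanic (with made-up linear curves chosen purely for intuition), the sketch below solves for the market-clearing warehouse wage three times: at the baseline, after labor demand shifts out with the surge in online orders, and after the labor supply curve also shifts up to reflect the higher opportunity cost of the work. Both forces push the equilibrium wage higher.

```python
# Toy linear labor-market model; every parameter below is an illustrative assumption.

def clearing_wage(qd_intercept, qd_slope, qs_intercept, qs_slope):
    """Wage at which labor demanded (qd_intercept - qd_slope * w)
    equals labor supplied (qs_intercept + qs_slope * w)."""
    return (qd_intercept - qs_intercept) / (qd_slope + qs_slope)

baseline     = clearing_wage(1000, 20, 200, 20)  # pre-pandemic equilibrium
demand_shift = clearing_wage(1400, 20, 200, 20)  # more online orders: labor demand shifts out
both_shifts  = clearing_wage(1400, 20, 100, 20)  # riskier work: labor supply also shifts up

print(f"baseline wage:                {baseline:.2f}")      # 20.00
print(f"after demand shift:           {demand_shift:.2f}")  # 30.00
print(f"after demand + supply shifts: {both_shifts:.2f}")   # 32.50
```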

Along similar lines, firms and employees will predictably bargain — through various channels — over the appropriate level of protection for those workers who must continue to work in-person.

For example, some companies have found ways to reduce risk while continuing operations:

  • CNBC reports Tyson Foods is using walk-through infrared body temperature scanners to check employees’ temperatures as they enter three of the company’s meat processing plants. Other companies planning to use scanners include Goldman Sachs, UPS, Ford, and Carnival Cruise Lines.
  • Kroger’s Fred Meyer chain of supermarkets is limiting the number of customers in each of its stores to half the occupancy allowed under international building codes. Kroger will use infrared sensors and predictive analytics to monitor the new capacity limits. The company already uses the technology to estimate how many checkout lanes are needed at any given time.
  • Trader Joe’s limits occupancy in its stores. Customers waiting to enter are asked to stand six feet apart, using Trader Joe’s logos marked off on the sidewalk as guides. Shopping carts are separated into groups of “sanitized” and “to be cleaned.” Each cart is thoroughly sprayed with disinfectant and wiped down with a clean cloth.

In other cases, bargaining over the right level of risk-mitigation has been pursued through more coercive channels, such as litigation and lobbying:

  • A recently filed lawsuit alleges that managers at an Illinois Walmart store failed to alert workers after several employees began showing symptoms of COVID-19. The suit claims Walmart “had a duty to exercise reasonable care in keeping the store in a safe and healthy environment and, in particular, to protect employees, customers and other individuals within the store from contracting COVID-19 when it knew or should have known that individuals at the store were at a very high risk of infection and exposure.” 
  • According to CNBC, a group of legislators, unions and Amazon employees in New York wrote a letter to CEO Jeff Bezos calling on him to enact greater protections for warehouse employees who continue to work during the coronavirus outbreak. The Financial Times reports worker protests at Amazon warehouses in the US, France, and Italy. Worker protests have been reported at a Barnes & Noble warehouse. Several McDonald’s locations have been hit with strikes.
  • In many cases, worker concerns about health and safety have been conflated with long-simmering issues of unionization, minimum wage, flexible scheduling, and paid time-off. For example, several McDonald’s strikes were reported to have been organized by “Fight for $15.”

Sometimes, there is simply no mutually-advantageous solution. And businesses are thus left with no other option than temporarily suspending their activities: 

  • For instance, McDonald’s and Burger King have spontaneously closed their restaurants — including drive-thru and deliveries — in many European countries (here and here).
  • In Portland, Oregon, ChefStable, a restaurant group behind some of the city’s best-known restaurants, closed all 20 of its bars and restaurants for at least four weeks. In what he called a “crisis of conscience,” owner Kurt Huffman concluded it would be impossible to maintain safe social distancing for customers and staff.

This is certainly not to say that all is perfect. Employers, employees and consumers may have very strong disagreements about what constitutes the appropriate level of risk mitigation.

Moreover, the questions of balancing worker health and safety with that of consumers become all the more complex when we recognize that consumers and businesses are operating in a dynamic environment, making sometimes fundamental changes to reduce risk at many levels of the supply chain.

Likewise, not all businesses will be able to implement measures that mitigate the risk of COVID-19. For instance, “Big Business” might be in a better position to reduce risks to its workforce than smaller businesses. 

Larger firms tend to have the resources and economies of scale to make capital investments in temperature scanners or sensors. They have larger workforces where employees can, say, shift from stocking shelves to sanitizing shopping carts. Several large employers, including Amazon, Kroger, and CVS have offered higher wages to employees who are more likely to be exposed to the coronavirus. Smaller firms are less likely to have the resources to offer such wage premiums.

For example, Amazon recently announced that it would implement mandatory temperature checks, that it would provide employees with protective equipment, and that it would increase the frequency and intensity of cleaning for all its sites. And, as already mentioned above, Tyson Foods announced that it would install temperature scanners at a number of sites. It is not clear whether smaller businesses are in a position to implement similar measures.

That’s not to say that small businesses can’t adjust. It’s just more difficult. For example, a small paint-your-own ceramics shop, Mimosa Studios, had to stop offering painting parties because of government mandated social distancing. One way it’s mitigating the loss of business is with a paint-at-home package. Customers place an order online, and the studio delivers the ceramic piece, paints, and loaner brushes. When the customer is finished painting, Mimosa picks up the piece, fires it, and delivers the finished product. The approach doesn’t solve the problem, but it helps mitigate the losses.

Conclusion

In all likelihood, we can’t actually avoid all bad outcomes. There is, of course, some risk associated with even well-resourced large businesses continuing to operate, even though some of them play a crucial role in coronavirus-related lockdowns. 

Currently, market actors are working within the broad outlines of lockdowns deemed necessary by policymakers. Given the intensely complicated risk calculation necessary to determine if any given individual truly needs an “essential” (or even a “nonessential”) good or service, the best thing that lawmakers can do for now is let properly motivated private actors continue to seek optimal outcomes together within the imposed constraints. 

So far, most individuals and the firms serving them are at least partially internalizing Covid-related risks. The right approach for lawmakers would be to watch this process and determine where it breaks down. Measures targeted to fix those breaches will almost inevitably outperform interventionist planning to determine exactly what is essential, what is nonessential, and who should be allowed to serve consumers in their time of need.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Ben Sperry (Associate Director, Legal Research, International Center for Law & Economics).]

The visceral reaction to the New York Times’ recent story on Matt Colvin, the man who had 17,700 bottles of hand sanitizer with nowhere to sell them, shows there is a fundamental misunderstanding of the importance of prices and the informational function they serve in the economy. Calls to enforce laws against “price gouging” may actually prove more harmful to consumers and society than allowing prices to rise (or fall, of course) in response to market conditions. 

Nobel Prize-winning economist Friedrich Hayek explained how price signals serve as information that allows for coordination in a market society:

We must look at the price system as such a mechanism for communicating information if we want to understand its real function… The most significant fact about this system is the economy of knowledge with which it operates, or how little the individual participants need to know in order to be able to take the right action. In abbreviated form, by a kind of symbol, only the most essential information is passed on and passed on only to those concerned. It is more than a metaphor to describe the price system as a kind of machinery for registering change, or a system of telecommunications which enables individual producers to watch merely the movement of a few pointers, as an engineer might watch the hands of a few dials, in order to adjust their activities to changes of which they may never know more than is reflected in the price movement.

Economic actors don’t need a PhD in economics or even to pay attention to the news about the coronavirus to change their behavior. Higher prices for goods or services alone give important information to individuals — whether consumers, producers, distributors, or entrepreneurs — to conserve scarce resources, produce more, and look for (or invest in creating!) alternatives.

Prices are fundamental to rationing scarce resources, especially during an emergency. Allowing prices to rapidly rise has three salutary effects (as explained by Professor Michael Munger in his terrific Twitter thread):

  1. Consumers ration how much they really need;
  2. Producers respond to the rising prices by ramping up supply and distributors make more available; and
  3. Entrepreneurs find new substitutes in order to innovate around bottlenecks in the supply chain. 

Despite the distaste with which the public often treats “price gouging,” officials should take care to ensure that they don’t prevent these three necessary responses from occurring. 

Rationing by consumers

During a crisis, if prices for goods that are in high demand but short supply are forced to stay at pre-crisis levels, the informational signal of a shortage isn’t given — at least by the market directly. This encourages consumers to buy more than is rationally justified under the circumstances. This stockpiling leads to shortages. 

Companies respond by rationing in various ways, like instituting shorter hours or placing limits on how much of certain high-demand goods can be bought by any one consumer. Lines (and unavailability), instead of price, become the primary cost borne by consumers trying to obtain the scarce but underpriced goods. 

If, instead, prices rise in light of the short supply and high demand, price-elastic consumers will buy less, freeing up supply for others. And, critically, price-inelastic consumers (i.e. those who most need the good) will be provided a better shot at purchase.
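A stylized sketch of that rationing logic follows (the two consumer groups, their demand curves, and the fixed stock are all hypothetical assumptions, not estimates): when the price is held at its pre-crisis level, desired purchases exceed the available supply and the shelves empty; at the market-clearing price, price-elastic buyers cut back sharply while price-inelastic buyers, those who most need the good, still obtain most of what they sought.

```python
# Stylized rationing example; parameters are hypothetical, not calibrated to any market.

def demand(price, scale, elasticity):
    """Constant-elasticity demand: quantity = scale * price ** (-elasticity)."""
    return scale * price ** (-elasticity)

SUPPLY = 100.0            # fixed stock available during the emergency
PRE_CRISIS_PRICE = 1.0

groups = {
    "price-elastic (low need)":    dict(scale=120.0, elasticity=2.0),
    "price-inelastic (high need)": dict(scale=60.0, elasticity=0.3),
}

def total_demand(price):
    return sum(demand(price, **g) for g in groups.values())

# At the capped (pre-crisis) price, desired purchases exceed the stock: a shortage.
print(f"demand at capped price: {total_demand(PRE_CRISIS_PRICE):.0f} vs supply {SUPPLY:.0f}")

# Find the market-clearing price by bisection.
lo, hi = PRE_CRISIS_PRICE, 50.0
for _ in range(100):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if total_demand(mid) > SUPPLY else (lo, mid)
clearing_price = (lo + hi) / 2

print(f"clearing price: {clearing_price:.2f}")
for name, g in groups.items():
    print(f"  {name}: buys {demand(clearing_price, **g):.0f} of {g['scale']:.0f} units initially sought")
```

On these assumptions, the high-need group ends up with most of its initial demand satisfied, while the low-need group absorbs most of the cutback, which is the allocation the capped price cannot achieve.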

According to the New York Times story on Mr. Colvin, he focused on buying out the hand sanitizer in rural areas of Tennessee and Kentucky, since the major metro areas were already cleaned out. His goal was to then sell these hand sanitizers (and other high-demand goods) online at market prices. He was essentially acting as a speculator and bringing information to the market (much like an insider trader). If successful, he would be coordinating supply and demand between geographical areas by successfully arbitraging. This often occurs when emergencies are localized, like post-Katrina New Orleans or post-Irma Florida. In those cases, higher prices induced suppliers to shift goods and services from around the country to the affected areas. Similarly, here Mr. Colvin was arguably providing a beneficial service, by shifting the supply of high-demand goods from low-demand rural areas to consumers facing localized shortages. 

For those who object to Mr. Colvin’s bulk purchasing-for-resale scheme, the answer is similar to those who object to ticket resellers: the retailer should raise the price. If the Walmarts, Targets, and Dollar Trees raised prices or rationed supply like the supermarket in Denmark, Mr. Colvin would not have been able to afford nearly as much hand sanitizer. (Of course, it’s also possible — had those outlets raised prices — that Mr. Colvin would not have been able to profitably re-route the excess local supply to those in other parts of the country most in need.)

The role of “price gouging” laws and social norms

A common retort, of course, is that Colvin was able to profit from the pandemic precisely because he was able to purchase a large amount of stock at normal retail prices, even after the pandemic began. Thus, he was not a producer who happened to have a restricted amount of supply in the face of new demand, but a mere reseller who exacerbated the supply shortage problems.

But such an observation truncates the analysis and misses the crucial role that social norms against “price gouging” and state “price gouging” laws play in facilitating shortages during a crisis.

Under these laws, typically retailers may raise prices by at most 10% during a declared state of emergency. But even without such laws, brick-and-mortar businesses are tied to a location in which they are repeat players, and they may not want to take a reputational hit by raising prices during an emergency and violating the “price gouging” norm. By contrast, individual sellers, especially pseudonymous third-party sellers using online platforms, do not rely on repeat interactions to the same degree, and may be harder to track down for prosecution. 

Thus, the social norms and laws exacerbate the conditions that create the need for emergency pricing, and lead to outsized arbitrage opportunities for those willing to violate norms and the law. But, critically, this violation is only a symptom of the larger problem that social norms and laws stand in the way, in the first instance, of retailers using emergency pricing to ration scarce supplies.

Normally, third-party sales sites have much more dynamic pricing than brick and mortar outlets, which just tend to run out of underpriced goods for a period of time rather than raise prices. This explains why Mr. Colvin was able to sell hand sanitizer for prices much higher than retail on Amazon before the site suspended his ability to do so. On the other hand, in response to public criticism, Amazon, Walmart, eBay, and other platforms continue to crack down on third-party “price-gouging” on their sites.

But even PR-centric anti-gouging campaigns are not ultimately immune to the laws of supply and demand. Even Amazon.com, as a first party seller, ends up needing to raise prices, ostensibly as the pricing feedback mechanisms respond to cost increases up and down the supply chain. 

But without a willingness to allow retailers and producers to use the informational signal of higher prices, there will continue to be more extreme shortages as consumers rush to stockpile underpriced resources.

The desire to help the poor who cannot afford higher priced essentials is what drives the policy responses, but in reality no one benefits from shortages. Those who stockpile the in-demand goods are unlikely to be poor because doing so entails a significant upfront cost. And if they are poor, then the potential for resale at a higher price would be a benefit.

Increased production and distribution

During a crisis, it is imperative that spiking demand is met by increased production. Prices are feedback mechanisms that provide realistic estimates of demand to producers. Even if good-hearted producers forswearing the profit motive want to increase production as an act of charity, they still need to understand consumer demand in order to produce the correct amount. 

Of course, prices are not the only source of information. Producers reading the news that there is a shortage undoubtedly can ramp up their production. But even still, in order to optimize production (i.e., not just blindly increase output and hope they get it right), they need a feedback mechanism. Prices are the most efficient mechanism available for quickly translating the amount of social need (demand) for a given product, guaranteeing that producers neither undersupply the product (leaving people who need the good without it) nor oversupply it (consuming more resources than necessary in a time of crisis). Prices, when allowed to adjust to actual demand, thus allow society to avoid exacerbating shortages and misallocating resources.

The opportunity to earn more profit incentivizes distributors all along the supply chain. Amazon is hiring 100,000 workers to help ship all the products that are being ordered right now. Grocers and retailers are doing their best to line the shelves with more in-demand food and supplies.

Distributors rely on more than just price signals alone, obviously, such as information about how quickly goods are selling out. But even as retail prices stay low for consumers for many goods, distributors often are paying more to producers in order to keep the shelves full, as in the case of eggs. These are the relevant price signals for producers to increase production to meet demand.

For instance, hand sanitizer companies like GOJO and EO Products are ramping up production in response to known demand (so much that the price of isopropyl alcohol is jumping sharply). Farmers are trying to produce as much as is necessary to meet the increased orders (and prices) they are receiving. Even previously low-demand goods like beans are facing a boom time. These instances are likely caused by a mix of anticipatory response based on general news, as well as the slightly laggier price signals flowing through the supply chain. But, even with an “early warning” from the media, the manufacturers still need to ultimately shape their behavior with more precise information. This comes in the form of orders from retailers at increased frequencies and prices, which are both rising because of insufficient supply. In search of the most important price signal, profits, manufacturers and farmers are increasing production.

These responses to higher prices have the salutary effect of making available more of the products consumers need the most during a crisis. 

Entrepreneurs innovate around bottlenecks 

But the most interesting thing that occurs when prices rise is that entrepreneurs create new substitutes for in-demand products. For instance, distillers have started creating their own hand sanitizers.

Unfortunately, however, government regulations on sales of distilled products and concerns about licensing have led distillers to give away those products rather than charge for them. Thus, beneficial as this may be, without the ability to efficiently price such products, not nearly as much will be produced as would otherwise be. The non-emergency price of zero effectively guarantees continued shortages because the demand for these free alternatives will far outstrip supply.

Another example is that car companies in the US are now producing ventilators. The FDA waived regulations on the production of new ventilators after General Motors, Ford, and Tesla announced they would be willing to use idle production capacity to build them.

As consumers demand more toilet paper, bottled water, and staple foods than can be produced quickly, entrepreneurs respond by refocusing current capabilities on these goods. Examples abound.

Without price signals, entrepreneurs would have far less incentive to shift production and distribution to the highest valued use. 

Conclusion

While stories like that of Mr. Colvin buying all of the hand sanitizer in Tennessee understandably bother people, government efforts to prevent prices from adjusting only impede the information sharing processes inherent in markets. 

If the concern is to help the poor, it would be better to pursue less distortionary public policy than arbitrary price caps. The US government, for instance, is currently considering a progressively tiered one-time payment to lower-income individuals.

Moves to create new “price-gouging” laws, and to enforce existing ones, are likely to become more prevalent the longer shortages persist. Platforms will likely continue to face pressure to remove “price-gougers” as well. These policies should be resisted. Not only will they fail to prevent shortages, they will exacerbate them and push the sale of high-demand goods into grey markets where prices will likely be even higher.

Prices are an important source of information not only for consumers, but for producers, distributors, and entrepreneurs. Short-circuiting this signal will only be to the detriment of society.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Corbin Barthold (Senior Litigation Counsel, Washington Legal Foundation).]

The pandemic is serious. COVID-19 will overwhelm our hospitals. It might break our entire healthcare system. To keep the number of deaths in the low hundreds of thousands, a study from Imperial College London finds, we will have to shutter much of our economy for months. Small wonder the markets have lost a third of their value in a relentless three-week plunge. Grievous and cruel will be the struggle to come.

“All men of sense will agree,” Hamilton wrote in Federalist No. 70, “in the necessity of an energetic Executive.” In an emergency, certainly, that is largely true. In the midst of this crisis even a staunch libertarian can applaud the government’s efforts to maintain liquidity, and can understand its urge to start dispersing helicopter money. By at least acting like it knows what it’s doing, the state can lessen many citizens’ sense of panic. Some of the emergency measures might even work.

Of course, many of them won’t. Even a trillion-dollar stimulus package might be too small, and too slowly dispersed, to do much good. What’s worse, that pernicious line, “Don’t let a crisis go to waste,” is in the air. Much as price gougers are trying to arbitrage Purell, political gougers, such as Senator Elizabeth Warren, are trying to cram woke diktats into disaster-relief bills. Even now, especially now, it is well to remember that government is not very good at what it does.

But dreams of dirigisme die hard, especially at the New York Times. “During the Great Depression,” Farhad Manjoo writes, “Franklin D. Roosevelt assembled a mighty apparatus to rebuild a broken economy.” Government was great at what it does, in Manjoo’s view, until neoliberalism arrived in the 1980s and ruined everything. “The incompetence we see now is by design. Over the last 40 years, America has been deliberately stripped of governmental expertise.” Manjoo implores us to restore the expansive state of yesteryear—“the sort of government that promised unprecedented achievement, and delivered.”

This is nonsense. Our government is not incompetent because Grover Norquist tried (and mostly failed) to strangle it. Our government is incompetent because, generally speaking, government is incompetent. The keystone of the New Deal, the National Industrial Recovery Act of 1933, was an incoherent mess. Its stated goals were at once to “reduce and relieve unemployment,” “improve standards of labor,” “avoid undue restriction of production,” “induce and maintain united action of labor and management,” “organiz[e] . . . co-operative action among trade groups,” and “otherwise rehabilitate industry.” The law empowered trade groups to create their own “codes of unfair competition,” a privilege they quite predictably used to form anticompetitive cartels.

At no point in American history has the state, with all its “governmental expertise,” been adept at spending money, stimulus or otherwise. A law supplying funds for the Transcontinental Railroad offered to pay builders more for track laid in the mountains, but failed to specify where those mountains begin. Leland Stanford commissioned a study finding that, lo and behold, the Sierra Nevada begins deep in the Sacramento Valley. When “the federal Interior Department initially challenged [his] innovative geology,” reports the historian H.W. Brands, Stanford sent an agent directly to President Lincoln, a politician who “didn’t know much geology” but “preferred to keep his allies happy.” “My pertinacity and Abraham’s faith moved mountains,” the triumphant lobbyist quipped after the meeting.

The supposed golden age of expert government, the time between the rise of FDR and the fall of LBJ, was no better. At the height of the Apollo program, it occurred to a physics professor at Princeton that if there were a small glass reflector on the Moon, scientists could use lasers to calculate the distance between it and Earth with great accuracy. The professor built the reflector for $5,000 and approached the government. NASA loved the idea, but insisted on building the reflector itself. This it proceeded to do, through its standard contracting process, for $3 million.

When the pandemic at last subsides, the government will still be incapable of setting prices, predicting industry trends, or adjusting to changed circumstances. What F.A. Hayek called the knowledge problem—the fact that useful information is dispersed throughout society—will be as entrenched and insurmountable as ever. Innovation will still have to come, if it is to come at all, overwhelmingly from extensive, vigorous, undirected trial and error in the private sector.

When New York Times columnists are not pining for the great government of the past, they are surmising that widespread trauma will bring about the great government of the future. “The outbreak,” Jamelle Bouie proposes in an article entitled “The Era of Small Government is Over,” has “made our mutual interdependence clear. This, in turn, has made it a powerful, real-life argument for the broadest forms of social insurance.” The pandemic is “an opportunity,” Bouie declares, to “embrace direct state action as a powerful tool.”

It’s a bit rich for someone to write about the coming sense of “mutual interdependence” in the pages of a publication so devoted to sowing grievance and discord. The New York Times is a totem of our divisions. When one of its progressive columnists uses the word “unity,” what he means is “submission to my goals.”

In any event, disunity in America is not a new, or even necessarily a bad, thing. We are a fractious, almost ungovernable people. The colonists rebelled against the British government because they didn’t want to pay it back for defending them from the French during the Seven Years’ War. When Hamilton, champion of the “energetic Executive,” pushed through a duty on liquor, the frontier settlers of western Pennsylvania tarred and feathered the tax collectors. In the Astor Place Riot of 1849, dozens of New Yorkers died in a brawl over which of two men was the better Shakespearean actor. Americans are not housetrained.

True enough, if the virus takes us to the kind of depths not seen in these parts since the Great Depression, all bets are off. Short of that, however, no one should lightly assume that Americans will long tolerate a statist revolution imposed on their fears. And thank goodness for that. Our unruliness, our unwillingness to do what we’re told, is part of what makes our society so dynamic and prosperous.

COVID-19 will shake the world. When it has gone, a new scene will open. We can say very little now about what is going to change. But we can hope that Americans will remain a creative, opinionated, fiercely independent lot. And we can be confident that, come what may, planned administration will remain a source of problems, while unplanned free enterprise will remain the surest source of solutions.


[TOTM: The following is the first in a series of posts by TOTM guests and authors on the politicization of antitrust. The entire series of posts is available here.]

This post is authored by Steven J. Cernak, Partner at Bona Law and Adjunct Professor, University of Michigan Law School and Western Michigan University Thomas M. Cooley Law School. This paper represents the current views of the author alone and not necessarily the views of any past, present or future employer or client.

When some antitrust practitioners hear “the politicization of antitrust,” they cringe while imagining, say, merger approval hanging on the size of the bribe or closeness of the connection with the right politician.  Even a more benign interpretation of the phrase “politicization of antitrust” might drive some antitrust technocrats up the wall:  “Why must the mainstream media and, heaven forbid, politicians start weighing in on what antitrust interpretations, policy and law should be?  Don’t they know that we have it all figured out and, if we decide it needs any tweaks, we’ll make those over drinks at the ABA Antitrust Section Spring Meeting?”

While I agree with the reaction to the cringe-worthy interpretation of “politicization,” I think members of the antitrust community should be neither surprised by nor hostile to the second interpretation, that is, all the new attention from new people.  Such attention is not unusual historically; more importantly, it provides an opportunity to explain the benefits and limits of antitrust enforcement and the competitive process it is meant to protect.

The Sherman Act itself, along with its state-level predecessors, was the product of a political reaction to perceived problems of the late 19th Century – hence all of today’s references to a “new gilded age” as echoes of the political arguments of 1890.  Since then, the Sherman Act has not been immutable.  The U.S. antitrust laws have changed – and new antitrust enforcers have even been added – when political debates convinced enough people that change was necessary.  Today’s political discussion may surprise many members of the antitrust community simply because they were not yet alive when the last major change was debated and passed.

More generally, the U.S. political position on other government regulation of – or intervention or participation in – free markets has varied considerably over the years.  While controversial when they were passed, we now take Medicare and Medicaid for granted and debate “Medicare for all” – why shouldn’t an overhaul of the Sherman Act also be a legitimate political discussion?  The Interstate Commerce Commission might be gone and forgotten but at one time it garnered political support to regulate the most powerful industries of the late 19th and early 20th Century – why should a debate on new ways to regulate today’s powerful industries be out of the question? 

So today’s antitrust practitioners should avoid the temptation to proclaim an “end of history” in which all antitrust policy questions have been asked and answered, and should instead, as some of us have been suggesting since at least the last election cycle, join the political debate.  But now, for those of us who are generally supportive of the U.S. antitrust status quo, the question is how?

Some have been pushing back on the supposed evidence that a change in antitrust or other governmental policies is necessary.  For instance, in late 2015 the White House Council of Economic Advisers published a paper on increased concentration in many industries which others have used as evidence of a failure of antitrust law to protect competition.  Josh Wright has used several platforms to point out that the industry measurement was too broad and the concentration level too low to be useful in these discussions.  Also, he reminded readers that concentration and levels of competition are different concepts that are not necessarily linked.  On questions surrounding inequality and stagnation of standards of living, Russ Roberts has produced a series of videos that try to explain why any such questions are difficult to answer with the easy numbers available and why, perhaps, it is not correct that “the rich got all the gains.” 
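To illustrate Wright’s measurement point with invented numbers: concentration indices such as the Herfindahl-Hirschman Index (HHI) are only meaningful relative to a properly defined market, so the same sales look far less concentrated when they are folded into an over-broad industry category. The sketch below is hypothetical and is not drawn from the CEA paper.

```python
# Hypothetical sketch: how market definition drives measured concentration (HHI).
# HHI = sum of squared market shares (shares in percent); 10,000 means pure monopoly.

def hhi(shares_pct):
    return sum(s ** 2 for s in shares_pct)

# Invented numbers: a narrow relevant market with four firms...
narrow_market = [45, 30, 15, 10]
# ...versus the same sales folded into a broad "industry" with many other firms.
broad_industry = [12, 8, 4, 3] + [73 / 20] * 20   # remaining 73% spread over 20 firms

print("HHI, narrow relevant market:", round(hhi(narrow_market)))   # ~3,250 (highly concentrated)
print("HHI, broad industry bucket: ", round(hhi(broad_industry)))  # ~500 (looks unconcentrated)
```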

Others, like Dan Crane, have advanced the debate by trying to get those commentators who are unhappy with the status quo to explain what they see as the problems and the proposed fixes.  While it might be too much to ask for unanimity among a diverse group of commentators, the debate might be more productive now that some more specific complaints and solutions have begun to emerge.

Even if the problems are properly identified, we should not allow anyone to blithely assume that any – or any particular – increase in government oversight will solve them without creating different issues.  The Federal Trade Commission tackled this issue in its final hearing on Competition and Consumer Protection in the 21st Century with a panel on Frank Easterbrook’s seminal “Limits of Antitrust” paper.  I was fortunate enough to be on that panel and tried to summarize the ongoing importance of “Limits,” and advance the broader debate, by encouraging those who would change antitrust policy and increase supervision of the market to have appropriate “regulatory humility” (a term borrowed from former FTC Chairman Maureen Ohlhausen) about what can be accomplished.

I identified three varieties of humility present in “Limits” and pertinent here.  First, there is the humility to recognize that mastering anything as complex as an economy or any significant industry will require knowledge of innumerable items, some unseen or poorly understood, and so could be impossible.  Here, Easterbrook echoes Friedrich Hayek’s “Pretense of Knowledge” Nobel acceptance speech. 

Second, there is the humility to recognize that any judge or enforcer, like any other human being, is subject to her own biases and predilections, whether based on experience or the institutional framework within which she works.  While market participants might not be perfect, great thinkers from Madison to Kovacic have recognized that “men (or any agency leaders) are not angels” either.  As Thibault Schrepel has explained, it would be “romantic” to assume that any newly-empowered government enforcer will always act in the best interest of her constituents. 

Finally, there is the humility to recognize that humanity has been around a long time, has faced a number of issues before, and that we might learn something from how our predecessors reacted to what appear to be similar issues in history.  Given my personal history and current interests, I have focused on events from the automotive industry; however, the story of the unassailable power (until it wasn’t) of A&P and how it spawned the Robinson-Patman Act, ably told by Tim Muris and Jonathan Nuechterlein, might be more pertinent here.  So challenging those advocating for big changes to explain why they are so confident this time around can be useful.

But while all those avenues of argument can be effective in explaining why greater government intervention in the form of new antitrust policies might be worse than the status quo, we also must do a better job at explaining why antitrust and the market forces it protects are actually good for society.  If democratic capitalism really has “lengthened the life span, made the elimination of poverty and famine thinkable, enlarged the range of human choice” as claimed by Michael Novak in The Spirit of Democratic Capitalism, we should do more to spread that good news. 

Maybe we need to spend more time telling and retelling the “I, Pencil” or “It’s a Wonderful Loaf” stories about how well markets can and do work at coordinating the self-interested behavior of many to the benefit of even more.  Then we can illustrate the limited role of antitrust in that complex effort – say, punishing any collusion among the mills or bakers in those two stories to ensure the process works as beautifully and simply as those stories display.  For the first time in decades, politicians and real people, like the consumers whose welfare we are supposed to be protecting, are paying attention to our wonderful world of antitrust.  We should seize the opportunity to explain what we do and why it matters, and to discuss whether any improvements can be made.

A recently published book, “Kochland – The Secret History of Koch Industries and Corporate Power in America” by Christopher Leonard, presents a gripping account of relentless innovation and the power of the entrepreneur to overcome adversity in pursuit of delivering superior goods and services to the market while also reaping impressive profits. It’s truly an inspirational American story.

Now, I should note that I don’t believe Mr. Leonard actually intended his book to be quite so complimentary to the Koch brothers and the vast commercial empire they built up over the past several decades. He includes plenty of material detailing, for example, their employees playing fast and loose with environmental protection rules, or their labor lawyers aggressively bargaining with unions, sometimes to the detriment of workers. And he supports all of the stories he presents with sympathetic emotional appeals drawn from personal anecdotes.

But, even then, many of the negative claims are part of a larger theme of Koch Industries progressively improving its business practices. One prominent example is how Koch Industries learned from its environmentally unfriendly past and implemented vigorous programs to ensure “10,000% compliance” with all federal and state environmental laws. 

What really stands out across most or all of the stories Leonard has to tell, however, is the deep appreciation that Charles Koch and his entrepreneurially-minded employees have for the fundamental nature of the market as an information discovery process. Indeed, Koch Industries has much in common with modern technology firms like Amazon in this respect — but decades before the information technology revolution made the full power of “Big Data” gathering and processing as obvious as it is today.

The impressive information operation of Koch Industries

Much of Kochland is devoted to stories in which Koch Industries’ ability to gather and analyze data from across its various units led to the production of superior results for the economy and consumers. For example,  

Koch… discovered that the National Parks Service published data showing the snow pack in the California mountains, data that Koch could analyze to determine how much water would be flowing in future months to generate power at California’s hydroelectric plants. This helped Koch predict with great accuracy the future supply of electricity and the resulting demand for natural gas.

Koch Industries was able to use this information to anticipate the amount of power (megawatt hours) it needed to deliver to the California power grid (admittedly, in a way that was somewhat controversial because of poorly drafted legislation relating to the new regulatory regime governing power distribution and resale in the state).
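Kochland does not spell out the model Koch’s analysts actually used, but the flavor of the inference is easy to sketch: fit a simple relationship between published snowpack readings and historical hydro output, project this year’s hydro supply, and back out how much of expected load would have to be met by gas-fired generation. All of the numbers below are invented for illustration.

```python
# Hypothetical sketch of the snowpack -> hydro -> gas-demand inference described above.
# Data and coefficients are invented; only the shape of the analysis is the point.

# Historical observations: (snowpack index, seasonal hydro generation in GWh)
history = [(60, 9_000), (75, 11_500), (90, 13_800), (105, 16_200), (120, 18_500)]

# Ordinary least squares by hand (y = a + b * x)
n = len(history)
mean_x = sum(x for x, _ in history) / n
mean_y = sum(y for _, y in history) / n
b = sum((x - mean_x) * (y - mean_y) for x, y in history) / sum((x - mean_x) ** 2 for x, _ in history)
a = mean_y - b * mean_x

snowpack_this_year = 70                  # the published snowpack reading for the coming season
hydro_forecast = a + b * snowpack_this_year

expected_load = 40_000                   # total generation needed (GWh) -- invented
other_baseload = 22_000                  # non-hydro, non-gas generation (GWh) -- invented
gas_fired_gap = expected_load - hydro_forecast - other_baseload

print(f"forecast hydro output: {hydro_forecast:,.0f} GWh")
print(f"implied gas-fired generation (a proxy for natural gas demand): {gas_fired_gap:,.0f} GWh")
```

A thin snowpack year implies less hydro, a bigger gas-fired gap, and therefore stronger demand for natural gas.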

And, in 2000, while many firms in the economy were still riding the natural gas boom of the 90s, 

two Koch analysts and a reservoir engineer… accurately predicted a coming disaster that would contribute to blackouts along the West Coast, the bankruptcy of major utilities, and skyrocketing costs for many consumers.

This insight enabled Koch Industries to reap huge profits in derivatives trading, and it also enabled it to enter — and essentially rescue — a market segment crucial for domestic farmers: nitrogen fertilizer.

The market volatility in natural gas from the late 90s through early 00s wreaked havoc on the nitrogen fertilizer industry, for which natural gas is the primary input. Farmland — a struggling fertilizer producer — had progressively mismanaged its business over the preceding two decades by focusing on developing lines of business outside of its core competencies, including blithely exposing itself to the volatile natural gas market in pursuit of short-term profits. By the time it was staring bankruptcy in the face, there were no other companies interested in acquiring it. 

Koch’s analysts, however, noticed that many of Farmland’s key fertilizer plants were located in prime locations for reaching local farmers. Once the market improved, whoever controlled those key locations would be in a superior position for selling into the nitrogen fertilizer market. So, by utilizing the data it derived from its natural gas operations (both operating pipelines and storage facilities, as well as understanding the volatility of gas prices and availability through its derivatives trading operations), Koch Industries was able to infer that it could make substantial profits by rescuing this bankrupt nitrogen fertilizer business. 

Emblematic of Koch’s philosophy of only making long-term investments, 

[o]ver the next ten years, [Koch Industries] spent roughly $500 million to outfit the plants with new technology while streamlining production… Koch installed a team of fertilizer traders in the office… [t]he traders bought and sold supplies around the globe, learning more about fertilizer markets each day. Within a few years, Koch Fertilizer built a global distribution network. Koch founded a new company, called Koch Energy Services, which bought and sold natural gas supplies to keep the fertilizer plants stocked.

Thus, Koch Industries not only rescued Midwest farmers from shortages that would have decimated their businesses, it also invested heavily to ensure that production would continue to increase to meet future demand.

As noted, this acquisition was consistent with the ethos of Koch Industries, which stressed thinking about investments as part of long-term strategies, in contrast to their “counterparties in the market [who] were obsessed with the near-term horizon.” This led Koch Industries to look at investments over a period measured in years or decades, an approach that allowed the company to execute very intricate investment strategies: 

If Koch thought there was going to be an oversupply of oil in the Gulf Coast region, for example, it might snap up leases on giant oil barges, knowing that when the oversupply hit, companies would be scrambling for extra storage space and willing to pay a premium for the leases that Koch bought on the cheap. This was a much safer way to execute the trade than simply shorting the price of oil—even if Koch was wrong about the supply glut, the downside was limited because Koch could still sell or use the barge leases and almost certainly break even.

Entrepreneurs, regulators, and the problem of incentives

These accounts, and more in Kochland, brilliantly demonstrate a principal salutary role of entrepreneurs in the market: to discover slack or scarce resources in the system and manage them so that they are available when demand increases. Guaranteeing the presence of oil barges in the face of market turbulence, or making sure that nitrogen fertilizer is available when needed, is precisely the sort of result sound public policy seeks to encourage from firms in the economy.

Government, by contrast — and despite its best intentions — is institutionally incapable of performing the same sorts of entrepreneurial activities as even very large private organizations like Koch Industries. The stories recounted in Kochland demonstrate this repeatedly. 

For example, in the oil tanker episode, Koch’s analysts relied on “huge amounts of data from outside sources” – including “publicly available data…like the federal reports that tracked the volume of crude oil being stored in the United States.” Yet, because that data was “often stale” owing to a rigid, periodic publication schedule, it lacked the specificity necessary for making precise interventions in markets. 

Koch’s analysts therefore built on that data using additional public sources, such as manifests from the Customs Service which kept track of the oil tanker traffic in US waters. Leveraging all of this publicly available data, Koch analysts were able to develop “a picture of oil shipments and flows that was granular in its specificity.”

Similarly, when trying to predict snowfall in the western US, and how that would affect hydroelectric power production, Koch’s analysts relied on publicly available weather data — but extended it with their own analytical insights to make it more suitable to fine-grained predictions. 

By contrast, despite decades of altering the regulatory scheme around natural gas production, transport and sales, and being highly involved in regulating all aspects of the process, the federal government could not even provide the data necessary to adequately facilitate markets. Koch’s energy analysts would therefore engage in various deals that sometimes would only break even — if it meant they could develop a better overall picture of the relevant markets: 

As was often the case at Koch, the company… was more interested in the real-time window that origination deals could provide into the natural gas markets. Just as in the early days of the crude oil markets, information about prices was both scarce and incredibly valuable. There were not yet electronic exchanges that showed a visible price of natural gas, and government data on sales were irregular and relatively slow to come. Every origination deal provided fresh and precise information about prices, supply, and demand.

In most, if not all, of the deals detailed in Kochland, government regulators had every opportunity to find the same trends in the publicly available data — or see the same deficiencies in the data and correct them. Given their access to the same data, government regulators could, in some imagined world, have developed policies to mitigate the effects of natural gas market collapses, handle upcoming power shortages, or develop a reliable supply of fertilizer for Midwest farmers. But they did not. Indeed, because of the different sets of incentives they face (among other factors), in the real world, they cannot do so, despite their best intentions.

The incentive to innovate

This gets to the core problem that Hayek described, namely how best to use dispersed knowledge so as to achieve the most efficient allocation and distribution of resources:

The various ways in which the knowledge on which people base their plans is communicated to them is the crucial problem for any theory explaining the economic process, and the problem of what is the best way of utilizing knowledge initially dispersed among all the people is at least one of the main problems of economic policy—or of designing an efficient economic system.

The question of how best to utilize dispersed knowledge in society can only be answered by considering who is best positioned to gather and deploy that knowledge. There is no fundamental objection to “planning”  per se, as Hayek notes. Indeed, in a complex society filled with transaction costs, there will need to be entities capable of internalizing those costs  — corporations or governments — in order to make use of the latent information in the system. The question is about what set of institutions, and what set of incentives governing those institutions, results in the best use of that latent information (and the optimal allocation and distribution of resources that follows from that). 

Armen Alchian captured the different incentive structures between private firms and government agencies well: 

The extent to which various costs and effects are discerned, measured and heeded depends on the institutional system of incentive-punishment for the deciders. One system of rewards-punishment may increase the extent to which some objectives are heeded, whereas another may make other goals more influential. Thus procedures for making or controlling decisions in one rewards-incentive system are not necessarily the “best” for some other system…

In the competitive, private, open-market economy, the wealth-survival prospects are not as strong for firms (or their employees) who do not heed the market’s test of cost effectiveness as for firms who do… as a result the market’s criterion is more likely to be heeded and anticipated by business people. They have personal wealth incentives to make more thorough cost-effectiveness calculations about the products they could produce …

In the government sector, two things are less effective. (1) The full cost and value consequences of decisions do not have as direct and severe a feedback impact on government employees as on people in the private sector. The costs of actions under their consideration are incomplete simply because the consequences of ignoring parts of the full span of costs are less likely to be imposed on them… (2) The effectiveness, in the sense of benefits, of their decisions has a different reward-incentive or feedback system … it is fallacious to assume that government officials are superhumans, who act solely with the national interest in mind and are never influenced by the consequences to their own personal position.

In short, incentives matter — and are a function of the institutional arrangement of the system. Given the same set of data about a scarce set of resources, over the long run, the private sector generally has stronger incentives to manage resources efficiently than does government. As Ludwig von Mises showed, moving those decisions into political hands creates a system of political preferences that is inherently inferior in terms of the production and distribution of goods and services.

Koch Industries: A model of entrepreneurial success

The market is not perfect, but no human institution is perfect. Despite its imperfections, the market provides the best system yet devised for fairly and efficiently managing the practically unlimited demands we place on our scarce resources. 

Kochland provides a valuable insight into the virtues of the market and entrepreneurs, made all the stronger by Mr. Leonard’s implied project of “exposing” the dark underbelly of Koch Industries. The book tells the bad tales, which I’m willing to believe are largely true. I would, frankly, be shocked if any large entity — corporation or government — never ran into problems with rogue employees, internal corporate dynamics gone awry, or a failure to properly understand some facet of the market or society that led to bad investments or policy. 

The story of Koch Industries — presented even as it is through the lens of a “secret history”  — is deeply admirable. It’s the story of a firm that not only learns from its own mistakes, as all firms must do if they are to survive, but of a firm that has a drive to learn in its DNA. Koch Industries relentlessly gathers information from the market, sometimes even to the exclusion of short-term profit. It eschews complex bureaucratic structures and processes, which encourages local managers to find opportunities and nimbly respond.

Kochland is a quick read that presents a gripping account of one of America’s corporate success stories. There is, of course, a healthy amount of material in the book covering the Koch brothers’ often controversial political activities. Nonetheless, even those who hate the Koch brothers on account of politics would do well to learn from the model of entrepreneurial success that Kochland cannot help but describe in its pages. 

[TOTM: The following is the eighth in a series of posts by TOTM guests and authors on the FTC v. Qualcomm case recently decided by Judge Lucy Koh in the Northern District of California. Other posts in this series are here. The blog post is based on a forthcoming paper regarding patent holdup, co-authored by Dirk Auer and Julian Morris.]

Samsung SGH-F480V – controller board – Qualcomm MSM6280

In his latest book, Tyler Cowen calls big business an “American anti-hero”. Cowen argues that the growing animosity towards successful technology firms is to a large extent unwarranted. After all, these companies have generated tremendous prosperity and jobs.

Though it is less known to the public than its Silicon Valley counterparts, Qualcomm perfectly fits the anti-hero mold. Despite being a key contributor to the communications standards that enabled the proliferation of smartphones around the globe – an estimated 5 billion people currently own a device – Qualcomm has been on the receiving end of considerable regulatory scrutiny on both sides of the Atlantic (including two cases in the EU; see here and here).

In the US, Judge Lucy Koh recently ruled that a combination of anticompetitive practices had enabled Qualcomm to charge “unreasonably high royalty rates” for its CDMA and LTE cellular communications technology. Chief among these practices was Qualcomm’s so-called “no license, no chips” policy, whereby the firm refuses to sell baseband processors to implementers that have not taken out a license for its communications technology. Other grievances included Qualcomm’s purported refusal to license its patents to rival chipmakers, and allegations that it attempted to extract exclusivity obligations from large handset manufacturers, such as Apple. According to Judge Koh, these practices resulted in “unreasonably high” royalty rates that failed to comply with Qualcomm’s FRAND obligations.

Judge Koh’s ruling offers an unfortunate example of the numerous pitfalls that decisionmakers face when they second-guess the distributional outcomes achieved through market forces. This is particularly true in the complex standardization space.

The elephant in the room

The first striking feature of Judge Koh’s ruling is what it omits. Throughout the document’s more than two hundred pages, there is not a single reference to the concepts of holdup or holdout (crucial terms of art for a ruling that grapples with the prices charged by an SEP holder).

At first sight, this might seem like a semantic quibble. But words are important. Patent holdup (along with the “unreasonable” royalties to which it arguably gives rise) is possible only when a number of cumulative conditions are met. Most importantly, the foundational literature on economic opportunism (here and here) shows that holdup (and holdout) mostly occur when parties have made asset-specific sunk investments. This focus on asset-specific investments is echoed by even the staunchest critics of the standardization status quo (here).

Though such investments may well have been present in the case at hand, there is no evidence that they played any part in the court’s decision. This is not without consequences. If parties did not make sunk relationship-specific investments, then the antitrust case against Qualcomm should have turned upon the alleged exclusion of competitors, not the level of Qualcomm’s royalties. The DOJ said this much in its statement of interest concerning Qualcomm’s motion for partial stay of injunction pending appeal. Conversely, if these investments existed, then patent holdout (whereby implementers refuse to license key pieces of intellectual property) was just as much of a risk as patent holdup (here and here). And yet the court completely overlooked this possibility.

The misguided push for component level pricing

The court also erred by objecting to Qualcomm’s practice of basing license fees on the value of handsets, rather than that of modem chips. In simplified terms, implementers paid Qualcomm a percentage of their devices’ resale price. The court found that this was against Federal Circuit law. Instead, it held that royalties should be based on the value of the smallest salable patent-practicing component (in this case, baseband chips). This conclusion is dubious both as a matter of law and of policy.

From a legal standpoint, the question of the appropriate royalty base seems far less clear-cut than Judge Koh’s ruling might suggest. For instance, Gregory Sidak observes that in TCL v. Ericsson Judge Selna used a device’s net selling price as a basis upon which to calculate FRAND royalties. Likewise, in CSIRO v. Cisco, the Court also declined to use the “smallest saleable practicing component” as a royalty base. And finally, as Jonathan Barnett observes, the Federal Circuit’s LaserDynamics case law cited by Judge Koh relates to the calculation of damages in patent infringement suits. There is no legal reason to believe that its findings should hold any sway outside of that narrow context. It is one thing for courts to decide upon the methodology that they will use to calculate damages in infringement cases – even if it is a contested one. It is a whole other matter to shoehorn private parties into adopting this narrow methodology in their private dealings.

More importantly, from a policy standpoint, there are important advantages to basing royalty rates on the price of an end-product, rather than that of an intermediate component. This type of pricing notably enables parties to better allocate the risk that is inherent in launching a new product. In simplified terms: implementers want to avoid paying large (fixed) license fees for failed devices; and patent holders want to share in the benefits of successful devices that rely on their inventions. The solution, as Alain Bousquet and his co-authors explain, is to agree on royalty payments that are contingent on success in the market:

Because the demand for a new product is uncertain and/or the potential cost reduction of a new technology is not perfectly known, both seller and buyer may be better off if the payment for the right to use an innovation includes a state-contingent royalty (rather than consisting of just a fixed fee). The inventor wants to benefit from a growing demand for a new product, and the licensee wishes to avoid high payments in case of disappointing sales.

While this explains why parties might opt for royalty-based payments over fixed fees, it does not entirely elucidate the practice of basing royalties on the price of an end device. One explanation is that a technology’s value will often stem from its combination with other goods or technologies. Basing royalties on the value of an end-device enables patent holders to more effectively capture the social benefits that flow from these complementarities.

Imagine the price of the smallest saleable component is identical across all industries, despite it being incorporated into highly heterogeneous devices. For instance, the same modem chip could be incorporated into smartphones (of various price ranges), tablets, vehicles, and other connected devices. The Bousquet line of reasoning (above) suggests that it is efficient for the patent holder to earn higher royalties (from the IP that underpins the modem chips) in those segments where market demand is strongest (i.e. where there are stronger complementarities between the modem chip and the end device).

One way to make royalties more contingent on market success is to use the price of the modem (which is presumably identical across all segments) as a royalty base and negotiate a separate royalty rate for each end device (charging a higher rate for devices that will presumably benefit from stronger consumer demand). But this has important drawbacks. For a start, identifying those segments (or devices) that are most likely to be successful is informationally cumbersome for the inventor. Moreover, this practice could land the patent holder in hot water. Antitrust authorities might naïvely conclude that these varying royalty rates violate the “non-discriminatory” part of FRAND.

A much simpler solution is to apply a single royalty rate (or at least attempt to do so) but use the price of the end device as a royalty base. This ensures that the patent holder’s rewards are not just contingent on the number of devices sold, but also on their value. Royalties will thus more closely track the end-device’s success in the marketplace.   

In short, basing royalties on the value of an end-device is an informationally light way for the inventor to capture some of the unforeseen value that might stem from the inclusion of its technology in an end device. Mandating that royalty rates be based on the value of the smallest saleable component ignores this complex reality.
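A hypothetical numerical sketch of the tradeoff described above: a single ad valorem rate on the end device automatically earns more where the technology is combined with more valuable complements, while a royalty tied to the (identical) chip price stays flat across wildly different devices unless the patent holder negotiates device-by-device rates. All prices and rates below are invented.

```python
# Hypothetical sketch: royalty revenue under an end-device base vs. a component base.
# All prices and rates are invented for illustration only.

devices = {
    "budget phone":   {"device_price": 150,  "chip_price": 20},
    "flagship phone": {"device_price": 900,  "chip_price": 20},
    "tablet":         {"device_price": 500,  "chip_price": 20},
    "connected car":  {"device_price": 2000, "chip_price": 20},
}

end_device_rate = 0.03   # a single 3% rate on the device's selling price
chip_rate = 0.50         # 50% of the chip price -- a flat $10 per unit for every device

for name, d in devices.items():
    on_device = end_device_rate * d["device_price"]   # scales with the device's market value
    on_chip = chip_rate * d["chip_price"]             # identical regardless of the device
    print(f"{name:15s} device-based royalty: ${on_device:7.2f}   chip-based royalty: ${on_chip:6.2f}")
```

The device-based royalty tracks market success without the inventor having to predict, segment by segment, which devices will succeed.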

Prices are almost impossible to reconstruct

Judge Koh was similarly imperceptive when assessing Qualcomm’s contribution to the value of key standards, such as LTE and CDMA. 

For a start, she reasoned that Qualcomm’s royalties were large compared to the number of patents it had contributed to these technologies:

Moreover, Qualcomm’s own documents also show that Qualcomm is not the top standards contributor, which confirms Qualcomm’s own statements that QCT’s monopoly chip market share rather than the value of QTL’s patents sustain QTL’s unreasonably high royalty rates.

Given the tremendous heterogeneity that usually exists among the different technologies that make up a standard, simply counting each firm’s contributions is a crude and misleading way to gauge the value of their patent portfolios. Indeed, Qualcomm argued that it had made pioneering contributions to technologies such as CDMA and 4G/5G. Though the value of Qualcomm’s technologies is ultimately an empirical question, the court’s crude patent counting was unlikely to provide a satisfying answer.

Just as problematically, the court also concluded that Qualcomm’s royalties were unreasonably high because “modem chips do not drive handset value.” In its own words:

Qualcomm’s intellectual property is for communication, and Qualcomm does not own intellectual property on color TFT LCD panel, mega-pixel DSC module, user storage memory, decoration, and mechanical parts. The costs of these non-communication-related components have become more expensive and now contribute 60-70% of the phone value. The phone is not just for communication, but also for computing, movie-playing, video-taking, and data storage.

As Luke Froeb and his co-authors have also observed, the court’s reasoning on this point is particularly unfortunate. Though it is clearly true that superior LCD panels, cameras, and storage increase a handset’s value – regardless of the modem chip that is associated with them – it is equally obvious that improvements to these components are far more valuable to consumers when they are also associated with high-performance communications technology.

For example, though there is undoubtedly standalone value in being able to take improved pictures on a smartphone, this value is multiplied by the ability to instantly share these pictures with friends, and automatically back them up on the cloud. Likewise, improving a smartphone’s LCD panel is more valuable if the device is also equipped with a cutting edge modem (both are necessary for consumers to enjoy high-definition media online).

In more technical terms, the court fails to acknowledge that, in the presence of perfect complements, each good makes an incremental contribution of 100% to the value of the whole. A smartphone’s components would be far less valuable to consumers if they were not associated with a high-performance modem, and vice versa. The fallacy to which the court falls prey is perfectly encapsulated by a quote it cites from Apple’s COO:

Apple invests heavily in the handset’s physical design and enclosures to add value, and those physical handset features clearly have nothing to do with Qualcomm’s cellular patents, it is unfair for Qualcomm to receive royalty revenue on that added value.

The question the court should be asking, however, is whether Apple would have gone to the same lengths to improve its devices were it not for Qualcomm’s complementary communications technology. By ignoring this question, Judge Koh all but guaranteed that her assessment of Qualcomm’s royalty rates would be wide of the mark.
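The complementarity point can be made with stylized numbers. With (near-)perfect complements, the willingness to pay for the device collapses if either component is removed, so the incremental value attributable to each component, measured against the device without it, sums to far more than the value of the whole. The figures below are invented for illustration.

```python
# Stylized sketch of incremental value with (near-)perfect complements.
# Willingness-to-pay figures are invented for illustration only.

wtp_full_device = 800       # handset with a great camera, screen, AND a working modem
wtp_without_modem = 120     # same hardware, but no cellular connectivity
wtp_without_camera = 350    # connected handset with a bare-bones camera

incremental_value_modem = wtp_full_device - wtp_without_modem      # 680
incremental_value_camera = wtp_full_device - wtp_without_camera    # 450

print("incremental value attributable to the modem: ", incremental_value_modem)
print("incremental value attributable to the camera:", incremental_value_camera)
print("sum of incremental values vs. total device value:",
      incremental_value_modem + incremental_value_camera, "vs.", wtp_full_device)
```

Because the incremental contributions overlap, apportioning the device’s value by pointing to the cost shares of individual components, as the court did, tells us little about what each technology is actually worth.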

Concluding remarks

In short, the FTC v. Qualcomm case shows that courts will often struggle when they try to act as makeshift price regulators. It thus lends further credence to Gregory Werden and Luke Froeb’s conclusion that:

Nothing is more alien to antitrust than enquiring into the reasonableness of prices. 

This is especially true in complex industries, such as the standardization space. The colossal number of parameters that affect the price of a technology makes that price almost impossible to reconstruct in a top-down fashion, as the court attempted to do in the Qualcomm case. As a result, courts will routinely draw poor inferences from factors such as the royalty base agreed upon by parties, the number of patents contributed by a firm, and the complex manner in which an individual technology may contribute to the value of an end-product. Antitrust authorities and courts would thus do well to recall the wise words of Friedrich Hayek:

If we can agree that the economic problem of society is mainly one of rapid adaptation to changes in the particular circumstances of time and place, it would seem to follow that the ultimate decisions must be left to the people who are familiar with these circumstances, who know directly of the relevant changes and of the resources immediately available to meet them. We cannot expect that this problem will be solved by first communicating all this knowledge to a central board which, after integrating all knowledge, issues its orders. We must solve it by some form of decentralization.

[This post is the sixth in an ongoing symposium on “Should We Break Up Big Tech?” that features analysis and opinion from various perspectives.]

[This post is authored by Thibault Schrepel, Faculty Associate at the Berkman Center at Harvard University and Assistant Professor in European Economic Law at Utrecht University School of Law.]

The pretense of ignorance

Over the last few years, I have published a series of antitrust conversations with Nobel laureates in economics. I have discussed big tech dominance with most of them, and although they have different perspectives, all of them agreed on one thing: they do not know what the effect of breaking up big tech would be. In fact, I have never spoken with any economist who was able to show me convincing empirical evidence that breaking up big tech would on net be good for consumers. The same goes for political scientists; I have never read any article that, taking everything into consideration, proves empirically that breaking up tech companies would be good for protecting democracies, if that is the objective (please note that I am not even discussing the fact that using antitrust law to do so would violate the rule of law; for more on the subject, click here).

This reminds me of Friedrich Hayek’s Nobel memorial lecture, in which he discussed the “pretense of knowledge.” He argued that some issues will always remain too complex for humans (even helped by quantum computers and the most advanced AI; that’s right!). Breaking up big tech is one such issue; it is simply impossible to consider simultaneously the micro- and macroeconomic impacts of such an enormous undertaking, which would affect, literally, billions of people. Not to mention the political, sociological, and legal issues, all of which, combined, are beyond human understanding.

Ignorance + fear = fame

In the absence of clear-cut conclusions, here is why (I think) some officials are arguing for breaking up big tech. First, it may be that some of them actually believe it would be great. But I am sure we agree that beliefs should not be a valid basis for such actions. More realistically, the answer can be found in the work of another Nobel laureate, James Buchanan, and in particular his 1978 lecture in Vienna entitled “Politics Without Romance.”

In his lecture and the paper that emerged from it, Buchanan argued that while markets fail, so do governments. The latter is especially relevant insofar as top officials entrusted with public power may, occasionally at least, use that power to benefit their personal interests rather than the public interest. Thus, the presumption that government-imposed corrections for market failures always accomplish the desired objectives must be rejected. Taking that into consideration, it follows that the expected effectiveness of public action should always be established as precisely and scientifically as possible before taking action. Integrating these insights from Hayek and Buchanan, we must conclude that it is not possible to know whether the effects of breaking up big tech would on net be positive.

The question then is why, in the absence of positive empirical evidence, some officials are arguing for breaking up tech giants. Well, because defending such actions may help them achieve their personal goals. Often, it is more important for public officials to show their muscle and take action than to show great care about reaching a positive net result for society. This is especially true when it is practically impossible to evaluate the outcome, due to the scale and complexity of the changes that ensue. That enables these officials to take credit for being bold while avoiding blame for the harms.

But for such a call to be profitable for public officials, they first must legitimize the potential action in the eyes of a majority of the public. So far, most consumers evidently like the services of tech giants, which is why it is crucial for the officials engaged in such a strategy to demonize those companies and explain to consumers why they are wrong to enjoy them. Only then does defending the breakup of tech giants become politically valuable.

Some data, one trend

In a recent paper entitled “Antitrust Without Romance,” I have analyzed the speeches of the five current FTC commissioners, as well as the speeches of the current and three previous EU Competition Commissioners. What I found is an increasing trend to demonize big tech companies. In other words, public officials increasingly seek to prepare the general public for the idea that breaking up tech giants would be great.

In Europe, current Competition Commissioner Margrethe Vestager has sought to establish an opposition between the people (referred to with the pronoun “us”) and tech companies (referred to with the pronoun “them”) in more than 80% of her speeches. She further describes these companies as manipulating the public and unleashing violence. She says they “distort or fabricate information, manipulate people’s views and degrade public debate” and help “harmful, untrue information spread faster than ever, unleashing violence and undermining democracy.” Furthermore, she says they cause “danger of death.” On this basis, she mentions the possibility of breaking them up (for more data about her speeches, see this link).

In the US, we did not observe a similar trend. Assistant Attorney General Makan Delrahim, who has responsibility for antitrust enforcement at the Department of Justice, describes the relationship between the people and companies as oppositional in fewer than 10% of his speeches. The same goes for most of the FTC commissioners (to see all the data about their speeches, see this link). The exceptions are FTC Chairman Joseph J. Simons, who describes companies’ behavior as “bad” from time to time (and underlines that consumers “deserve” better), and Commissioner Rohit Chopra, who describes the relationship between companies and the people as oppositional in 30% of his speeches. Chopra also frequently labels companies as “bad.” These are minor signs of big-tech demonization compared to what European officials are currently doing. But, unfortunately, part of US antitrust scholarship (which does not hide its political objectives) pushes for demonizing big tech companies. One may fear that this trend will grow in the US as it has in Europe, especially given the upcoming presidential campaign, in which far-right and far-left politicians seem to agree on the need to break up big tech.

And yet, let’s remember that no one has any documented, tangible, and reproducible evidence that breaking up tech giants would be good for consumers, or for societies at large, or, in fact, for anyone (even dolphins, okay). It might be a good idea; it might be a bad idea. Who knows? But the lack of evidence either way militates against taking such action. Meanwhile, there is strong evidence that these discussions are fueled by a handful of individuals wishing to benefit from such a call for action. They do so, first, by depicting tech giants as the new elite standing in opposition to the people, and then by portraying themselves as the only saviors capable of taking action.

Epilogue: who knows, life is not a Tarantino movie

For the last 30 years, antitrust law has been largely immune to strategic takeover by political interests. It may now be returning to an earlier era in which it was the instrument of a few. This transformation is already happening in Europe (it is expected to hit case law there quite soon) and is getting real in the US, where groups with openly political goals seek to make antitrust law a Trojan horse for their own interests. The only semblance of evidence they bring is a handful of allegedly harmful micro-practices (see Amazon’s Antitrust Paradox), which they use as a basis for defending the urgent need for macro, structural measures, such as breaking up tech companies. This is disproportionate, but most of all, in the absence of better knowledge, purely opportunistic and potentially foolish. Who knows at this point whether antitrust law will come out of this populist and moralist episode intact? And who knows what the next idea of those who want to use antitrust law for purely political purposes will be? Life is not a Tarantino movie; it may end badly.

After spending a few years away from ICLE, directly engaged in the day-to-day grind of indigent criminal defense as a public defender, I have a new appreciation for the ways economic tools can explain behavior I had not previously studied. For instance, I think the law and economics tradition, specifically the insights of Ludwig von Mises and Friedrich Hayek on the importance of price signals, can explain one of the major problems for public defenders and their clients: without price signals, there is no rational way to determine the best way to spend one’s time.

I believe the most common complaints about how public defenders represent their clients are better understood not primarily as reflecting a lack of funding, a lack of effort or care, or even simply a lack of time for overburdened lawyers, but as reflecting an allocation problem. In the absence of price signals, there is no rational way to determine the best way to spend one’s time as a public defender. (Note: Many jurisdictions use the model of indigent defense described here, in which lawyers are paid a salary to work for the public defender’s office. Others, however, use models like contracting with lawyers for particular cases, appointing lawyers for a flat fee, relying on non-profit agencies, or combining approaches in some type of hybrid. These models all have their own advantages and disadvantages, but this blog post addresses only the issue of price signals for lawyers who work within a public defender’s office.)

As Mises and Hayek taught us, price signals carry a great deal of information; indeed, they make economic calculation possible. Their critique of socialism was built around this idea: that the person in charge of making economic choices without prices and the profit-and-loss mechanism is “groping in the dark.”

This isn’t to say that people haven’t tried to find ways to figure out the best way to spend their time in the absence of the profit-and-loss mechanism. In such environments, bureaucratic rules often replace price signals in directing human action. Lawyers, for instance, have rules of professional conduct. These rules, along with concerns about reputation and other institutional checks, may guide lawyers on how best to spend their time as a general matter. But even these things are no match for price signals in determining the most efficient way to allocate the scarcest resource of all: time.

Imagine two lawyers, one working for a public defender’s office who receives a salary that is not dependent on caseload or billable hours, and another private defense lawyer who charges his client for the work that is put in.

In either case, the lawyer who is handed a file for a case scheduled for trial months in advance has a choice to make: do I start working on this now, or do I put it on the back burner because of cases with much closer deadlines? A cursory review of the file shows a possible suppression issue that will require further investigation. A successful suppression motion would likely resolve the case without a conviction, but it would take considerable time – time that could be spent working on numerous client files with closer trial dates. For the sake of this hypothetical, assume there is a strong legal basis for filing the suppression motion (i.e., it is not frivolous).

The private defense lawyer has a mechanism beyond what is available to public defenders to determine how to handle this case: price signals. He can bring the suppression issue to his client’s attention, explain the likelihood of success, and then offer to file and argue the suppression motion for some agreed-upon price. The client can then determine with counsel whether this is worthwhile.

The public defender, on the other hand, has no price signals to tell him where this suppression motion should rank within the rest of his workload. He could spend the time necessary to develop the facts and research the law for the suppression motion, but unless there is a quickly approaching filing deadline, many other cases in the queue with closer deadlines will be begging for his attention. Clients, who face no rationing principle rooted in personal monetary costs, would obviously prefer that their public defender file any and all motions that have any chance whatsoever of helping them, regardless of merit.
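
To make the contrast concrete, here is a minimal, purely illustrative sketch (the numbers and variable names are invented for this hypothetical, not drawn from any real case) of the calculation the private lawyer’s quoted price makes possible, and that the public defender must attempt without prices:

```python
# Hypothetical numbers, purely for illustration: how a quoted price lets the
# private lawyer and his client jointly decide whether the suppression motion
# is worth the lawyer's scarce hours.

# Known (roughly) only to the client: how much avoiding a conviction is worth to him.
value_of_avoiding_conviction = 50_000   # client's subjective valuation, in dollars
probability_motion_succeeds = 0.40      # counsel's estimate, shared with the client

# Known (roughly) only to the lawyer: the opportunity cost of the hours involved.
hours_required = 25
hourly_rate = 250                       # reflects competing demands on the lawyer's time
quoted_price = hours_required * hourly_rate

# The motion gets filed only if the client's expected benefit exceeds the quoted price.
expected_benefit = probability_motion_succeeds * value_of_avoiding_conviction
file_the_motion = expected_benefit > quoted_price

print(f"Quoted price:              ${quoted_price:,}")
print(f"Client's expected benefit: ${expected_benefit:,.0f}")
print(f"File the motion?           {file_the_motion}")

# The salaried public defender has no analogous mechanism: neither the client's
# valuation nor the opportunity cost of the lawyer's hours is priced, so the
# same choice must be made by deadlines, squeaky wheels, or triage heuristics.
```

The point is not the particular numbers; it is that the price pulls together information neither party possesses alone. Strip out the price, and the comparison that drives the allocation decision has nothing to operate on.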

What this hypothetical shows is that public defenders do not face the same incentive structure as private lawyers when it comes to the allocation of time. But neither do criminal defendants. Indigent defendants who qualify for public defender representation often complain about their “public pretender” “not doing anything for them.” But the simple truth is that the public defender is making choices about how to spend his time more or less by his own determination of where he can be most useful. Deadlines often drive the review of cases, along with who sends the most letters and/or calls. The actual evaluation of which cases have the most merit can fall through the cracks. Oftentimes this means cases are worked on in chronological order, but insufficient time and effort is spent on particular cases that would have merited more investment, because of quickly approaching deadlines in other cases. Sometimes it means the most annoying clients get the most time spent on their behalf, irrespective of the merits of their cases. At best, public defenders act like battlefield medics, attempting to perform triage by spending their time where they believe they can help the most.

Unlike private criminal defense lawyers, public defenders typically can’t reject cases because their caseload has grown too big, or charge a higher price to take on a particularly difficult and time-consuming case. The public defender is therefore left to guess at the best use of his time with the heuristics described above and to do the very best he can under the circumstances. Unfortunately, those heuristics simply can’t replace price signals in determining the best use of one’s time.

As criminal justice reform becomes a policy issue for both left and right, law and economics analysis should have a place in the conversation. Any reforms of indigent defense that are part of this broader effort should take into consideration the calculation problem inherent in the public defender’s office. Other institutional arrangements that do not suffer from this particular problem, such as a well-designed voucher system, may be preferable.

Ours is not an age of nuance.  It’s an age of tribalism, of teams—“Yer either fer us or agin’ us!”  Perhaps I should have been less surprised, then, when I read the unfavorable review of my book How to Regulate in, of all places, the Federalist Society Review.

I had expected some positive feedback from reviewer J. Kennerly Davis, a contributor to the Federalist Society’s Regulatory Transparency Project.  The “About” section of the Project’s website states:

In the ultra-complex and interconnected digital age in which we live, government must issue and enforce regulations to protect public health and safety.  However, despite the best of intentions, government regulation can fail, stifle innovation, foreclose opportunity, and harm the most vulnerable among us.  It is for precisely these reasons that we must be diligent in reviewing how our policies either succeed or fail us, and think about how we might improve them.

I might not have expressed these sentiments in such pro-regulation terms.  For example, I don’t think government should regulate, even “to protect public health and safety,” absent (1) a market failure and (2) confidence that systematic governmental failures won’t cause the cure to be worse than the disease.  I agree, though, that regulation is sometimes appropriate, that government interventions often fail (in systematic ways), and that regulatory policies should regularly be reviewed with an eye toward reducing the combined costs of market and government failures.

Those are, in fact, the central themes of How to Regulate.  The book sets forth an overarching goal for regulation (minimize the sum of error and decision costs) and then catalogues, for six oft-cited bases for regulating, what regulatory tools are available to policymakers and how each may misfire.  For every possible intervention, the book considers the potential for failure from two sources—the knowledge problem identified by F.A. Hayek and public choice concerns (rent-seeking, regulatory capture, etc.).  It ends up arguing:

  • for property rights-based approaches to environmental protection (versus the command-and-control status quo);
  • for increased reliance on the private sector to produce public goods;
  • that recognizing property rights, rather than allocating usage, is the best way to address the tragedy of the commons;
  • that market-based mechanisms, not shareholder suits and mandatory structural rules like those imposed by Sarbanes-Oxley and Dodd-Frank, are the best way to constrain agency costs in the corporate context;
  • that insider trading restrictions should be left to corporations themselves;
  • that antitrust law should continue to evolve in the consumer welfare-focused direction Robert Bork recommended;
  • against the FCC’s recently abrogated net neutrality rules;
  • that occupational licensure is primarily about rent-seeking and should be avoided;
  • that incentives for voluntary disclosure will usually obviate the need for mandatory disclosure to correct information asymmetry;
  • that the claims of behavioral economics do not justify paternalistic policies to protect people from themselves; and
  • that “libertarian-paternalism” is largely a ruse that tends to morph into hard paternalism.

Given the congruence of my book’s prescriptions with the purported aims of the Regulatory Transparency Project—not to mention the laundry list of specific market-oriented policies the book advocates—I had expected a generally positive review from Mr. Davis (whom I sincerely thank for reading and reviewing the book; book reviews are a ton of work).

I didn’t get what I’d expected.  Instead, Mr. Davis denounced my book for perpetuating “progressive assumptions about state and society” (“wrongheaded” assumptions, the editor’s introduction notes).  He responded to my proposed methodology with a “meh,” noting that it “is not clearly better than the status quo.”  His one compliment, which I’ll gladly accept, was that my discussion of economic theory was “generally accessible.”

Following are a few thoughts on Mr. Davis’s critiques.

Are My Assumptions Progressive?

According to Mr. Davis, my book endorses three progressive concepts:

(i) the idea that market based arrangements among private parties routinely misallocate resources, (ii) the idea that government policymakers are capable of formulating executive directives that can correct private ordering market failures and optimize the allocation of resources, and (iii) the idea that the welfare of society is actually something that exists separate and apart from the individual welfare of each of the members of society.

I agree with Mr. Davis that these are progressive ideas.  If my book embraced them, it might be fair to label it “progressive.”  But it doesn’t.  Not one of them.

  1. Market Failure

Nothing in my book suggests that “market based arrangements among private parties routinely misallocate resources.”  I do say that “markets sometimes fail to work well,” and I explain how, in narrow sets of circumstances, market failures may emerge.  Understanding exactly what may happen in those narrow sets of circumstances helps to identify the least restrictive option for addressing problems and would thus seem a prerequisite to effective policymaking for a conservative or libertarian.  My mere invocation of the term “market failure,” however, was enough for Mr. Davis to kick me off the team.

Mr. Davis ignored altogether the many points where I explain how private ordering fixes situations that could lead to poor market performance.  At the end of the information asymmetry chapter, for example, I write,

This chapter has described information asymmetry as a problem, and indeed it is one.  But it can also present an opportunity for profit.  Entrepreneurs have long sought to make money—and create social value—by developing ways to correct informational imbalances and thereby facilitate transactions that wouldn’t otherwise occur.

I then describe the advent of companies like Carfax, Airbnb, and Uber, all of which offer privately ordered solutions to instances of information asymmetry that might otherwise create lemons problems.  I conclude:

These businesses thrive precisely because of information asymmetry.  By offering privately ordered solutions to the problem, they allow previously under-utilized assets to generate heretofore unrealized value.  And they enrich the people who created and financed them.  It’s a marvelous thing.

That theme—that potential market failures invite privately ordered solutions that often obviate the need for any governmental fix—permeates the book.  In the public goods chapter, I spend a great deal of time explaining how privately ordered devices like assurance contracts facilitate the production of amenities that are non-rivalrous and non-excludable.  In discussing the tragedy of the commons, I highlight Elinor Ostrom’s work showing how “groups of individuals have displayed a remarkable ability to manage commons goods effectively without either privatizing them or relying on government intervention.”  In the chapter on externalities, I spend a full seven pages explaining why Coasean bargains are more likely than most people think to prevent inefficiencies from negative externalities.  In the chapter on agency costs, I explain why privately ordered solutions like the market for corporate control would, if not precluded by some ill-conceived regulations, constrain agency costs better than structural rules from the government.

Disregarding all this, Mr. Davis chides me for assuming that “markets routinely fail.”  And, for good measure, he explains that government interventions are often a bigger source of failure, a point I repeatedly acknowledge, as it is a—perhaps the—central theme of the book.

  2. Trust in Experts

In what may be the strangest (and certainly the most misleading) part of his review, Mr. Davis criticizes me for placing too much confidence in experts by giving short shrift to the Hayekian knowledge problem and the insights of public choice.

          a.  The Knowledge Problem

According to Mr. Davis, the approach I advocate “is centered around fully functioning experts.”  He continues:

This progressive trust in experts is misplaced.  It is simply false to suppose that government policymakers are capable of formulating executive directives that effectively improve upon private arrangements and optimize the allocation of resources.  Friedrich Hayek and other classical liberals have persuasively argued, and everyday experience has repeatedly confirmed, that the information needed to allocate resources efficiently is voluminous and complex and widely dispersed.  So much so that government experts acting through top down directives can never hope to match the efficiency of resource allocation made through countless voluntary market transactions among private parties who actually possess the information needed to allocate the resources most efficiently.

Amen and hallelujah!  I couldn’t agree more!  Indeed, I said something similar when I came to the first regulatory tool my book examines (and criticizes), command-and-control pollution rules.  I wrote:

The difficulty here is an instance of a problem that afflicts regulation generally.  At the end of the day, regulating involves centralized economic planning:  A regulating “planner” mandates that productive resources be allocated away from some uses and toward others.  That requires the planner to know the relative value of different resource uses.  But such information, in the words of Nobel laureate F.A. Hayek, “is not given to anyone in its totality.”  The personal preferences of thousands or millions of individuals—preferences only they know—determine whether there should be more widgets and fewer gidgets, or vice-versa.  As Hayek observed, voluntary trading among resource owners in a free market generates prices that signal how resources should be allocated (i.e., toward the uses for which resource owners may command the highest prices).  But centralized economic planners—including regulators—don’t allocate resources on the basis of relative prices.  Regulators, in fact, generally assume that prices are wrong due to the market failure the regulators are seeking to address.  Thus, the so-called knowledge problem that afflicts regulation generally is particularly acute for command-and-control approaches that require regulators to make refined judgments on the basis of information about relative costs and benefits.

That was just the first of many times I invoked the knowledge problem to argue against top-down directives and in favor of market-oriented policies that would enable individuals to harness local knowledge to which regulators would not be privy.  The index to the book includes a “knowledge problem” entry with no fewer than nine sub-entries (e.g., “with licensure regimes,” “with Pigouvian taxes,” “with mandatory disclosure regimes”).  There are undoubtedly more mentions of the knowledge problem than those listed in the index, for the book assesses the degree to which the knowledge problem creates difficulties for every regulatory approach it considers.

Mr. Davis does mention one time where I “acknowledge[] the work of Hayek” and “recognize[] that context specific information is vitally important,” but he says I miss the point:

Having conceded these critical points [about the importance of context-specific information], Professor Lambert fails to follow them to the logical conclusion that private ordering arrangements are best for regulating resources efficiently.  Instead, he stops one step short, suggesting that policymakers defer to the regulator most familiar with the regulated party when they need context-specific information for their analysis.  Professor Lambert is mistaken.  The best information for resource allocation is not to be found in the regional office of the regulator.  It resides with the persons who have long been controlled and directed by the progressive regulatory system.  These are the ones to whom policymakers should defer.

I was initially puzzled by Mr. Davis’s description of how my approach would address the knowledge problem.  It’s inconsistent with the way I described the problem (the “regional office of the regulator” wouldn’t know people’s personal preferences, etc.), and I couldn’t remember ever suggesting that regulatory devolution—delegating decisions down toward local regulators—was the solution to the knowledge problem.

When I checked the citation in the sentences just quoted, I realized that Mr. Davis had misunderstood the point I was making in the passage he cited (my own fault, no doubt, not his).  The cited passage was at the very end of the book, where I was summarizing the book’s contributions.  I claimed to have set forth a plan for selecting regulatory approaches that would minimize the sum of error and decision costs.  I wanted to acknowledge, though, the irony of promulgating a generally applicable plan for regulating in a book that, time and again, decries top-down imposition of one-size-fits-all rules.  Thus, I wrote:

A central theme of this book is that Hayek’s knowledge problem—the fact that no central planner can possess and process all the information needed to allocate resources so as to unlock their greatest possible value—applies to regulation, which is ultimately a set of centralized decisions about resource allocation.  The very knowledge problem besetting regulators’ decisions about what others should do similarly afflicts pointy-headed academics’ efforts to set forth ex ante rules about what regulators should do.  Context-specific information to which only the “regulator on the spot” is privy may call for occasional departures from the regulatory plan proposed here.

As should be obvious, my point was not that the knowledge problem can generally be fixed by regulatory devolution.  Rather, I was acknowledging that the general regulatory approach I had set forth—i.e., the rules policymakers should follow in selecting among regulatory approaches—may occasionally misfire and should thus be implemented flexibly.

           b.  Public Choice Concerns

A second problem with my purported trust in experts, Mr. Davis explains, stems from the insights of public choice:

Actual policymakers simply don’t live up to [Woodrow] Wilson’s ideal of the disinterested, objective, apolitical, expert technocrat.  To the contrary, a vast amount of research related to public choice theory has convincingly demonstrated that decisions of regulatory agencies are frequently shaped by politics, institutional self-interest and the influence of the entities the agencies regulate.

Again, huzzah!  Those words could have been lifted straight out of the three full pages of discussion I devoted to public choice concerns with the very first regulatory intervention the book considered.  A snippet from that discussion:

While one might initially expect regulators pursuing the public interest to resist efforts to manipulate regulation for private gain, that assumes that government officials are not themselves rational, self-interest maximizers.  As scholars associated with the “public choice” economic tradition have demonstrated, government officials do not shed their self-interested nature when they step into the public square.  They are often receptive to lobbying in favor of questionable rules, especially since they benefit from regulatory expansions, which tend to enhance their job status and often their incomes.  They also tend to become “captured” by powerful regulatees who may shower them with personal benefits and potentially employ them after their stints in government have ended.

That’s just a slice.  Elsewhere in those three pages, I explain (1) how the dynamic of concentrated benefits and diffuse costs allows inefficient protectionist policies to persist, (2) how firms that benefit from protectionist regulation are often assisted by “pro-social” groups that will make a public interest case for the rules (Bruce Yandle’s Bootleggers and Baptists syndrome), and (3) the “[t]wo types of losses [that] result from the sort of interest-group manipulation public choice predicts.”  And that’s just the book’s initial foray into public choice.  The entry for “public choice concerns” in the book’s index includes eight sub-entries.  As with the knowledge problem, I addressed the public choice issues that could arise from every major regulatory approach the book considered.

For Mr. Davis, though, that was not enough to keep me out of the camp of Wilsonian progressives.  He explains:

Professor Lambert devotes a good deal of attention to the problem of “agency capture” by regulated entities.  However, he fails to acknowledge that a symbiotic relationship between regulators and regulated is not a bug in the regulatory system, but an inherent feature of a system defined by extensive and continuing government involvement in the allocation of resources.

To be honest, I’m not sure what that last sentence means.  Apparently, I didn’t recite some talismanic incantation that would indicate that I really do believe public choice concerns are a big problem for regulation.  I did say this in one of the book’s many discussions of public choice:

A regulator that has both regular contact with its regulatees and significant discretionary authority over them is particularly susceptible to capture.  The regulator’s discretionary authority provides regulatees with a strong motive to win over the regulator, which has the power to hobble the regulatee’s potential rivals and protect its revenue stream.  The regular contact between the regulator and the regulatee provides the regulatee with better access to those in power than that available to parties with opposing interests.  Moreover, the regulatee’s preferred course of action is likely (1) to create concentrated benefits (to the regulatee) and diffuse costs (to consumers generally), and (2) to involve an expansion of the regulator’s authority.  The upshot is that those who bear the cost of the preferred policy are less likely to organize against it, and regulators, who benefit from turf expansion, are more likely to prefer it.  Rate-of-return regulation thus involves the precise combination that leads to regulatory expansion at consumer expense: broad and discretionary government power, close contact between regulators and regulatees, decisions that generally involve concentrated benefits and diffuse costs, and regular opportunities to expand regulators’ power and prestige.

In light of this combination of features, it should come as no surprise that the history of rate-of-return regulation is littered with instances of agency capture and regulatory expansion.

Even that was not enough to convince Mr. Davis that I reject the Wilsonian assumption of “disinterested, objective, apolitical, expert technocrat[s].”  I don’t know what more I could have said.

  3. Social Welfare

Mr. Davis is right when he says, “Professor Lambert’s ultimate goal for his book is to provide policymakers with a resource that will enable them to make regulatory decisions that produce greater social welfare.”  But nowhere in my book do I suggest, as he says I do, “that the welfare of society is actually something that exists separate and apart from the individual welfare of each of the members of society.”  What I mean by “social welfare” is the aggregate welfare of all the individuals in a society.  And I’m careful to point out that only they know what makes them better off.  (At one point, for example, I write that “[g]overnment planners have no way of knowing how much pleasure regulatees derive from banned activities…or how much displeasure they experience when they must comply with an affirmative command…. [W]ith many paternalistic policies and proposals…government planners are really just guessing about welfare effects.”)

I agree with Mr. Davis that “[t]here is no single generally accepted methodology that anyone can use to determine objectively how and to what extent the welfare of society will be affected by a particular regulatory directive.”  For that reason, nowhere in the book do I suggest any sort of “metes and bounds” measurement of social welfare.  (I certainly do not endorse the use of GDP, which Mr. Davis rightly criticizes; that term appears nowhere in the book.)

Rather than prescribing any sort of precise measurement of social welfare, my book operates at the level of general principles:  We have reasons to believe that inefficiencies may arise when conditions are thus; there is a range of potential government responses to this situation—from doing nothing, to facilitating a privately ordered solution, to mandating various actions; based on our experience with these different interventions, the likely downsides of each (stemming from, for example, the knowledge problem and public choice concerns) are so-and-so; all things considered, the aggregate welfare of the individuals within this group will probably be greatest with policy x.
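
Stated a bit more formally (this is my own gloss on the book’s announced goal of minimizing the sum of error and decision costs, not an equation that appears in the book), the selection exercise amounts to choosing, from the set of feasible responses (including doing nothing), the policy that minimizes expected error costs plus decision costs:

\[
x^{*} \;=\; \arg\min_{x \in R} \, \bigl[\, EC(x) + DC(x) \,\bigr]
\]

where R denotes the candidate interventions, EC(x) the expected error costs of policy x (welfare losses from both unremedied market failures and government failures such as the knowledge problem and capture), and DC(x) the costs of reaching, implementing, and administering the decision itself.  The notation is mine; the book states the principle verbally.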

It is true that the thrust of the book is consequentialist, not deontological.  But it’s a book about policy, not ethics.  And its version of consequentialism is rule, not act, utilitarianism.  Is a consequentialist approach to policymaking enough to render one a progressive?  Should we excise John Stuart Mill’s On Liberty from the classical liberal canon?  I surely hope not.

Is My Proposed Approach an Improvement?

Mr. Davis’s second major criticism of my book—that what it proposes is “just the status quo”—has more bite.  By that, I mean two things.  First, it’s a more painful criticism to receive.  It’s easier for an author to hear “you’re saying something wrong” than “you’re not saying anything new.”

Second, there may be more merit to this criticism.  As Mr. Davis observes, I noted in the book’s introduction that “[a]t times during the drafting, I … wondered whether th[e] book was ‘original’ enough.”  I ultimately concluded that it was because it “br[ought] together insights of legal theorists and economists of various stripes…and systematize[d] their ideas into a unified, practical approach to regulating.”  Mr. Davis thinks I’ve overstated the book’s value, and he may be right.

The current regulatory landscape would suggest, though, that my book’s approach to selecting among potential regulatory policies isn’t “just the status quo.”  The approach I recommend would generate the specific policies catalogued at the outset of this response (in the bullet points).  The fact that those policies haven’t been implemented under the existing regulatory approach suggests that what I’m recommending must be something different than the status quo.

Mr. Davis observes—and I acknowledge—that my recommended approach resembles the review required of major executive agency regulations under Executive Order 12866, President Clinton’s revised version of President Reagan’s Executive Order 12291.  But that order is quite limited in its scope.  It doesn’t cover “minor” executive agency rules (those with expected costs of less than $100 million) or rules from independent agencies or from Congress or from courts or at the state or local level.  Moreover, I understand from talking to a former administrator of the Office of Information and Regulatory Affairs, which is charged with implementing the order, that it has actually generated little serious consideration of less restrictive alternatives, something my approach emphasizes.

What my book proposes is not some sort of governmental procedure; indeed, I emphasize in the conclusion that the book “has not addressed … how existing regulatory institutions should be reformed to encourage the sort of analysis th[e] book recommends.”  Instead, I propose a way to think through specific areas of regulation, one that is informed by a great deal of learning about both market and government failures.  The best audience for the book is probably law students who will someday find themselves influencing public policy as lawyers, legislators, regulators, or judges.  I am thus heartened that the book is being used as a text at several law schools.  My guess is that few law students receive significant exposure to Hayek, public choice, etc.

So, who knows?  Perhaps the book will make a difference at the margin.  Or perhaps it will amount to sound and fury, signifying nothing.  But I don’t think a classical liberal could fairly say that the analysis it counsels “is not clearly better than the status quo.”

A Truly Better Approach to Regulating

Mr. Davis ends his review with a stirring call to revamp the administrative state to bring it “in complete and consistent compliance with the fundamental law of our republic embodied in the Constitution, with its provisions interpreted to faithfully conform to their original public meaning.”  Among other things, he calls for restoring the separation of powers, which has been erased in agencies that combine legislative, executive, and judicial functions, and for eliminating unchecked government power, which results when the legislature delegates broad rulemaking and adjudicatory authority to politically unaccountable bureaucrats.

Once again, I concur.  There are major problems—constitutional and otherwise—with the current state of administrative law and procedure.  I’d be happy to tear down the existing administrative state and begin again on a constitutionally constrained tabula rasa.

But that’s not what my book was about.  I deliberately set out to write a book about the substance of regulation, not the process by which rules should be imposed.  I took that tack for two reasons.  First, there are numerous articles and books, by scholars far more expert than I, on the structure of the administrative state.  I could add little value on administrative process.

Second, the less-addressed substantive question—what, as a substantive matter, should a policy addressing x do?—would exist even if Mr. Davis’s constitutionally constrained regulatory process were implemented.  Suppose that we got rid of independent agencies, curtailed delegations of rulemaking authority to the executive branch, and returned to a system in which Congress wrote all rules, the executive branch enforced them, and the courts resolved any disputes.  Someone would still have to write the rule, and that someone (or group of people) should have some sense of the pros and cons of one approach over another.  That is what my book seeks to provide.

A hard core Hayekian—one who had immersed himself in Law, Legislation, and Liberty—might respond that no one should design regulation (purposive rules that Hayek would call thesis) and that efficient, “purpose-independent” laws (what Hayek called nomos) will just emerge as disputes arise.  But that is not Mr. Davis’s view.  He writes:

A system of governance or regulation based on the rule of law attains its policy objectives by proscribing actions that are inconsistent with those objectives.  For example, this type of regulation would prohibit a regulated party from discharging a pollutant in any amount greater than the limiting amount specified in the regulation.  Under this proscriptive approach to regulation, any and all actions not specifically prohibited are permitted.

Mr. Davis has thus contemplated a purposive rule, crafted by someone.  That someone should know the various policy options and the upsides and downsides of each.  How to Regulate could help.

Conclusion

I’m not sure why Mr. Davis viewed my book as no more than dressed-up progressivism.  Maybe he was triggered by the book’s cover art, which he says “is faithful to the progressive tradition,” resembling “the walls of public buildings from San Francisco to Stalingrad.”  Maybe it was a case of Sunstein Derangement Syndrome.  (Progressive legal scholar Cass Sunstein had nice things to say about the book, despite its criticisms of a number of his ideas.)  Or perhaps it was that I used the term “market failure.”  Many conservatives and libertarians fear, with good reason, that conceding the existence of market failures invites all sorts of government meddling.

At the end of the day, though, I believe we classical liberals should stop pretending that market outcomes are always perfect, that pure private ordering is always and everywhere the best policy.  We should certainly sing markets’ praises; they usually work so well that people don’t even notice them, and we should point that out.  We should continually remind people that government interventions also fail—and in systematic ways (e.g., the knowledge problem and public choice concerns).  We should insist that a market failure is never a sufficient condition for a governmental fix; one must always consider whether the cure will be worse than the disease.  In short, we should take and promote the view that government should operate “under a presumption of error.”

That view, economist Aaron Director famously observed, is the essence of laissez faire.  It’s implicit in the purpose statement of the Federalist Society’s Regulatory Transparency Project.  And it’s the central point of How to Regulate.

So let’s go easy on the friendly fire.

I remain deeply skeptical of any antitrust challenge to the AT&T/Time Warner merger.  Vertical mergers like this one between a content producer and a distributor are usually efficiency-enhancing.  The theories of anticompetitive harm here rely on a number of implausible assumptions — e.g., that the combined company would raise content prices (currently set at profit-maximizing levels so that any price increase would reduce profits on content) in order to impair rivals in the distribution market and enhance profits there.  So I’m troubled that DOJ seems poised to challenge the merger.

I am, however, heartened — I think — by a speech Assistant Attorney General Makan Delrahim recently delivered at the ABA’s Antitrust Fall Forum. The crux of the speech, which is worth reading in its entirety, was that behavioral remedies — effectively having the government regulate a merged company’s day-to-day business decisions — are almost always inappropriate in merger challenges.

That used to be DOJ’s official position.  The Antitrust Division’s 2004 Remedies Guide proclaimed that “[s]tructural remedies are preferred to conduct remedies in merger cases because they are relatively clean and certain, and generally avoid costly government entanglement in the market.”

During the Obama administration, DOJ changed its tune.  Its 2011 Remedies Guide removed the statement quoted above as well as an assertion that behavioral remedies would be appropriate only in limited circumstances.  The 2011 Guide instead remained neutral on the choice between structural and conduct remedies, explaining that “[i]n certain factual circumstances, structural relief may be the best choice to preserve competition.  In a different set of circumstances, behavioral relief may be the best choice.”  The 2011 Guide also deleted the older Guide’s discussion of the limitations of conduct remedies.

Not surprisingly in light of the altered guidance, several of the Obama DOJ’s merger challenges—Ticketmaster/Live Nation, Comcast/NBC Universal, and Google/ITA Software, for example—resulted in settlements involving detailed and significant regulation of the combined firm’s conduct.  The settlements included mandatory licensing requirements, price regulation, compulsory arbitration of pricing disputes with recipients of mandated licenses, obligations to continue to develop and support certain products, the establishment of informational firewalls between divisions of the merged companies, prohibitions on price and service discrimination among customers, and various reporting requirements.

Settlements of such sort move antitrust a long way from the state of affairs described by then-professor Stephen Breyer, who wrote in his classic book Regulation and Its Reform:

[I]n principle the antitrust laws differ from classical regulation both in their aims and in their methods.  The antitrust laws seek to create or maintain the conditions of a competitive marketplace rather than replicate the results of competition or correct for the defects of competitive markets.  In doing so, they act negatively, through a few highly general provisions prohibiting certain forms of private conduct.  They do not affirmatively order firms to behave in specified ways; for the most part, they tell private firms what not to do . . . .  Only rarely do the antitrust enforcement agencies create the detailed web of affirmative legal obligations that characterizes classical regulation.

I am pleased to see Delrahim signaling a move away from behavioral remedies.  As Alden Abbott and I explained in our article, Recognizing the Limits of Antitrust: The Roberts Court Versus the Enforcement Agencies,

[C]onduct remedies present at least four difficulties from a limits of antitrust perspective.  First, they may thwart procompetitive conduct by the regulated firm.  When it comes to regulating how a firm interacts with its customers and rivals, it is extremely difficult to craft rules that will ban the bad without also precluding the good.  For example, requiring a merged firm to charge all customers the same price, a commonly imposed conduct remedy, may make it hard for the firm to serve clients who impose higher costs and may thwart price discrimination that actually enhances overall market output.  Second, conduct remedies entail significant direct implementation costs.  They divert enforcers’ attention away from ferreting out anticompetitive conduct elsewhere in the economy and require managers of regulated firms to focus on appeasing regulators rather than on meeting their customers’ desires.  Third, conduct remedies tend to grow stale.  Because competitive conditions are constantly changing, a conduct remedy that seems sensible when initially crafted may soon turn out to preclude beneficial business behavior.  Finally, by transforming antitrust enforcers into regulatory agencies, conduct remedies invite wasteful lobbying and, ultimately, destructive agency capture.

The first three of these difficulties are really aspects of F.A. Hayek’s famous knowledge problem.  I was thus particularly heartened by this part of Delrahim’s speech:

The economic liberty approach to industrial organization is also good economic policy.  F. A. Hayek won the 1974 Nobel Prize in economics for his work on the problems of central planning and the benefits of a decentralized free market system.  The price system of the free market, he explained, operates as a mechanism for communicating disaggregated information.  “[T]he ultimate decisions must be left to the people who are familiar with the[] circumstances.”  Regulation, I humbly submit in contrast, involves an arbiter unfamiliar with the circumstances that cannot possibly account for the wealth of information and dynamism that the free market incorporates.

So why the reservation in my enthusiasm?  Because eschewing conduct remedies may result in barring procompetitive mergers that might have been allowed with behavioral restraints.  If antitrust enforcers are going to avoid conduct remedies on Hayekian and Public Choice grounds, then they should challenge a merger only if they are pretty darn sure it presents a substantial threat to competition.

Delrahim appears to understand the high stakes of a “no behavioral remedies” approach to merger review:  “To be crystal clear, [having a strong presumption against conduct remedies] cuts both ways—if a merger is illegal, we should only accept a clean and complete solution, but if the merger is legal we should not impose behavioral conditions just because we can do so to expand our power and because the merging parties are willing to agree to get their merger through.”

The big question is whether the Trump DOJ will refrain from challenging mergers that do not pose a clear and significant threat to competition and consumer welfare.  On that matter, the jury is out.