Archives For Markets

[The following is adapted from a piece in the Economic Forces newsletter, which you can subscribe to on Substack.]

Everyone is worried about growing concentration in U.S. markets. President Joe Biden’s July 2021 executive order on competition begins with the assertion that “excessive market concentration threatens basic economic liberties, democratic accountability, and the welfare of workers, farmers, small businesses, startups, and consumers.” No word on the threat of concentration to baby puppies, but the takeaway is clear. Concentration is everywhere, and it’s bad.

On the academic side, Ufuk Akcigit and Sina Ates have an interesting paper on “ten facts”—worrisome facts, in my reading—about business dynamism. Fact No. 1: “Market concentration has risen.” Can’t get higher than No. 1, last time I checked.

Unlike most people commenting on concentration, I don’t see any reason to treat high or rising concentration as a bad thing in itself (although it may be a sign of problems). One key takeaway from industrial organization is that high concentration tells us nothing about levels of competition and so has no direct normative implication. I bring this up all the time (see 1, 2, 3, 4).

So without worrying about whether rising concentration is a good or bad thing, this post asks, “is rising concentration a thing?” Is there any there there? Where is it rising? For what measures? Just the facts, ma’am.

How to Measure Concentration

I will focus here primarily on product-market concentration and save labor-market concentration for a later post. The following is a brief literature review. I do not cover every paper. If I missed an important one, tell me in the comments.

There are two steps to calculating concentration. First, define the market. In empirical work, a market usually includes the product sold or the input bought (e.g., apples) and a relevant geographic region (United States). With those two bits of information decided, we have a “market” (apples sold in the United States).

Once we have defined the relevant market, we need a measure of concentration within that market. The most straightforward measure is the concentration ratio of some number of firms. If you see “CR4,” it refers to the percentage of total sales in the market that goes to the four largest firms. One problem with this measure is that CR4 ignores everything about the fifth-largest and smaller firms.

The other option used to quantify concentration is the Herfindahl-Hirschman index (HHI), the sum of squared market shares. It is a number between 0 and 10,000 (or 0 and 1, if shares are expressed as fractions), with 10,000 meaning all of the sales go to one firm and 0 being the limit as more and more firms each hold smaller and smaller shares. The benefit of the HHI is that it uses information on the whole distribution of firms, not just the top few.[1]
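To make the two measures concrete, here is a minimal sketch (in Python, with made-up market shares) of how CR4 and the HHI are computed:

```python
# Hypothetical market shares (percent of total sales), one entry per firm.
shares = [30, 25, 15, 10, 8, 7, 5]

# CR4: total share of the four largest firms.
cr4 = sum(sorted(shares, reverse=True)[:4])  # 80

# HHI: sum of squared shares, from ~0 (atomistic market) to 10,000 (monopoly).
hhi = sum(s ** 2 for s in shares)  # 1,988

print(f"CR4 = {cr4}%, HHI = {hhi}")
```

Note that any market with the same four largest firms has the same CR4, while the HHI also moves if the smaller firms merge or split.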

The Biggest Companies

With those preliminaries out of the way, let’s start with concentration among the biggest firms over the longest time-period and work our way to more granular data.

When people think of “corporate concentration,” they think of the giant companies like Standard Oil, Ford, Walmart, and Google. People maybe even picture a guy with a monocle, that sort of thing.

How much of total U.S. sales go to the biggest firms? How has that changed over time? These questions are the focus of Spencer Y. Kwon, Yueran Ma, and Kaspar Zimmermann’s (2022) “100 Years of Rising Corporate Concentration.”

Spoiler alert: they find rising corporate concentration. But what does that mean?

They look at the share of assets and sales concentrated among the largest 1% and 0.1% of businesses. For sales, due to data limitations, they need to use net income (excluding firms with negative net income) for the first half of the sample period and receipts (sales) for the second half.

In 1920, the top 1% of firms had about 60% of total sales. Now, that number is above 80%. For the top 0.1%, the number rose from about 35% to 65%. Asset concentration is even more striking, rising to almost 100% for the top 1% of firms.

Kwon, Ma, and Zimmermann (2022)

Is this just mechanical from the definitions? That was my first concern. Suppose a bunch of small firms enter that have no effect on the economy. Everyone starts a Substack that makes no money. 🤔 This mechanically bumps big firms in the top 1.1% into the top 1% and raises the measured share. The authors have thought about this more than my two minutes of reading, so they did something simple.

The simple comparison is to limit the economy to just the top 10% of firms. What share goes to the top 1%? In that world, when small firms enter, there is still a bump from the top 1.1% to 1%, but there is also a bump from 10.1% to 10%. Both the numerator and denominator of the ratio are mechanically increasing. That doesn’t perfectly solve the issue, since the bump to the 1.1% firm is, by definition, bigger than the bump from the 10.1% firm, but it’s a quick comparison. Still, we see a similar rise in the top 1%.
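Here is a rough simulation of that mechanical effect (my own made-up numbers, not the paper’s data):

```python
import numpy as np

rng = np.random.default_rng(0)

def top_share(sales, pct):
    """Share of total sales going to the top pct% of firms."""
    n = max(1, int(len(sales) * pct / 100))
    return np.sort(sales)[::-1][:n].sum() / sales.sum()

# Hypothetical economy: 10,000 firms with heavy-tailed (lognormal) sales.
sales = rng.lognormal(mean=0, sigma=2, size=10_000)
print(f"Top 1% share before entry: {top_share(sales, 1):.3f}")

# 5,000 zero-revenue Substacks enter. Nothing real changes, but the top 1%
# now contains 150 firms instead of 100, so the measured share rises.
sales_after = np.concatenate([sales, np.full(5_000, 1e-9)])
print(f"Top 1% share after entry: {top_share(sales_after, 1):.3f}")

# The authors' check: restrict the economy to the top 10% of firms and ask
# what share goes to the top 1% of all firms (the top 10% of that subset).
top10 = np.sort(sales)[::-1][: len(sales) // 10]
print(f"Top 1% share within the top 10%: {top_share(top10, 10):.3f}")
```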

Big companies are getting bigger, even relatively.

I’m not sure how much weight to put on this paper for thinking about concentration trends. It’s an interesting paper, and that’s why I started with it. But I’m very hesitant to think of “all goods and services in the United States” as a relevant market for any policy question, especially antitrust-type questions, which is where we see the most talk about concentration. But if you’re interested in corporate concentration influencing politics, these numbers may be super relevant.

At the industry level, which is closer to an antitrust market but still not one, they find similar trends. The paper’s website (yes, the paper has a website. Your papers don’t?) has a simple display of the industry-level trends. They match the aggregate change, but the timing differs.

Industry-Level Concentration Trends, Public Firms

Moving down from big to small, we can start asking about publicly traded firms. These tend to be larger firms, but the category doesn’t capture all firms and is biased, as I’ve pointed out before.

Grullon, Larkin, and Michaely (2019) look at the average HHI at the 3-digit NAICS level (for example, oil and gas is “a market”). Below is the plot of the (sales-weighted) average HHI for publicly traded firms. It dropped in the 80s and early 90s, rose rapidly in the late 90s and early 2000s, and has slowly risen since. I’d say “concentration is rising” is the takeaway.

Average publicly traded HHI (3-digit NAICS) from Grullon, Larkin, and Michaely (2019)

The average hides how the distribution has changed. For antitrust, we may care whether a few industries have seen a large increase in concentration or all industries have seen a small increase.

The figure below plots the distribution of changes between 1997 and 2012. Many industries have seen a large increase (>40%) in the HHI. We get a similar picture if we look at the share of sales going to the top four firms.

Distribution of changes in publicly traded HHI (3-digit NAICS) between 1997 and 2012, from Grullon, Larkin, and Michaely (2019)

One issue with NAICS is that it was designed to lump firms together from a producer’s perspective, not the consumer’s perspective. We will say more about that below.

Another issue with Compustat is that we only have industry at the firm level, not the establishment level. For example, every 3M office or plant gets labeled as “Miscellaneous Manufactured Commodities,” so the data don’t separate the plants that make tape (like the one in my hometown) from those that make surgical gear.

But firms increasingly do business across wider and wider ranges of products. That may not matter if you’re worried about political corruption from concentration. But if you’re thinking about markets, it seems problematic that, in Compustat, all of Amazon’s web services (cloud servers) revenue gets lumped into NAICS 454, “Nonstore Retailers,” since that’s Amazon’s firm-level designation.

Hoberg and Phillips (2022) try to account for this increasing “scope” of businesses. They make an adjustment to allow a firm to exist in multiple industries. After making this correction, they find a falling average HHI.

Hoberg and Phillips (2021)

Industry-Level Concentration Trends, All Firms

Why stick to just publicly traded firms? That could be especially problematic, since we know that the set of public firms differs from the set of private firms, and the differences have changed over time. Public firms compete with private firms and so are in the same market for many questions.

And we have data on public and private firms. Well, I don’t. I’m stuck with Compustat data. But big names have the data.

Autor, Dorn, Katz, Patterson, and Van Reenen (2020), in their famous “superstar firms” paper, have U.S. Census panel data at the firm and establishment level, covering six major sectors: manufacturing, retail trade, wholesale trade, services, utilities and transportation, and finance. They focus on the share of the top 4 (CR4) or the top 20 (CR20) firms, both in terms of sales and employment. Every series, besides employment in manufacturing, has seen an increase. In retail, there has been nearly a doubling of the sales share to the top 4 firms.

Autor, Dorn, Katz, Patterson, and Van Reenen (2020)

I guess that settles it. Three major papers show the same trend. It’s settled… If only economic trends were so simple.

What About Narrower Product Markets?

For antitrust cases, we define markets slightly differently. We don’t use NAICS codes, since they are designed to lump together similar producers, not similar products. We also don’t use the six “major industries” in the Census, since those are also too large to be meaningful for antitrust. Instead, the product level is much smaller.

Luckily, Benkard, Yurukoglu, and Zhang (2021) construct concentration measures that are intended to capture consumption-based product markets. They have respondent-level data from the annual “Survey of the American Consumer” available from MRI Simmons, a market-research firm. The survey asks specific questions about which brands consumers buy.

They sort products into 457 product-market categories, separated into 29 locations. Product “markets” are then aggregated into “sectors.” Another interesting feature is that they know the ownership of different products, even if the brand names differ. Ownership is what matters for antitrust.

They find falling concentration at the market level (the narrowest product), both at the local and the national level. At the sector level (which aggregates markets), there is a slight increase.

Benkard, Yurukoglu, and Zhang (2021)

If you focus on industries with an HHI above 2,500, the level considered “highly concentrated” in the U.S. Horizontal Merger Guidelines, the share of “highly concentrated” industries fell from 48% in 1994 to 39% in 2019. I’m not sure how seriously to take this threshold, since the merger guidelines take a different approach to defining markets. Overall, the authors say, “we find no evidence that market power (sic) has been getting worse over time in any broad-based way.”

Is the United States a Market?

Markets are local

Benkard, Yurukoglu, and Zhang make an important point about location. In what situations is the United States the appropriate geographic region? The U.S. housing market is not a meaningful market. If my job and family are in Minnesota, I’m not considering buying a house in California. Those are different markets.

While the first few papers above focused on concentration in the United States as a whole or within U.S. companies, is that really the appropriate market? Maybe markets are much more localized, and the trends could be different.

Along comes Rossi-Hansberg, Sarte, and Trachter (2021) with a paper titled “Diverging Trends in National and Local Concentration.” In that paper, they argue that there are, you guessed it, diverging trends in national and local concentration. If we look at concentration at different geographic levels, we get a different story. Their main figure shows that, as we move to smaller geographic regions, concentration goes from rising over time to falling over time.

Figure 1 from Rossi-Hansberg, Sarte, and Trachter (2020)

How is it possible to have such a different story depending on the geographic level?

Imagine a world where each town has its own department store. At the national level, concentration is low, but each town has high concentration. Now Walmart enters the picture and sets up shop in 10,000 towns. That increases national concentration while reducing local concentration, since each town goes from one store to two. That sort of dynamic seems plausible, and the authors spend a lot of time discussing Walmart.
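To see the arithmetic, here is a tiny worked example (invented numbers) in the same spirit:

```python
def hhi(sales):
    """HHI from a list of firms' sales: shares in percent, squared and summed."""
    total = sum(sales)
    return sum((100 * s / total) ** 2 for s in sales)

# 100 identical towns. Before: one local department store per town (sales of 10).
# After: Walmart opens everywhere and takes half of each town's sales.
local_before = [10.0]       # one store per town
local_after = [5.0, 5.0]    # local store + Walmart

print(hhi(local_before), hhi(local_after))  # 10,000 -> 5,000: local HHI falls

# Nationally: 100 independent stores become 100 stores plus one chain
# holding half of all sales.
national_before = [10.0] * 100
national_after = [5.0] * 100 + [500.0]

print(hhi(national_before), hhi(national_after))  # 100 -> 2,525: national rises
```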

The paper was really important because it pushed people to think more carefully about the type of concentration they wanted to study. Just because data tend to be available at the national level doesn’t mean the national level is the appropriate market.

As with all these papers, however, the data source matters. There are a few concerns with the “National Establishment Time Series” (NETS) data used, as outlined in Crane and Decker (2020). Lots of the data are imputed, meaning they were originally missing and then filled in with statistical techniques. Almost every Walmart store has exactly the median sales-to-worker ratio, which suggests the data start with the number of workers and impute the sales data from there. That’s fine if you are interested in worker concentration, but this paper is about sales.

Instead of relying on NETS data, Smith and Ocampo (2022) use Census data on product-level revenue for all U.S. retail stores between 1992 and 2012. The downside is that it covers only retail, but that’s an important sector, and it can help us make sense of the “Walmart enters town” concentration story.

Unlike Rossi-Hansberg, Sarte, and Trachter, Smith and Ocampo find rising concentration at both the local and national levels, though the magnitude depends on the exact specification: they find changes in local concentration between -1.5 and 12.6 percentage points. Regardless, the -17 percentage points of Rossi-Hansberg, Sarte, and Trachter is well outside their estimates. To me, that suggests we should be careful with the “declining local concentration” story.

Smith and Ocampo (2022).

Ultimately, for local stories, data availability is the limitation. Take all of the data issues at the aggregate level and then try to drill down to the ZIP code or city level. It’s tough. The data generally don’t exist, outside of Census data for a few sectors. The other option is to dig into a particular industry: Miller, Osborne, Sheu, and Sileo (2022) study the cement industry. 😱 (They find rising concentration.)

Markets are global

Instead of going more local, what if we go the other way? What makes markets unique in 2022 vs. 1980 is not that they are local but that they are global. Who cares if U.S. manufacturing is more concentrated if U.S. firms now compete in a global market?

The standard approach (used in basically all the papers above) computes market shares based on where the good was manufactured and doesn’t look at where the goods end up. (Compustat data is more of a mess because it includes lots of revenue from foreign establishments of U.S. firms.)

What happens when we look at where goods are ultimately sold? Again, that’s relevant for antitrust. Amiti and Heise (2021) augment the usual Census of Manufacturers with transaction-level trade data from the Longitudinal Firm Trade Transactions Database (LFTTD) of the Census Bureau. They see U.S. customs forms. Stripping out goods produced here but sold abroad gives an “export-adjusted” measure of concentration.

They then do something similar for imports to come up with “market concentration,” their measure of concentration for all firms selling in the U.S., irrespective of where the firm is located. That line is completely flat from 1992 to 2012.
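Here is a stylized version of the adjustment (all numbers invented): production-based shares count everything made in the U.S., while consumption-based shares drop exports and add importers.

```python
# Hypothetical U.S. producers: (U.S. production, of which exported).
producers = {"FirmA": (100, 40), "FirmB": (50, 0)}
# Hypothetical foreign sellers' sales into the U.S.
imports = {"ForeignCo": 60}

# Production-based shares (the standard approach) ignore trade.
prod_total = sum(p for p, _ in producers.values())
prod_shares = {f: p / prod_total for f, (p, _) in producers.items()}

# Consumption-based ("market") shares: subtract exports, add importers.
domestic = {f: p - x for f, (p, x) in producers.items()}
domestic.update(imports)
dom_total = sum(domestic.values())
market_shares = {f: s / dom_total for f, s in domestic.items()}

print(prod_shares)    # FirmA: 0.67, FirmB: 0.33
print(market_shares)  # FirmA: 0.35, FirmB: 0.29, ForeignCo: 0.35
```

The same firms look much less dominant once the market is defined by where the goods are sold.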

Again, this is only manufacturing, but it is a striking example of how careful we need to be with our measures of concentration. This seems like a very important correction for most questions and for many industries. Tech is clearly a global market.

Conclusion

If I step back from all of these results, I think it is safe to say that concentration is rising by most measures. However, there are lots of caveats. In a sector like manufacturing, the relevant global market is not more concentrated. The Rossi-Hansberg, Sarte, and Trachter paper suggests, despite data issues, local concentration could be falling. Again, we need to be careful.

Alex Tabarrok says trust literatures, not papers. What does that imply here?

Take the last paper, by Amiti and Heise. Yes, it covers only one sector, but in the one sector where we have the import/export correction, the concentration results flip. That leaves me unsure of what is going on.


[1] There’s often a third step. If we are interested in what is going on in the overall economy, we need to somehow average across different markets. There is sometimes debate about how to average a bunch of HHIs. Let’s not worry too much about that for purposes of this post. Generally, if you’re looking at the concentration of sales, the industries are weighted by sales.
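For what it’s worth, here is what that averaging step looks like (hypothetical industry numbers):

```python
# (industry HHI, industry sales) pairs: hypothetical numbers.
industries = [(1500, 400), (3000, 100), (800, 500)]

total_sales = sum(sales for _, sales in industries)
weighted_avg = sum(h * sales for h, sales in industries) / total_sales
simple_avg = sum(h for h, _ in industries) / len(industries)

print(weighted_avg, simple_avg)  # 1300.0 vs. ~1766.7
```

The weighting can matter a lot when the most concentrated industries are small.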

Speaking about his new book in a ProMarket interview, David Dayen inadvertently captures what is perhaps the essential disconnect between antitrust reformers (populists, neo-Brandeisians, hipsters, whatever you may call them) and those of us who are more comfortable with the antitrust status quo (whatever you may call us). He says: “The antitrust doctrine that we’ve seen over the last 40 years simply does not match the lived experience of people.”

Narratives of Consumer Experience of Markets

This emphasis on “lived experience” runs through Dayen’s antitrust perspective. Citing to Hal Singer’s review of the book, the interview notes that “the heart of Dayen’s book is the personal accounts of ordinary Americans—airline passengers, hospital patients, farmers, and small business owners—attempting to achieve a slice of the American dream and facing insurmountable barriers in the form of unaccountable private monopolies.” As Singer notes in his review, “Dayen’s personalized storytelling, free of any stodgy regression analysis, is more likely to move policymakers” than are traditional economic arguments.

Dayen’s focus on individual narratives — of the consumer’s lived experience — is fundamentally different from the traditional antitrust economist’s perspective on competition and the market. It is worth exploring the differences between the two. The basic argument that I make below is that Dayen is right, but also that he misunderstands the purpose of competition in a capitalist economy. A robustly competitive market is a brutal rat race that places each individual on an accelerating treadmill. There is no satiation or satisfaction for the individual consumer in these markets. But it is this very lack of satisfaction, this endless thirst for more, that makes competitive markets so powerful, and ultimately beneficial, for consumers.

This is the fundamental challenge and paradox of capitalism. Satisfaction requires a perspective that most consumers rarely have and that many never will. It requires the ability to step off that treadmill occasionally and to look at how far society and individual welfare have come, even if one feels like one has not moved at all. It requires recognizing that the alternative to an uncomfortable flight to visit family isn’t a comfortable one, but an unaffordable one; that the alternative to low-cost, processed foods isn’t abundant higher-quality food, but greater poverty for those who already can least afford food; that the alternative to a startup being beholden to Google’s and Amazon’s terms of service isn’t a market in which it has boundless access to these platforms’ infrastructure, but one in which each startup needs to engineer its own infrastructure from scratch. In all of these cases, the fundamental tradeoff is between having something that is less perfect than an imagined ideal of it, and not having it at all.

What Dayen refers to as consumers’ “lived experience” is really their “perceived experience.” This is important to how markets work. Competition is driven by consumers’ perception that things could be better (and by entrepreneurs’ perception that they can make it so). This perception is what keeps us on the treadmill. Consumers don’t look to past generations and say, “wow, by nearly every measure my life can be better than theirs with less effort!” They focus on what they don’t yet have, on the seemingly better lives of their contemporaries.

This description of markets may sound grotesquely dehumanizing. To the extent that it really is, that is because we live in a world of scarcity. There will always be tradeoffs, and in a very real sense no consumer will ever have everything that she needs, let alone everything that she wants.

On the flip side, this is what drives markets to make consumers better off. Consumers’ wants drive producers’ factories and innovators’ minds. There is no supply curve without a demand curve. And consumers are able to satisfy their own needs by becoming producers who work to satisfy the wants and needs of others. 

A Fair Question: Are Markets Worth It?

Dayen’s perspective on this description of markets, shared with his fellow reform-minded antitrust crusaders, is that the typical consumer’s perceived experience of the market demonstrates that markets don’t work — that they have been captured by monopolists seeking to extract every ounce of revenue from each individual consumer. But this is not a story of monopolies. It is more plainly the story of markets. What Dayen identifies as a problem with markets really is just markets working as they are supposed to.

If this is just how markets work, it is fair to ask whether they are worth it. Importantly, those of us who answer “yes” need not be blind to or dismissive of concerns such as Dayen’s — to the concerns of the typical consumer. Economists have long recognized that capitalist markets are about allocative efficiency, not distributive efficiency — about making society as a whole as wealthy as possible but not about making sure that that wealth is fairly distributed. 

The antitrust reform movement is driven by advocates who long for a world in which everyone is poorer but feels more equal, as opposed to what they perceive as a world in which a few monopolists are extremely wealthy and everyone else feels poor. Their perception of this as the but-for world is not unreasonable, but it is also not accurate. The better world is the one with thriving, prosperous markets, in which consumers broadly feel that they share in this prosperity. It may be the case that such a world has some oligopolies and even monopolies — that is what economic efficiency sometimes looks like.

But those firms’ prosperity need not be adverse to consumers’ experience of the market. The challenging question is how we achieve this outcome. But that is a question of politics and macroeconomic policy, and of corporate social policy. It is a question of national identity, whether consumers’ perception of the economic treadmill can pivot from one of perceived futility to one of recognizing their lived contributions to society. It is one that antitrust law as it exists today contributes to answering, but not one that antitrust law on its own can ever answer.

On the other hand, were we to follow the populists’ lead and turn antitrust into a remedy for the perceived maladies of the market, we would put at risk the very engine that improves consumers’ actual lived experience. The alternative to an antitrust driven by economic analysis, one that errs on the side of not disrupting markets in response to perceived injuries, is an antitrust in which markets are beholden to the whims of politicians and enforcement officials. This is a world in which litigation is used by politicians to make it appear that they are delivering on impossible promises, in which litigation is used to displace blame for politicians’ policy failures, and in which litigation is used to distract from socio-political events entirely unrelated to the market.

Concerns such as Dayen’s are timeless and not unreasonable. But reflexive action is not the answer to such concerns. Rather, the response always must be to ask “opposed to what?” What is the but-for world? Here, Dayen and his peers commit both Type I and Type II errors. They misdiagnose antitrust and non-competitive markets as the cause of their perceived problems. And they are overly confident in their proposed solutions to those problems, not recognizing the real harms that their proposed politicization of antitrust and markets poses.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Dirk Auer (Senior Fellow of Law & Economics, ICLE); Eric Fruits (Chief Economist, ICLE; Adjunct Professor of Economics, Portland State University); and Kristian Stout (Associate Director, ICLE).]

The COVID-19 pandemic is changing the way consumers shop and the way businesses sell. These shifts in behavior, designed to “flatten the curve” of infection through social distancing, are happening across many (if not all) markets. But in many cases, it’s impossible to know now whether these new habits are actually achieving the desired effect. 

Take a seemingly silly example from Oregon. The state is one of only two in the U.S. that prohibit self-serve gas. In response to COVID-19, the state fire marshal announced it would temporarily suspend its enforcement of the prohibition. Public opinion fell into two broad groups. Those who want the option to pump their own gas argue that self-serve reduces the interaction between station attendants and consumers, thereby potentially reducing the spread of coronavirus. On the other hand, those who support the prohibition on self-serve have blasted the fire marshal’s announcement, arguing that all those dirty fingers pressing keypads and all those grubby hands on fuel pumps will likely increase the spread of the virus.

Both groups may be right, but no one yet knows the net effect. We can only speculate. This picture becomes even more complex when considering other, alternative policies. For instance, would it be more effective for the state of Oregon to curtail gas station visits by forcing the closure of stations? Probably not. Would it be more effective to reduce visits through some form of rationing? Maybe. Maybe not. 

Policymakers will certainly struggle to efficiently decide how firms and consumers should minimize the spread of COVID-19. That struggle is an extension of Hayek’s knowledge problem: policymakers don’t have adequate knowledge of alternatives, preferences, and the associated risks. 

A Hayekian approach — relying on bottom-up rather than top-down solutions to the problem — may be the most appropriate solution. Allowing firms to experiment and iteratively find solutions that work for their consumers and employees (potentially adjusting prices and wages in the process) may be the best that policymakers can do.

The case of online retail platforms

One area where these complex tradeoffs are particularly acute is that of online retail. In response to the pandemic, many firms have significantly boosted their online retail capacity. 

These initiatives have been met with a mix of enthusiasm and disapproval. On the one hand, online retail enables consumers to purchase “essential” goods with a significantly reduced risk of COVID-19 contamination. It also allows “non-essential” goods to be sold despite the closure of brick-and-mortar stores. At first blush, this seems like a win-win for consumers and retailers of all sizes, with large retailers ramping up their online operations and independent retailers switching to online platforms such as Amazon.

But there is a potential downside. Even contactless deliveries do present some danger, notably for warehouse workers who run the risk of being infected and subsequently passing the virus on to others. This risk is amplified by the fact that many major retailers, including Walmart, Kroger, CVS, and Albertsons, are hiring more warehouse and delivery workers to meet an increase in online orders. 

This has led some to question whether sales of “non-essential” goods (though the term is almost impossible to define) should be halted. The reasoning is that continuing to supply such goods needlessly puts lives at risk and reduces overall efforts to slow the virus.

Once again, these are incredibly complex questions. It is hard to gauge the overall risk of infection that is produced by the online retail industry’s warehousing and distribution infrastructure. In particular, it is not clear how effective social distancing policies, widely imposed within these workplaces, will be at achieving distancing and, in turn, reducing infections. 

More fundamentally, whatever this risk turns out to be, it is almost impossible to weigh it against an appropriate counterfactual. 

Online retail is not the only area where this complex tradeoff arises. Analogous reasoning could, for instance, also be applied to food delivery platforms. Ordering a meal on UberEats carries some risk, but so do repeated trips to the grocery store. And there are legitimate concerns about the safety of food handlers working in close proximity to each other. These considerations make it hard for policymakers to strike the appropriate balance.

The good news: at least some COVID-related risks are being internalized

But there is also some good news. Firms, consumers and employees all have some incentive to mitigate these risks. 

Consumers want to purchase goods without getting contaminated; employees want to work in safe environments; and firms need to attract both consumers and employees, while minimizing potential liability. These (partially) aligned incentives will almost certainly cause these economic agents to take at least some steps that mitigate the spread of COVID-19. This might notably explain why many firms imposed social distancing measures well before governments started to take notice (here, here, and here). 

For example, one first-order effect of COVID-19 is that it has become more expensive for firms to hire warehouse workers. Not only have firms moved up along the labor supply curve (by hiring more workers), but the curve itself has likely shifted upward, reflecting the increased opportunity cost of warehouse work. Predictably, this has resulted in higher wages for workers: Amazon and Walmart recently increased the wages they pay warehouse workers, as have brick-and-mortar retailers such as Kroger.

Along similar lines, firms and employees will predictably bargain — through various channels — over the appropriate level of protection for those workers who must continue to work in-person.

For example, some companies have found ways to reduce risk while continuing operations:

  • CNBC reports Tyson Foods is using walk-through infrared body temperature scanners to check employees’ temperatures as they enter three of the company’s meat processing plants. Other companies planning to use scanners include Goldman Sachs, UPS, Ford, and Carnival Cruise Lines.
  • Kroger’s Fred Meyer chain of supermarkets is limiting the number of customers in each of its stores to half the occupancy allowed under international building codes. Kroger will use infrared sensors and predictive analytics to monitor the new capacity limits. The company already uses the technology to estimate how many checkout lanes are needed at any given time.
  • Trader Joe’s limits occupancy in its store. Customers waiting to enter are asked to stand six feet apart using marked off Trader Joe’s logos on the sidewalk. Shopping carts are separated into groups of “sanitized” and “to be cleaned.” Each cart is thoroughly sprayed with disinfectant and wiped down with a clean cloth.

In other cases, bargaining over the right level of risk-mitigation has been pursued through more coercive channels, such as litigation and lobbying:

  • A recently filed lawsuit alleges that managers at an Illinois Walmart store failed to alert workers after several employees began showing symptoms of COVID-19. The suit claims Walmart “had a duty to exercise reasonable care in keeping the store in a safe and healthy environment and, in particular, to protect employees, customers and other individuals within the store from contracting COVID-19 when it knew or should have known that individuals at the store were at a very high risk of infection and exposure.” 
  • According to CNBC, a group of legislators, unions, and Amazon employees in New York wrote a letter to CEO Jeff Bezos calling on him to enact greater protections for warehouse employees who continue to work during the coronavirus outbreak. The Financial Times reports worker protests at Amazon warehouses in the US, France, and Italy. Worker protests have also been reported at a Barnes & Noble warehouse, and several McDonald’s locations have been hit with strikes.
  • In many cases, worker concerns about health and safety have been conflated with long-simmering issues of unionization, minimum wage, flexible scheduling, and paid time-off. For example, several McDonald’s strikes were reported to have been organized by “Fight for $15.”

Sometimes, there is simply no mutually-advantageous solution. And businesses are thus left with no other option than temporarily suspending their activities: 

  • For instance, McDonald’s and Burger King have spontaneously closed their restaurants — including drive-thru and deliveries — in many European countries (here and here).
  • In Portland, Oregon, ChefStable, a restaurant group behind some of the city’s best-known restaurants, closed all 20 of its bars and restaurants for at least four weeks. In what he called a “crisis of conscience,” owner Kurt Huffman concluded it would be impossible to maintain safe social distancing for customers and staff.

This is certainly not to say that all is perfect. Employers, employees and consumers may have very strong disagreements about what constitutes the appropriate level of risk mitigation.

Moreover, the questions of balancing worker health and safety with that of consumers become all the more complex when we recognize that consumers and businesses are operating in a dynamic environment, making sometimes fundamental changes to reduce risk at many levels of the supply chain.

Likewise, not all businesses will be able to implement measures that mitigate the risk of COVID-19. For instance, “Big Business” might be in a better position to reduce risks to its workforce than smaller businesses. 

Larger firms tend to have the resources and economies of scale to make capital investments in temperature scanners or sensors. They have larger workforces where employees can, say, shift from stocking shelves to sanitizing shopping carts. Several large employers, including Amazon, Kroger, and CVS have offered higher wages to employees who are more likely to be exposed to the coronavirus. Smaller firms are less likely to have the resources to offer such wage premiums.

For example, Amazon recently announced that it would implement mandatory temperature checks, provide employees with protective equipment, and increase the frequency and intensity of cleaning at all its sites. And, as mentioned above, Tyson Foods announced that it would install temperature scanners at a number of sites. It is not clear whether smaller businesses are in a position to implement similar measures.

That’s not to say that small businesses can’t adjust. It’s just more difficult. For example, a small paint-your-own ceramics shop, Mimosa Studios, had to stop offering painting parties because of government mandated social distancing. One way it’s mitigating the loss of business is with a paint-at-home package. Customers place an order online, and the studio delivers the ceramic piece, paints, and loaner brushes. When the customer is finished painting, Mimosa picks up the piece, fires it, and delivers the finished product. The approach doesn’t solve the problem, but it helps mitigate the losses.

Conclusion

In all likelihood, we can’t actually avoid all bad outcomes. There is, of course, some risk associated with even well-resourced large businesses continuing to operate, even though some of them play a crucial role in coronavirus-related lockdowns. 

Currently, market actors are working within the broad outlines of lockdowns deemed necessary by policymakers. Given the intensely complicated risk calculation necessary to determine if any given individual truly needs an “essential” (or even a “nonessential”) good or service, the best thing that lawmakers can do for now is let properly motivated private actors continue to seek optimal outcomes together within the imposed constraints. 

So far, most individuals and the firms serving them are at least partially internalizing Covid-related risks. The right approach for lawmakers would be to watch this process and determine where it breaks down. Measures targeted to fix those breaches will almost inevitably outperform interventionist planning to determine exactly what is essential, what is nonessential, and who should be allowed to serve consumers in their time of need.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Hal Singer (Managing Director, Econ One; Adjunct Professor, Georgetown University, McDonough School of Business).]

In these harrowing times, it is natural to fixate on the problem of testing—and how the United States got so far behind South Korea on this front—as a means to arrest the spread of the coronavirus. Under this remedy, once testing becomes ubiquitous, the government could track and isolate everyone who has been in recent contact with someone who has been diagnosed with Covid-19.

A good start, but there are several pitfalls to “contact tracing,” or what I call “standalone testing.” First, it creates an outsized role for government and raises privacy concerns relating to how data on our movements and in-person contacts are shared. Second, unless the test results were instantaneously available and continuously updated, data from the tests would not be actionable. A subject could be clear of the virus on Tuesday, get tested on Wednesday, and be exposed to the virus on Friday.

Third, and one easily recognizable to economists, standalone testing does not provide any means by which healthy subjects can credibly signal to their peers that they are now safe to be around. Given how skewed the economy is toward services—from restaurants to gyms and yoga studios to coffee bars—it is vital that we be able to interact physically. To return to work or to enter a restaurant or any other high-density environment, the healthy subject must convey to her peers that she is healthy, and the other co-workers or patrons in that environment must signal their health to the subject. Without this mutual trust, healthy workers will be reluctant to return to the workplace or to reintegrate into society. It is not enough for complete strangers to say “I’m safe.” How do I know you are safe?

As law professor Thom Lambert tweeted, this information problem is related to the famous lemons problem identified by Nobel laureate George Akerlof: We “can’t tell ‘quality’ so we assume everyone’s a lemon and act accordingly. We once had that problem with rides from strangers, but entrepreneurship and technology solved the problem.”

Akerlof recognized that markets were prone to failure in the face of “asymmetric information,” or when a seller knows a material fact that the buyer does not. He showed a market for used cars could degenerate into a market exclusively for lemons, because buyers rationally are not willing to pay the full value of a good car and the discount they would impose on all sellers would drive good cars away.

To solve this related problem, we need a way to verify our good health. Borrowing Lambert’s analogy, most Americans (barring hitchhikers) would never jump in a random car without knowledge that the driver worked for a reputable ride-hailing service or licensed taxi. When an Uber driver pulls up to the curb, the rider can feel confident that the driver has been verified (and vice versa) by a third party—in this case, Uber—and if there’s any doubt of the driver’s credentials, the driver typically speaks the passenger’s name when the door is still ajar. Uber also mitigated the lemons problem by allowing passengers and drivers to engage in reciprocal rating.

Similarly, when a passenger shows up at the airport, he presents a ticket, typically in electronic form on his phone, to a TSA officer. The phone is scanned by security, and verification of ticket and TSA PreCheck status is confirmed via rapid communication with the airline. The same verification is repeated at stadium venues across America, thanks in part to technology developed by StubHub.

A similar verification technology could be deployed to solve the trust problem relating to Coronavirus. It is meant to complement standalone testing. Here’s how it might work:

Each household would have a designated testing center in its community and potentially a test kit in its own home. Testing would be done routinely and free of charge, so as to ensure that test results are up to date. (Given the positive externalities associated with mass testing and verification, the optimal price is not positive.) Just as an airline sends confirmation of a ticket purchase, the company responsible for administering the test would report the results to the subject within an hour, and the result would be stored for 24 hours in the vendor’s app. In contrast to the invasive role of government in contact tracing, the only role for government here would be to approve qualified vendors of the testing equipment.

Armed with third-party verification of her health status on her phone, the subject could present these results to a gatekeeper at any facility. Suppose the subject typically takes the metro to work, and stops at her gym before going home. Under this regime, she would present her phone to three gatekeepers (metro, work, gym) to obtain access. Of course, subjects who test positive for Coronavirus would not gain access to these secure sites until the virus left their system and they subsequently test negative. Seems harsh for them, but imposing this restriction isn’t really a degradation in mobility relative to the status quo, under which access is denied to everyone.
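For the curious, here is one minimal sketch of what the vendor/gatekeeper handshake could look like. Everything in it (the key scheme, the field layout, the 24-hour window) is my own hypothetical illustration of the idea, not part of any actual proposal:

```python
import hashlib
import hmac
import time

# Hypothetical: each government-approved vendor signs results with a key that
# gatekeepers can verify (a real system would use public-key signatures).
VENDOR_KEY = b"approved-vendor-secret"
VALIDITY_SECONDS = 24 * 60 * 60  # results go stale after 24 hours

def issue_credential(subject_id: str, tested_at: float) -> str:
    """Vendor app: sign 'subject|negative|timestamp' after a negative test."""
    payload = f"{subject_id}|negative|{int(tested_at)}"
    sig = hmac.new(VENDOR_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def verify_credential(credential: str, now: float) -> bool:
    """Gatekeeper (metro, office, gym): check the signature and the expiry."""
    payload, sig = credential.rsplit("|", 1)
    expected = hmac.new(VENDOR_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # forged or tampered credential
    _subject, result, tested_at = payload.split("|")
    return result == "negative" and now - int(tested_at) < VALIDITY_SECONDS

cred = issue_credential("subject-123", time.time())
print(verify_credential(cred, time.time()))                         # True: admit
print(verify_credential(cred, time.time() + 2 * VALIDITY_SECONDS))  # False: stale
```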

When I floated this idea on Twitter a few days ago, it was generally well received, but even supporters spotted potential shortcomings. For example, users could have a fraudulent app on their phones, or otherwise fake a negative result. Yet government sanctioning of a select group of test vendors should prevent this type of fraud. Private gatekeepers such as restaurants presumably would not have to operate under any mandate; they have a clear incentive not only to restrict access to verified patrons, but also to advertise that they have strict rules on admission. By the same token, if they did, for some reason, allow people to enter without verification, they could do so. But patrons’ concern for their own health likely would undermine such a permissive policy.

Other skeptics raised privacy concerns. But if a user voluntarily conveys her health status to a gatekeeper, so long as the information stops there, it’s hard to conceive of a privacy violation. Another potential violation would be an equipment vendor’s sharing information about a user’s health status with third parties. Of course, the government could impose restrictions on a vendor’s data sharing as a condition of granting a license to test and verify. But given the circumstances, such sharing could support contact tracing, or allow supplies to be mobilized to areas where there are outbreaks.

Still others noted that some Americans lack phones. For these Americans, I’d suggest paper verification would suffice, or even better yet, subsidized phones.

No solution is flawless. And it’s incredible that we even have to think this way. But who could have imagined, even a few weeks ago, that we would be pinned in our basements, afraid to interact with the world in close quarters? Desperate times call for creative and economically sound measures.

A recently published book, “Kochland – The Secret History of Koch Industries and Corporate Power in America” by Christopher Leonard, presents a gripping account of relentless innovation and the power of the entrepreneur to overcome adversity in pursuit of delivering superior goods and services to the market while also reaping impressive profits. It’s truly an inspirational American story.

Now, I should note that I don’t believe Mr. Leonard actually intended his book to be quite so complimentary to the Koch brothers and the vast commercial empire they built up over the past several decades. He includes plenty of material detailing, for example, their employees playing fast and loose with environmental protection rules, or their labor lawyers aggressively bargaining with unions, sometimes to the detriment of workers. And all of the stories he presents are supported by sympathetic emotional appeals through personal anecdotes. 

But, even then, many of the negative claims are part of a larger theme of Koch Industries progressively improving its business practices. One prominent example is how Koch Industries learned from its environmentally unfriendly past and implemented vigorous programs to ensure “10,000% compliance” with all federal and state environmental laws. 

What really stands out across most or all of the stories Leonard has to tell, however, is the deep appreciation that Charles Koch and his entrepreneurially-minded employees have for the fundamental nature of the market as an information discovery process. Indeed, Koch Industries has much in common with modern technology firms like Amazon in this respect — but decades before the information technology revolution made the full power of “Big Data” gathering and processing as obvious as it is today.

The impressive information operation of Koch Industries

Much of Kochland is devoted to stories in which Koch Industries’ ability to gather and analyze data from across its various units led to the production of superior results for the economy and consumers. For example,  

Koch… discovered that the National Parks Service published data showing the snow pack in the California mountains, data that Koch could analyze to determine how much water would be flowing in future months to generate power at California’s hydroelectric plants. This helped Koch predict with great accuracy the future supply of electricity and the resulting demand for natural gas.

Koch Industries was able to use this information to anticipate the amount of power (megawatt hours) it needed to deliver to the California power grid (admittedly, in a way that was somewhat controversial because of poorly drafted legislation relating to the new regulatory regime governing power distribution and resale in the state).

And, in 2000, while many firms in the economy were still riding the natural gas boom of the 90s, 

two Koch analysts and a reservoir engineer… accurately predicted a coming disaster that would contribute to blackouts along the West Coast, the bankruptcy of major utilities, and skyrocketing costs for many consumers.

This insight enabled Koch Industries to reap huge profits in derivatives trading, and it also enabled it to enter — and essentially rescue — a market segment crucial for domestic farmers: nitrogen fertilizer.

The market volatility in natural gas from the late 90s through early 00s wreaked havoc on the nitrogen fertilizer industry, for which natural gas is the primary input. Farmland — a struggling fertilizer producer — had progressively mismanaged its business over the preceding two decades by focusing on developing lines of business outside of its core competencies, including blithely exposing itself to the volatile natural gas market in pursuit of short-term profits. By the time it was staring bankruptcy in the face, there were no other companies interested in acquiring it. 

Koch’s analysts, however, noticed that many of Farmland’s key fertilizer plants were located in prime locations for reaching local farmers. Once the market improved, whoever controlled those key locations would be in a superior position for selling into the nitrogen fertilizer market. So, by utilizing the data it derived from its natural gas operations (both operating pipelines and storage facilities, as well as understanding the volatility of gas prices and availability through its derivatives trading operations), Koch Industries was able to infer that it could make substantial profits by rescuing this bankrupt nitrogen fertilizer business. 

Emblematic of Koch’s philosophy of only making long-term investments, 

[o]ver the next ten years, [Koch Industries] spent roughly $500 million to outfit the plants with new technology while streamlining production… Koch installed a team of fertilizer traders in the office… [t]he traders bought and sold supplies around the globe, learning more about fertilizer markets each day. Within a few years, Koch Fertilizer built a global distribution network. Koch founded a new company, called Koch Energy Services, which bought and sold natural gas supplies to keep the fertilizer plants stocked.

Thus, Koch Industries not only rescued midwest farmers from shortages that would have decimated their businesses, it invested heavily to ensure that production would continue to increase to meet future demand. 

As noted, this acquisition was consistent with the ethos of Koch Industries, which stressed thinking about investments as part of long-term strategies, in contrast to their “counterparties in the market [who] were obsessed with the near-term horizon.” This led Koch Industries to look at investments over a period measured in years or decades, an approach that allowed the company to execute very intricate investment strategies: 

If Koch thought there was going to be an oversupply of oil in the Gulf Coast region, for example, it might snap up leases on giant oil barges, knowing that when the oversupply hit, companies would be scrambling for extra storage space and willing to pay a premium for the leases that Koch bought on the cheap. This was a much safer way to execute the trade than simply shorting the price of oil—even if Koch was wrong about the supply glut, the downside was limited because Koch could still sell or use the barge leases and almost certainly break even.

Entrepreneurs, regulators, and the problem of incentives

All of these accounts and more in Kochland brilliantly demonstrate a principal salutary role of entrepreneurs in the market, which is to discover slack or scarce resources in the system and manage them in a way that they will be available for utilization when demand increases. Guaranteeing the presence of oil barges in the face of market turbulence, or making sure that nitrogen fertilizer is available when needed, is precisely the sort of result sound public policy seeks to encourage from firms in the economy. 

Government, by contrast — and despite its best intentions — is institutionally incapable of performing the same sorts of entrepreneurial activities as even very large private organizations like Koch Industries. The stories recounted in Kochland demonstrate this repeatedly. 

For example, in the oil tanker episode, Koch’s analysts relied on “huge amounts of data from outside sources” – including “publicly available data…like the federal reports that tracked the volume of crude oil being stored in the United States.” Yet, because that data was “often stale” owing to a rigid, periodic publication schedule, it lacked the specificity necessary for making precise interventions in markets. 

Koch’s analysts therefore built on that data using additional public sources, such as manifests from the Customs Service which kept track of the oil tanker traffic in US waters. Leveraging all of this publicly available data, Koch analysts were able to develop “a picture of oil shipments and flows that was granular in its specificity.”

Similarly, when trying to predict snowfall in the western US, and how that would affect hydroelectric power production, Koch’s analysts relied on publicly available weather data — but extended it with their own analytical insights to make it more suitable to fine-grained predictions. 

By contrast, despite decades of altering the regulatory scheme around natural gas production, transport and sales, and being highly involved in regulating all aspects of the process, the federal government could not even provide the data necessary to adequately facilitate markets. Koch’s energy analysts would therefore engage in various deals that sometimes would only break even — if it meant they could develop a better overall picture of the relevant markets: 

As was often the case at Koch, the company… was more interested in the real-time window that origination deals could provide into the natural gas markets. Just as in the early days of the crude oil markets, information about prices was both scarce and incredibly valuable. There were not yet electronic exchanges that showed a visible price of natural gas, and government data on sales were irregular and relatively slow to come. Every origination deal provided fresh and precise information about prices, supply, and demand.

In most, if not all, of the deals detailed in Kochland, government regulators had every opportunity to find the same trends in the publicly available data — or see the same deficiencies in the data and correct them. Given their access to the same data, government regulators could, in some imagined world, have developed policies to mitigate the effects of natural gas market collapses, handle upcoming power shortages, or develop a reliable supply of fertilizer to midwest farmers. But they did not. Indeed, because of the different sets of incentives they face (among other factors), in the real world, they cannot do so, despite their best intentions.

The incentive to innovate

This gets to the core problem that Hayek described concerning how best to facilitate efficient use of dispersed knowledge in such a way as to achieve the most efficient allocation and distribution of resources: 

The various ways in which the knowledge on which people base their plans is communicated to them is the crucial problem for any theory explaining the economic process, and the problem of what is the best way of utilizing knowledge initially dispersed among all the people is at least one of the main problems of economic policy—or of designing an efficient economic system.

The question of how best to utilize dispersed knowledge in society can only be answered by considering who is best positioned to gather and deploy that knowledge. There is no fundamental objection to “planning”  per se, as Hayek notes. Indeed, in a complex society filled with transaction costs, there will need to be entities capable of internalizing those costs  — corporations or governments — in order to make use of the latent information in the system. The question is about what set of institutions, and what set of incentives governing those institutions, results in the best use of that latent information (and the optimal allocation and distribution of resources that follows from that). 

Armen Alchian captured the different incentive structures between private firms and government agencies well: 

The extent to which various costs and effects are discerned, measured and heeded depends on the institutional system of incentive-punishment for the deciders. One system of rewards-punishment may increase the extent to which some objectives are heeded, whereas another may make other goals more influential. Thus procedures for making or controlling decisions in one rewards-incentive system are not necessarily the “best” for some other system…

In the competitive, private, open-market economy, the wealth-survival prospects are not as strong for firms (or their employees) who do not heed the market’s test of cost effectiveness as for firms who do… as a result the market’s criterion is more likely to be heeded and anticipated by business people. They have personal wealth incentives to make more thorough cost-effectiveness calculations about the products they could produce …

In the government sector, two things are less effective. (1) The full cost and value consequences of decisions do not have as direct and severe a feedback impact on government employees as on people in the private sector. The costs of actions under their consideration are incomplete simply because the consequences of ignoring parts of the full span of costs are less likely to be imposed on them… (2) The effectiveness, in the sense of benefits, of their decisions has a different reward-incentive or feedback system … it is fallacious to assume that government officials are superhumans, who act solely with the national interest in mind and are never influenced by the consequences to their own personal position.

In short, incentives matter — and are a function of the institutional arrangement of the system. Given the same set of data about a scarce set of resources, over the long run, the private sector generally has stronger incentives to manage resources efficiently than does government. As Ludwig von Mises showed, moving those decisions into political hands creates a system of political preferences that is inherently inferior in terms of the production and distribution of goods and services.

Koch Industries: A model of entrepreneurial success

The market is not perfect; no human institution is. Despite its imperfections, the market provides the best system yet devised for fairly and efficiently managing the practically unlimited demands we place on our scarce resources.

Kochland provides a valuable insight into the virtues of the market and entrepreneurs, made all the stronger by Mr. Leonard’s implied project of “exposing” the dark underbelly of Koch Industries. The book tells the bad tales, which I’m willing to believe are largely true. I would, frankly, be shocked if any large entity — corporation or government — never ran into problems with rogue employees, internal corporate dynamics gone awry, or a failure to properly understand some facet of the market or society that led to bad investments or policy. 

The story of Koch Industries, even presented as it is through the lens of a "secret history," is deeply admirable. It's the story of a firm that not only learns from its own mistakes, as all firms must if they are to survive, but that has a drive to learn in its DNA. Koch Industries relentlessly gathers information from the market, sometimes even to the exclusion of short-term profit. It eschews complex bureaucratic structures and processes, which encourages local managers to find opportunities and respond nimbly.

Kochland is a quick read that presents a gripping account of one of America’s corporate success stories. There is, of course, a healthy amount of material in the book covering the Koch brothers’ often controversial political activities. Nonetheless, even those who hate the Koch brothers on account of politics would do well to learn from the model of entrepreneurial success that Kochland cannot help but describe in its pages. 

Paul H. Rubin is the Dobbs Professor of Economics Emeritus, Emory University, and President, Southern Economic Association, 2013

I want to thank Geoff for inviting me to blog about my new book.

My book, The Capitalist Paradox: How Cooperation Enables Free Market Competition, Bombardier Books, 2019, has been published. The main question I address in this short book is: Given the obvious benefits of markets over socialism, why do so many still oppose markets? I have been concerned with this issue for many years. Given the current state of American politics, the question is even more important than when I began the book.

I begin by pointing out that humans are not good intuitive economists. Our minds evolved in a setting where the economy was simple, with little trade, little specialization (except by age and gender), and little capital. In that world there was no need for our brains to evolve to understand economics. (Politics is a different story.) The main legacy of that environment is that our minds evolved to view the world as zero-sum. Zero-sum thinking underlies most policy errors in economics.

The second part of the argument is that, in many cases, when economists are discussing efficiency issues (such as optimal taxation), listeners are hearing distribution issues. So we economists would do better to begin by showing that there are efficiency ("size of the pie") effects before showing what they are in a particular case. That is, we should show that taxation can affect total income before showing how it does so in a particular case. I call this "really basic economics," which should be taught before basic economics. It is sometimes said that experts understand their field so well that they are "mind blind" to the basics, and that is the situation here.

I then show that competition is an improper metaphor for economics. Discussions of competition bring up sports (indeed, economics borrowed the notion of competition from sports), and sports are zero-sum. Thus, when economists discuss competition, they reinforce people's notion that economics is zero-sum. People do not like competition. A quote from the book:

Here are some common modifiers of “competition” and the number of Google references to each:

“Cutthroat competition” (256,000), “excessive competition” (159,000), “destructive competition” (105,000), “ruthless competition” (102,000), “ferocious competition” (66,700), “vicious competition” (53,500), “unfettered competition” (37,000), “unrestrained competition” (34,500), “harmful competition” (18,000), and “dog-eat-dog competition” (15,000). Conversely, for “beneficial competition” there are 16,400 references. For “beneficial cooperation” there are 548,000 references, and almost no references to any of the negative modifiers of cooperation.

The final point, and what ties it all together, is a discussion showing that the economy is actually more cooperative than it is competitive. There are more cooperative relationships in an economy than there are competitive interactions.  The basic economic element is a transaction, and transactions are cooperative.  Competition chooses the best agents to cooperate with, but cooperation does the work and creates the consumer surplus. Thus, referring to markets as “cooperative” rather than “competitive” would not only reduce hostility towards markets, but would also be more accurate.
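To see concretely why a transaction is cooperative rather than zero-sum, consider a minimal worked example (the numbers are mine, purely illustrative, and not from the book): a buyer values a good at $10, the seller's cost is $6, and they trade at a price of $8. Then

$$\underbrace{(10-8)}_{\text{buyer surplus}} + \underbrace{(8-6)}_{\text{seller surplus}} = 10 - 6 = 4 > 0,$$

and both parties gain. Any price between $6 and $10 merely divides the $4 of gains from trade differently; the trade itself is positive-sum, which is the sense in which cooperation does the work and creates the surplus.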

An economist reading this book would probably not learn much economics. I do not advocate any major change in economic theory from competition to cooperation. But I propose a different way to view the economy, and one that might help us better explain what we are doing to students and to policy makers, including voters.

On Debating Imaginary Felds

Gus Hurwitz —  18 September 2013

Harold Feld, in response to a recent Washington Post interview with AEI’s Jeff Eisenach about AEI’s new Center for Internet, Communications, and Technology Policy, accused “neo-conservative economists (or, as [Feld] might generalize, the ‘Right’)” of having “stopped listening to people who disagree with them. As a result, they keep saying the same thing over and over again.”

(Full disclosure: The Center for Internet, Communications, and Technology Policy includes TechPolicyDaily.com, to which I am a contributor.)

Perhaps to the surprise of many, I’m going to agree with Feld. But in so doing, I’m going to expand upon his point: The problem with anti-economics social activists (or, as we might generalize, the ‘Left’)[*] is that they have stopped listening to people who disagree with them. As a result, they keep saying the same thing over and over again.

I don’t mean this to be snarky. Rather, it is a very real problem throughout modern political discourse, and one that we participants in telecom and media debates frequently contribute to. One of the reasons that I love – and sometimes hate – researching and teaching in this area is that fundamental tensions between government and market regulation lie at its core. These tensions present challenging and engaging questions, making work in this field exciting, but are sometimes intractable and often evoke passion instead of analysis, making work in this field seem Sisyphean.

One of these tensions is how to secure for consumers those things which the market does not (appear to) do a good job of providing. For instance, those of us on both the left and right are almost universally agreed that universal service is a desirable goal. The question – for both sides – is how to provide it. Feld reminds us that “real world economics is painfully complicated.” I would respond to him that “real world regulation is painfully complicated.”

I would point at Feld, while jumping up and down shouting “J’accuse! Nirvana Fallacy!” – but I’m certain that Feld is aware of this fallacy, just as I hope he’s aware that those of us who have spent much of our lives studying economics are bitterly aware that economics and markets are complicated things. Indeed, I think those of us who study economics are even more aware of this than is Feld – it is, after all, one of our mantras that “The curious task of economics is to demonstrate to men how little they really know about what they imagine they can design.” This mantra is particularly apt in telecommunications, where one of the most consistent and important lessons of the past century has been that the market tends to outperform regulation.

This isn’t because the market is perfect; it’s because regulation is less perfect. Geoff recently posted a salient excerpt from Tom Hazlett’s 1997 Reason interview of Ronald Coase, in which Coase recounted that “When I was editor of The Journal of Law and Economics, we published a whole series of studies of regulation and its effects. Almost all the studies – perhaps all the studies – suggested that the results of regulation had been bad, that the prices were higher, that the product was worse adapted to the needs of consumers, than it otherwise would have been.”

I don’t want to get into a tit-for-tat over individual points that Feld makes. But I will look at one as an example: his citation to The Market for Lemons. This is a classic paper, in which Akerlof shows that information asymmetries can cause rational markets to unravel. But does it, as Feld says, show “market failure in the presence of robust competition”? That is a hotly debated point in the economics literature. One view – the dominant view, I believe – is that it does not. See, e.g., the EconLib discussion (“Akerlof did not conclude that the lemon problem necessarily implies a role for government”). Rather, the market has responded through the formation of firms that service and certify used cars, document car maintenance, repairs, and accidents, warranty cars, and suffer reputational harms for selling lemons. Of course, folks argue, and have long argued, both sides. As Feld says, economics is painfully complicated – it’s a shame he draws a simple and reductionist conclusion from one of the seminal articles in modern economics, and a further shame he uses that conclusion to buttress his policy position. J’accuse!
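To make the unraveling mechanism concrete, here is a minimal simulation of the adverse-selection logic. It is a sketch under assumed parameters (quality uniform on [0, 1], sellers valuing a car at its quality q, buyers valuing it at 1.5q but observing only the average quality on offer), not Akerlof's own calibration:

```python
import numpy as np

# Assumed, illustrative parameters: car quality q ~ Uniform(0, 1);
# a seller values a car at q, a buyer at 1.5 * q. Buyers cannot see q,
# only the average quality of the cars actually offered at price p.
rng = np.random.default_rng(0)
quality = rng.uniform(0.0, 1.0, 100_000)

p = 1.0  # start from the highest candidate price
for _ in range(50):
    offered = quality[quality <= p]   # owners sell only if p covers their value q
    if offered.size == 0:
        break                         # no cars offered: the market is gone
    p = 1.5 * offered.mean()          # buyers bid expected value: 1.5 * E[q | q <= p]

print(f"price converges to {p:.6f}")  # ~0: trade unravels to nothing
```

Under full information every car would trade, since buyers value each car half again as much as its seller does; the simulated collapse isolates the effect of the information asymmetry. The certification, warranty, and reputation institutions described above are the market's own devices for restoring that information, which is why the dominant reading finds no necessary role for government in the lemons result.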

I hope that this is in no way taken as an attack on Feld – and I wish his piece were less of an attack on Jeff. Fundamentally, he raises a very important point: there is a real disconnect between the arguments used by the “left” and “right” and how those arguments are understood by the other. Indeed, some of my current work is exploring this very disconnect and how it affects telecom debates. I’m really quite thankful to Feld for highlighting his concern that at least one side is blind to the views of the other – I hope that he’ll be receptive to the idea that his side is subject to the same criticism.

[*] I do want to respond specifically to what I think is an important confusion in Feld’s piece, which motivated my admittedly snarky labelling of the “left.” I think that he means “neoclassical economics,” not “neo-conservative economics” (which he goes on to dub “Neocon economics”). Neoconservatism is a political and intellectual movement, focused primarily on US foreign policy – it is rarely thought of as a particular branch of economics. To the extent that it does hold a view of economics, it is actually somewhat skeptical of free markets, especially of their lack of moral grounding and their propensity to forgo traditional values in favor of short-run, hedonistic gains.