
[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Tim Brennan (Professor, Economics & Public Policy, University of Maryland; former FCC; former FTC).]

Thinking about how to think about the coronavirus situation, I keep coming back to three economic ideas that seem distinct but end up being related. First, a back-of-the-envelope calculation suggests that shutting down the economy for a while to reduce the spread of Covid-19 is worth it. This leads to my second point: political viability, if not simple fairness, dictates that the winners compensate the losers. The extent of both of these leads to my main point, which is to understand why we can’t just “get the prices right” and let the market take care of it. Insisting that the market works in this situation could undercut the very strong arguments for why we should defer to markets in the vast majority of circumstances.

Is taking action worth it?

The first question is whether shutting down the economy to reduce the spread of Covid-19 is a good bet. Being an economist, I turn to benefit-cost analysis (BCA). All I can offer here is a back-of-the-envelope calculation, which may be an insult to envelopes. (This paper has a more serious calculation with qualitatively similar findings.) With all caveats recognized, the willingness to pay of an average person in the US for social distancing and closure policies, WTP, is

        WTP = X% times Y% times VSL,

where X% is the fraction of the population that might be seriously affected, Y% is the reduction in the likelihood of death for this population from these policies, and VSL is the “value of statistical life” used in BCA calculations, in the ballpark of $9.5M.

For X%, take the percentage of the population over 65 (a demographic including me). This is around 16%. I’m not an epidemiologist, so for Y%, the reduced likelihood of death (either from reduced transmission or reduced hospital overload), I can only speculate. Say it’s 1%, which naively seems pretty small. Even with that, the average willingness to pay would be

        WTP = 16% times 1% times $9.5M = $15,200.

Multiplying that by a US population of roughly 330M gives a total national WTP of just over $5 trillion, or about 23% of GDP. Using conventional measures, this looks like a good trade in an aggregate benefit-cost sense, even leaving out willingness to pay to reduce the likelihood of feeling sick and the benefits to those younger than 65. Of course, among the caveats is not just whether to impose distancing and closures, but how long to have them (number of weeks), how severe they should be (gathering size limits, coverage of commercial establishments), and where they should be imposed (closing schools, colleges).
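For concreteness, here is the same back-of-the-envelope arithmetic in a short Python sketch. The percentages are the speculative values above, and the GDP figure is an assumption implied by the 23% comparison rather than a number stated in the calculation itself:

```python
# Back-of-the-envelope benefit-cost sketch using the illustrative figures above.
VSL = 9.5e6              # value of statistical life, roughly $9.5M
share_at_risk = 0.16     # X%: share of the population over 65
risk_reduction = 0.01    # Y%: assumed reduction in likelihood of death
population = 330e6       # rough US population
gdp = 21.4e12            # assumed 2019 US GDP, used only for the "% of GDP" comparison

wtp_per_person = share_at_risk * risk_reduction * VSL   # $15,200
national_wtp = wtp_per_person * population              # just over $5 trillion

print(f"WTP per person: ${wtp_per_person:,.0f}")
print(f"National WTP:   ${national_wtp / 1e12:.2f} trillion ({national_wtp / gdp:.0%} of GDP)")
```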

Actual, not just hypothetical, compensation

The justification for using BCA is that the winners could compensate the losers. In the coronavirus setting, the equity considerations are profound. Especially when I remember that GDP is not a measure of consumer surplus, I ask myself how many months of the disruption (and not just lost wages) from unemployment should low-income waiters, cab drivers, hotel cleaners, and the like bear to reduce my over-65 likelihood of dying. 

Consequently, an important component of this policy, both to respect equity and quite possibly to obtain public acceptance, is that the losers be compensated. In that respect, the justification for packages such as the proposal working (as I write) through Congress is not stimulus—after all, it’s harder to spend money these days—as much as compensating those who’ve lost jobs as a result of this policy. Stimulus can come when the economy is ready to be jump-started.

Markets don’t always work, perhaps like now 

This brings me to a final point—why is this a public policy matter? My answer to almost any policy question is the glib “just get the prices right and the market will take care of it.” That doesn’t seem all that popular now. Part of that is the politics of fairness: Should the wealthy get the ventilators? Should hoarding of hand sanitizer be rewarded? But much of it may be a useful reminder that markets do not work seamlessly and instantaneously, and may not be the best allocation mechanism in critical times.

That markets are not always best should be a familiar theme to TOTM readers. The cost of using markets is the centerpiece of Ronald Coase’s 1937 “The Nature of the Firm” and of his 1960 “The Problem of Social Cost” justification for allocation through the courts. Many of us, including me on TOTM, have invoked these arguments to argue against public interventions in the structure of firms, particularly antitrust actions regarding vertical integration. Another common theme is that the common law tends toward efficiency because of the market-like evolutionary processes in property, tort, and contract case law.

This perspective is a useful reminder that the benefits of markets should always be judged by asking “compared to what?” In one familiar case, the benefits of markets are clear when compared to the snail’s pace, limited information, and political manipulability of administrative price setting. But when one is talking about national emergencies, with inelastic demands, distributional consequences, and no time for the price mechanism to work its wonders, one can understand and justify the plethora of mandates currently imposed or contemplated.

The common law also appears not to be a good alternative. One can imagine the litigation nightmare if everyone who got the virus attempted to identify and sue some defendant for damages. A similar nightmare would await if courts were tasked with determining how the risk of a pandemic would have been allocated had contracts been ideal.

Much of this may be belaboring the obvious. My concern is that if those of us who appreciate the virtues of markets exaggerate their applicability, those skeptical of markets may use this episode to say that markets inherently fail and more of the economy should be publicly administered. Better to rely on facts rather than ideology, and to regard the current situation as the awful but justifiable exception that proves the general rule.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Sam Bowman (Director of Competition Policy, ICLE).]

No support package for workers and businesses during the coronavirus shutdown can be comprehensive. In the UK, for example, the government is offering to pay 80% of the wages of furloughed workers, but this will not apply to self-employed people or many gig economy workers, and so far it’s been hard to think of a way of giving them equivalent support. It’s likely that the bill going through Congress will have similar issues.

Whether or not solutions are found for these problems, it may be worth putting in place what you might call a ‘backstop’ policy that allows people to access money in case they cannot access it through the other policies that are being put into place. This doesn’t need to provide equivalent support to other packages, just to ensure that everyone has access to the money they need during the shutdown to pay their bills and rent, and cover other essential costs. The aim here is just to keep everyone afloat.

One mechanism for doing this might be to offer income-contingent loans to anyone currently resident in the country during the shutdown period. These are loans whose repayment is determined by the borrower’s income later on, and are how students in the UK and Australia pay for university. 

In the UK, for example, under the current student loan repayment terms, once a student has graduated, their earnings above a certain income threshold (currently £25,716/year) are taxed at 9% to repay the loan. So, if I earn £30,000/year and have a loan to repay, I pay an additional £385.56/year to repay the loan (9% of the £4,284 I’m earning above the income threshold); if I earn £40,000/year, I pay an additional £1,285.56/year. The loan incurs an annual interest rate equal to an annual measure of inflation plus 3%. Once you have paid off the loan, no more repayments are taken, and any amount still unpaid thirty years after the loan was first taken out is written off.
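As a minimal sketch of that repayment rule (the threshold and rate are from the paragraph above; the incomes are just examples):

```python
# UK-style income-contingent repayment, as described above.
THRESHOLD = 25_716   # income threshold in pounds per year
RATE = 0.09          # repayment rate on income above the threshold

def annual_repayment(income: float) -> float:
    """Yearly repayment while a loan balance remains outstanding."""
    return max(0.0, income - THRESHOLD) * RATE

for income in (20_000, 30_000, 40_000):
    print(f"£{income:,} -> £{annual_repayment(income):,.2f} per year")
# £20,000 -> £0.00; £30,000 -> £385.56; £40,000 -> £1,285.56 (matching the figures above)
```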

In practice, these terms mean that there is a significant subsidy to university students, most of whom never pay off the full amount. Under a less generous repayment scheme that was in place until recently, with a lower income threshold for repayment, out of every £1 borrowed by students the long-run cost to the government was 43.3p. This is regarded by many as a feature of the system rather than a bug, because of the belief that university education has positive externalities, and because this approach pools some of the risk associated with pursuing a graduate-level career (the risk of ending up with a low-paid job despite having spent a lot on your education, for example).

For loans available to the wider public, a different set of repayment criteria could apply. We could allow anyone who has filed a W-2 or 1099 tax statement in the past eighteen months (or filed a self-assessment tax return in the UK) to borrow up to something around 20% of median national annual income, to be paid back via an extra few percentage points on their federal income tax or, in the UK, National Insurance contributions over the following ten years, with the rate returning to normal after they have paid off the loan. Some other provision may have to be made for people approaching retirement.
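A rough sketch of how repayments under such a backstop loan might look. The 20%-of-median-income borrowing cap and the ten-year window come from the paragraph above; the median income and surtax rate are purely hypothetical placeholders, and interest is ignored:

```python
# Hypothetical backstop income-contingent loan (illustrative parameters only).
MEDIAN_INCOME = 35_000                  # assumed median national annual income
BORROW_CAP = 0.20 * MEDIAN_INCOME       # borrow up to ~20% of median income
SURTAX = 0.03                           # assumed extra income-tax percentage points
REPAYMENT_WINDOW = 10                   # years, per the proposal

def years_to_repay(income: float, borrowed: float):
    """Years of surtax needed to clear the loan, or None if it isn't cleared in the window."""
    yearly = income * SURTAX
    if yearly <= 0:
        return None
    years = borrowed / yearly
    return years if years <= REPAYMENT_WINDOW else None

for income in (15_000, 40_000, 80_000):
    years = years_to_repay(income, BORROW_CAP)
    print(income, "->", f"repaid in {years:.1f} years" if years else "not repaid within the window")
```

Low earners never clear the balance and the remainder is effectively a transfer; higher earners repay quickly, which is the self-selection feature discussed below.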

With a low, inflation-indexed interest rate, this would allow people who need funds to access them, but make it mostly pointless for anyone who did not need to borrow. 

If, like student tuition fees, loans were written off after a certain period, low earners would probably never pay back the entirety of the ‘loan’ – as a one-off transfer (i.e., one that does not distort work or savings incentives for recipients) to low paid people, this is probably not a bad thing. Most people, though, would pay back as and when they were able to. For self-employed people in particular, it could be a valuable source of liquidity during an unexpected period when they cannot work. Overall, it would function as a cash transfer to lower earners, and a liquidity injection for everyone else who takes advantage of the scheme.

This would have advantages over money being given to every US or UK citizen, as some have proposed, because most of the money given out would be repaid, so the net burden on taxpayers, and hence the deadweight loss created by the additional tax needed to pay for it, would be smaller. It would also eliminate the need for means-testing, relying on self-selection instead.

The biggest obstacle to rolling something like this out may be administrative. However, if the government committed to setting up something like this, banks and credit card companies might be willing to step in in the short run to issue short-term loans, in the knowledge that people would be able to repay them once the government scheme was set up. To facilitate this, the government could guarantee the loans made by banks and credit card companies now, then allow people to opt into the income-contingent loans later, so there was no need for immediate legislation.

Speed is extremely important in helping people plug the gaps in their finances. As a complement to the government’s other plans, income-contingent loans to groups like self-employed people may be a useful way of catching people who would otherwise fall through the cracks.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Mark Jamison (Director and Gunter Professor, Public Utility Research Center, University of Florida, and Visiting Scholar with the American Enterprise Institute).]

The economic impacts of the coronavirus pandemic, and of the government responses to it, are significant and could be staggering, especially for small businesses. Goldman Sachs estimates a potential 24% drop in US GDP for the second quarter of 2020 and a 4% decline for the year. Its small business survey found that a little over half of small businesses might last for less than three months in this economic downturn. Small business employs nearly 60 million people in the US. How many will be out of work this year is anyone’s guess, but the number will be large.

What should small businesses do? First, focus on staying in business because their customers and employees need them to be healthy when the economy begins to recover. That will certainly mean slowing down business activity, decreasing payroll to manage losses, and managing liquidity.

Second, look for opportunities in the present crisis. Consumers are slowing their spending, but they will spend for things they still need and need now. And there will be new demand for things they didn’t need much before, like more transportation of food, support for health needs, and crisis management. Which business sectors will recover first? Those whose downturns represented delayed demand, such as postponed repairs and business travel, rather than evaporated demand, such as luxury items.

Third, they can watch for and take advantage of government support programs. Many programs simply provide low-cost loans, which do not solve the small-business problem of customers not buying: Borrowing money to meet payroll for idle workers simply delays business closure and makes bankruptcy more likely. But some grants and tax breaks are under discussion (see below).

Fourth, they can renegotiate loans and contracts. One of the mistakes lenders made in the past was holding stressed borrowers’ feet to the fire, which only led to more, and more costly, loan defaults. At least some lenders have learned. So lenders and some suppliers might be willing to receive some payments rather than none.

What should government do? Unfortunately, Washington seems to think that so-called stimulus spending is the cure for any economic downturn. This isn’t true. I’ll explain why below, but let me first get to what is more productive. 

The major problem is that customers are unable to buy and businesses are unable to produce because of the responses to the coronavirus. Sometimes transactions are impossible, but there are times when buying and selling are simply made more costly by the pandemic and the government responses. So government support for the economy should address these problems directly.

For buyers, government officials should recognize that buying is hard and costly for them right now. So policies should aim to improve their ability to buy during this time. Sales tax holidays, especially on healthcare, food, and transportation, would be helpful.

Waivers of postal fees would make e-commerce cheaper. And temporary support for fixed costs, such as mortgages, would free money for other things. Tax breaks for the gig economy would lower service costs and provide new employment opportunities. And tax credits for durables like home improvements would lower costs of social distancing.

But the better opportunities for government impact are on the business side because small business affects both the supply of services and the incomes of consumers.

For small business policy, my American Enterprise Institute colleagues Glenn Hubbard and Michael Strain have done the most thoughtful work that I have seen. They note that the problems for small businesses are that they do not have enough business activity to meet payroll and other bills. This means that “(t)he goal should be to replace a large portion of the revenue (not just the payroll expenses) those businesses would have generated in the absence of being shut down due to the coronavirus.” 

They suggest policies to replace 80 percent of the small business revenue loss. How? By providing grants in the form of government-backed commercial loans that are forgiven if the business continues and maintains payroll, subject to workers being allowed to quit if they find better opportunities. 

What else might work? Tax breaks that lower business costs. These can be breaks in payroll taxes, marginal income tax rates, equipment purchases, permitting, etc., including tax holidays. Carrying back current business losses would trigger tax refunds that improve business finances.

One of the least useful ideas for small businesses is interest-free loans. These might be great for large businesses who are largely managing their financial positions. But such loans fail to address the basic small business problem of keeping the doors open when customers aren’t buying.

Finally, why doesn’t traditional stimulus work, even in other times of economic downturn? Traditional spending-based stimulus assumes that the economic problem is that people want to build things, but not buy them. That’s not a very good assumption, especially today, when the problems are the higher cost of buying, or perhaps the impossibility of buying with social distancing, and the higher costs of doing business. Keeping businesses in business is the key to supporting the economy.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Brent Skorup (Senior Research Fellow, Mercatus Center, George Mason University).]

One of the most visible economic effects of the COVID-19 spread is the decrease in airline customers. Alec Stapp alerted me to the recent outrage over “ghost flights,” where airlines fly nearly empty planes to maintain their “slots.” 

The airline industry is unfortunately in economic freefall as governments prohibit air travel and travelers pull back on it. When the health and industry crises pass, lawmakers will have an opportunity to evaluate the mistakes of the past when it comes to airport congestion and airspace design.

This issue of ghost flights pops up occasionally and offers a lesson in the problems with government rationing of public resources. In this case, the public resources are airport slots: designated windows of time, say 15 or 30 minutes, during which a plane may take off or land at an airport. (Last week US and EU regulators temporarily waived the use-it-or-lose-it rule for slots to mitigate the embarrassing cost and environmental damage caused by forcing airlines to fly empty planes.)

The slots at major hubs at peak times of day are extremely scarce: there are only so many hours in a day. Today, slot assignments are administratively rationed in a way that favors large, incumbent airlines. As the Wall Street Journal summarized last year,

For decades, airlines have largely divided runway access between themselves at twice-yearly meetings run by the IATA (an airline trade group).

Airport slots are property. They’re valuable. They can be defined, partitioned, leased, put up as collateral, and, in the US, they can be sold and transferred within or between airports.

You just can’t call slots property. Many lawmakers, regulators, and airline representatives refuse to acknowledge the obvious. Stating that slots are valuable public property would make clear the anticompetitive waste that the 40-year slot assignment experiment generates. 

Like many government programs, slot rationing began in the US decades ago as a temporary response to congestion at New York airports. Slots are currently used to ration access at LGA, JFK, and DCA. The FAA also rations access, though without formal slot rationing, at four other busy airports: ORD, EWR, LAX, and SFO.

Fortunately, cracks are starting to form. In 2008, at the tail end of the Bush administration, the FAA proposed to auction some slots at New York City’s three airports. The plan was delayed by litigation from incumbent airlines and an adverse finding from the GAO. With a change in administration, the Obama FAA rescinded the plan in 2009.

Before the Obama FAA recission, the mask slipped a bit in the GAO’s criticism of the slot auction plan: 

FAA’s argument that slots are property proves too much—it suggests that the agency has been improperly giving away potentially millions of dollars of federal property, for no compensation, since it created the slot system in 1968.

Gulp.

Though the GAO helped scuttle the plan, the damage has been done. The idea has now entered public policy discourse: giving away valuable public property is precisely what’s going on. 

The implicit was made explicit in 2011 when, despite spiking the Bush FAA plan, the Obama FAA auctioned two dozen high-value slots. (The reversal and lack of controversy is puzzling to me.) Delta and US Airways wanted to swap some 160 slots at New York and DC airports. As a condition of the mega-swap, the Obama FAA required they divest 24 slots at those popular airports, which the agency auctioned to new entrants. Seven low-fare airlines bid in the auction, and JetBlue and WestJet won the divested slots, paying about $90 million combined.

The older fictions are rapidly eroding. There is an active secondary market in slots in some nations, and when prices are released it becomes clear that the legacy rationing amounts to public property set-asides for insiders. In 2016 it leaked, for instance, that an airline paid £58 million for a pair of take-off and landing slots at Heathrow. Other slot sales are in the tens of millions of dollars.

The 2011 FAA auctions and the loosening of rules globally around slot sales signal that the competition benefits from slot markets are too obvious to ignore. Competition from new entry drives down airfare and increases the number of flights.

For instance, a few months ago researchers used a booking app to scour 50 trillion flight itineraries to see new entrants’ effect on airline ticket prices between 2017 and 2019. As the Wall Street Journal reported, the entry of a low-fare carrier reduced ticket prices by 17% on average. The bigger effect was on output: new entry led to a 30% year-over-year increase in flights.

It’s becoming harder to justify the legacy view, which allows incumbent airlines to dominate slot allocations via international conferences and national regulations that entrench “grandfathered” slot usage. In a separate article last year, the Wall Street Journal reported that airlines are reluctantly ceding more power to airports in the assignment of slots. This is another signal in the long-running tug-of-war between airports and airlines. Airports generally want to open slots for new competitors; incumbent airlines do not.

The reason for the change of heart? The Journal says,

Airlines and airports reached the deal in part because of concerns governments should start to sell slots.

Gulp. Ghost flights are a government failure, but a rational response to governments withholding the benefits of property from airlines. The slot rationing system encourages uneconomical flights, smaller planes, and excess carbon emissions. The COVID-19 crisis gave the public a glimpse of the dysfunctional system. It won’t be easy, but aviation regulators worldwide need to assess slots policy and airspace access before the administrative rationing system spreads to the emerging urban air mobility and drone delivery markets.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Luke Froeb (William C. Oehmig Chair in Free Enterprise and Entrepreneurship, Owen Graduate School of Management, Vanderbilt University; former Chief Economist at the US DOJ Antitrust Division and US FTC).]

Summary: Trying to quarantine everyone until a vaccine is available doesn’t seem feasible. In addition, restrictions mainly delay when the epidemic explodes, e.g., see previous post on Flattening the Curve. In this paper, we propose subsidies to both individuals and businesses, to better align private incentives with social goals, while leaving it up to individuals and businesses to decide for themselves which risks to take.

For example, testing would give individuals the information necessary to make the best decision about whether to shelter in place or, if they have recovered and are now immune, to come out.  But, the negative consequences of a positive test, e.g., quarantine, can deter people from getting tested. Rewards for those who present for a test and submit to isolation when they have active disease could offset such externalities.

Another problem is that many people aren’t free on their own to implement protective measures related to work. One could imagine incentives for employers to allow working from home, to partially shut down production, or to provide extra protection for workers. Businesses that offer worker health care might be incentivized by bearing a share of the extra virus-related health care costs incurred by their workers in exchange for a health care subsidy.

Essay: In the midst of an epidemic it is evident that social policy must adjust in furtherance of the public good. Institutions of all sorts, not least government, will have to take extraordinary actions. People should expect their relationships with these institutions to change, at least for some time. These adjustments will need to be informed by applicable epidemiological data and models, subject to the usual uncertainties. But the problems to be faced are not only epidemiological but economic. There will be tradeoffs to be made between safer, restrictive rules and riskier, unconstrained behaviors. The costs to be faced are both social and individual. As such, we should not expect a uniform public policy to make suitable choices for all individuals, nor assume that individuals making good decisions for themselves will combine for a good social outcome. Imagine instead an alternative, where social costs are evaluated and appropriate individual incentives are devised, allowing individuals to make informed decisions with respect to their own circumstances and the social externalities reflected in those incentives.

We are currently in the US at the beginning of the coronavirus epidemic.  This is not the flu. It is maybe ten times as lethal as the flu, perhaps a little more lethal proportionally in the most susceptible populations. It is new, so there is little or no natural immunity, and no vaccine available for maybe 18 months. Like the flu, there is no really effective treatment yet for those that become sickest, particularly because the virus is most deadly through the complications it causes with existing conditions, so treatment options should not perhaps be expected to help with epidemic spread or to reduce lethality. It is spread relatively easily from person to person, though not as easily as the measles, perhaps significantly before the infected person shows symptoms. And it may be that people can get the virus, become contagious and spread the disease, while never showing symptoms themselves. We now have a test for active coronavirus, though it is still somewhat hard to get in the US, and we can expect at some point in the near future to have an antibody test that will show when people either have or have had and recovered from the virus.

There are some obvious social and individual costs to people catching this virus. First there are the deaths from the disease. Then there are the costs of treating those ill. Finally, there are costs from the lost productivity of those fallen ill. If there is a sudden and extreme increase in the numbers of sick people, all of these costs can be expected to rise, perhaps significantly. When hospitals have patients in excess of existing capacity, expanding capacity will be difficult and expensive, and death rates can be expected to rise.

An ideal public health strategy in the face of an epidemic is to keep people from falling sick. At the beginning of the epidemic, the few people with the disease need to be found and quarantined, and those with whom they have had contact need to be traced and isolated so that any carrying the disease can be stopped from passing it on. If there is no natural reservoir of disease that reintroduces the disease, it may be possible to eradicate the disease. When there were few cases, this might have been practical, but that effort has clearly failed, and there are far too many carriers of the disease now to track. 

Now the emphasis must be on measures to reduce transmission of the disease. This entails modifying behaviors that facilitate the disease passing from person to person. If the rate of infection can be reduced enough, to the point where the number of people each infected person can be expected to infect is less than one on average, then the disease will naturally die out. Once most people have had the disease, or have been vaccinated, most of the people an infected person would have infected are immune, so the rate of new infections will naturally fall to less than one and the disease will die out. Because so many people have immunity to many varieties of the flu, its spread can be controlled in particular through vaccination, the only difficulty being that new strains are appearing all of the time. The difficulty with the coronavirus is that simple measures for reducing the spread of the disease do not seem to be effective enough and extreme measures will be much more expensive. Moreover, because the coronavirus is a pandemic, even if one region succeeds in reducing transmission and has the disease fade, reintroduction from other regions can be expected to relight the fire of epidemic. Measures for reducing transmission will need to be maintained for some time, likely until a vaccine is available or natural herd immunity is established through the majority of the population having had the disease.
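A minimal numerical sketch of that threshold logic, treating the reproduction number as constant purely for illustration:

```python
# If each infected person infects fewer than one other on average, new infections
# shrink generation by generation; above one, they grow.
def project_infections(initial: float, r: float, generations: int) -> list:
    """Expected new infections per generation under a constant reproduction number r."""
    counts = [initial]
    for _ in range(generations):
        counts.append(counts[-1] * r)
    return [round(c) for c in counts]

print(project_infections(1000, 0.8, 10))  # r < 1: dwindles toward zero
print(project_infections(1000, 1.5, 10))  # r > 1: grows rapidly
```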

The flu strikes every year and we seem to tolerate it without extreme measures of social distancing. Perhaps there’s nothing that needs to be done now, nothing worth doing now, to slow the coronavirus epidemic. But what would the cost of such an attitude be? The virus would spread like wildfire, infecting perhaps the majority of the population in a matter of months. Even with an estimate of 70 to 150 million Americans infected, a 1% death rate means 0.7 to 1.5 million would die. But that many cases all at once would overwhelm the medical system and the intensive care capacity required to keep the death rate even this low. A surge in cases might mean an increase in the death rate.

At the other extreme, we seem to be heading into a period where everyone is urged to shelter-in-place, or required to be locked down, so as to reduce social contacts to near zero and thereby interrupt the spread of the virus. This may be effective, perhaps even necessary to prevent an immediate surge of demand on hospitals. But it is also expensive in the disruptions it entails. The number of active infections can be drastically reduced over a time scale corresponding to an individual’s course of the disease. Removing the restrictions would mean then that the epidemic resumes from the new lower level with somewhat more of the population already immune. It seems unlikely the disease can be eradicated by such measures because of the danger of reintroduction from other regions where the virus is active. The strategy of holding everyone in this isolation until a vaccine becomes available isn’t likely to be palatable. Releasing restrictions slowly so as to keep the level of the disease at an acceptable level would likely mean that most of the population would get the disease before the vaccine became available. Even if the most at risk population remained isolated, the estimated death rate over the majority of the population implies a nontrivial number of deaths. How do we decide how many and who to risk in order to get the economy functioning?

Consider then a system of incentives to individuals to help communicate the social externalities and guide their decisions. If there is a high prevalence of active disease in the general population, then hospitals will see excessive demand and it will be unsafe for high risk individuals to expose themselves to even minimal social interactions. A low prevalence of active disease can be more easily tolerated by hospitals, with a lower resulting death rate, and higher risk individuals may be more able to interact and provide for themselves. To promote a lower level of disease, individuals should be incentivized to delay getting sick, practicing social distancing and reducing contacts in a trade-off with ordinary necessary activity and respecting their personal risk category and risk tolerance. This lower level of disease is the “flattening of the curve”, but it also imagines the most at risk segment of the population might choose to isolate for a longer term, hoping to hold out for a vaccine.

If later disease or no disease is preferable, how do we incentivize it? Can we at the same time incentivize more usual infection control measures? Eventually everyone will either need to take an antibody test, to determine that they have had the disease and developed immunity and so are safe to resume all normal activities, or else need the vaccination. People may also be tested for active disease. We can’t penalize people for showing up with active disease, as this would mean they would skip the test and likely continue infecting other people. We should reward those who present for a test and submit to isolation when they have active disease. We can reward also those who submit to the antibody test and test positive (for the first time) who can then resume normal activities. On the other hand, we want people to delay when they get sick through prudent measures. Thus it would be a good idea to increase over time the reward for first showing up with the disease. To avoid incentivizing delay in testing, the reward for a positive test should increase as a function of the last antibody test that was negative, i.e., the reward is more if you can prove you had avoided the disease as of your last antibody test. The size of the rewards should be significant enough to cause a change of behavior but commensurate with the social cost savings induced. If we are planning on giving Americans multiple $1000 checks to get the economy going anyway, then such monies could be spent on incentives alternatively. This imagines antibody testing will be available, relatively easy and inexpensive in maybe three months, and antibody tests might be repeated maybe every three months. And of course this assumes the trajectory of the epidemic can be controlled well enough in the short term and predicted well enough in the long term to make such a scheme possible.
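One way to picture the reward schedule sketched above is as a simple function of when someone first tests positive and how recently they could document a negative antibody test. Every number here is a placeholder for illustration, not a proposal:

```python
# Hypothetical reward schedule: the payment for presenting with active disease rises
# the later in the epidemic the infection occurs, and rises further if the person can
# document having still been disease-free at a recent antibody test.
BASE_REWARD = 500            # assumed base payment for testing positive and isolating
GROWTH_PER_MONTH = 100       # assumed increase per month of delayed infection
PROOF_BONUS_PER_MONTH = 200  # assumed extra per month of documented avoidance

def reward(month_of_positive_test: int, month_of_last_negative_antibody_test=None) -> int:
    """Payment for presenting with active disease and submitting to isolation."""
    amount = BASE_REWARD + GROWTH_PER_MONTH * month_of_positive_test
    if month_of_last_negative_antibody_test is not None:
        # Documented avoidance up to the last negative antibody test earns more.
        amount += PROOF_BONUS_PER_MONTH * month_of_last_negative_antibody_test
    return amount

print(reward(4))     # 900: sick in month 4, no documented negative antibody test
print(reward(4, 3))  # 1500: sick in month 4, negative antibody test in month 3
```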

HT:  Colleague Steven Tschantz

This post originally appeared on the Managerial Econ Blog

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Ben Sperry (Associate Director, Legal Research, International Center for Law & Economics).]

The visceral reaction to the New York Times’ recent story on Matt Colvin, the man who had 17,700 bottles of hand sanitizer with nowhere to sell them, shows there is a fundamental misunderstanding of the importance of prices and the informational function they serve in the economy. Calls to enforce laws against “price gouging” may actually prove more harmful to consumers and society than allowing prices to rise (or fall, of course) in response to market conditions. 

Nobel Prize-winning economist Friedrich Hayek explained how price signals serve as information that allows for coordination in a market society:

We must look at the price system as such a mechanism for communicating information if we want to understand its real function… The most significant fact about this system is the economy of knowledge with which it operates, or how little the individual participants need to know in order to be able to take the right action. In abbreviated form, by a kind of symbol, only the most essential information is passed on and passed on only to those concerned. It is more than a metaphor to describe the price system as a kind of machinery for registering change, or a system of telecommunications which enables individual producers to watch merely the movement of a few pointers, as an engineer might watch the hands of a few dials, in order to adjust their activities to changes of which they may never know more than is reflected in the price movement.

Economic actors don’t need a PhD in economics or even to pay attention to the news about the coronavirus to change their behavior. Higher prices for goods or services alone give important information to individuals — whether consumers, producers, distributors, or entrepreneurs — to conserve scarce resources, produce more, and look for (or invest in creating!) alternatives.

Prices are fundamental to rationing scarce resources, especially during an emergency. Allowing prices to rapidly rise has three salutary effects (as explained by Professor Michael Munger in his terrific twitter thread):

  1. Consumers ration how much they really need;
  2. Producers respond to the rising prices by ramping up supply and distributors make more available; and
  3. Entrepreneurs find new substitutes in order to innovate around bottlenecks in the supply chain. 

Despite the distaste with which the public often treats “price gouging,” officials should take care to ensure that they don’t prevent these three necessary responses from occurring. 

Rationing by consumers

During a crisis, if prices for goods that are in high demand but short supply are forced to stay at pre-crisis levels, the informational signal of a shortage isn’t given — at least by the market directly. This encourages consumers to buy more than is rationally justified under the circumstances. This stockpiling leads to shortages. 

Companies respond by rationing in various ways, like instituting shorter hours or placing limits on how much of certain high-demand goods can be bought by any one consumer. Lines (and unavailability), instead of price, become the primary cost borne by consumers trying to obtain the scarce but underpriced goods. 

If, instead, prices rise in light of the short supply and high demand, price-elastic consumers will buy less, freeing up supply for others. And, critically, price-inelastic consumers (i.e. those who most need the good) will be provided a better shot at purchase.
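A toy illustration of the rationing point, with made-up linear demand and supply: holding price at the pre-crisis level during a demand spike leaves quantity demanded well above quantity supplied, which consumers experience as empty shelves and queues.

```python
# Illustrative (invented) linear crisis demand and short-run supply.
def quantity_demanded(price: float) -> float:
    return max(0.0, 1000 - 40 * price)

def quantity_supplied(price: float) -> float:
    return max(0.0, 20 * price)

frozen_price = 10.0  # pre-crisis price kept in place by anti-gouging norms or laws
shortage = quantity_demanded(frozen_price) - quantity_supplied(frozen_price)
print(f"At the frozen price: {shortage:.0f} units short")   # 400 units short

clearing_price = 1000 / 60   # where 1000 - 40p = 20p
print(f"Market-clearing price ~{clearing_price:.2f}: quantity demanded equals quantity supplied")
```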

According to the New York Times story on Mr. Colvin, he focused on buying out the hand sanitizer in rural areas of Tennessee and Kentucky, since the major metro areas were already cleaned out. His goal was to then sell these hand sanitizers (and other high-demand goods) online at market prices. He was essentially acting as a speculator and bringing information to the market (much like an insider trader). If successful, he would be coordinating supply and demand between geographical areas by successfully arbitraging. This often occurs when emergencies are localized, like post-Katrina New Orleans or post-Irma Florida. In those cases, higher prices induced suppliers to shift goods and services from around the country to the affected areas. Similarly, here Mr. Colvin was arguably providing a beneficial service, by shifting the supply of high-demand goods from low-demand rural areas to consumers facing localized shortages. 

For those who object to Mr. Colvin’s bulk purchasing-for-resale scheme, the answer is similar to those who object to ticket resellers: the retailer should raise the price. If the Walmarts, Targets, and Dollar Trees raised prices or rationed supply like the supermarket in Denmark, Mr. Colvin would not have been able to afford nearly as much hand sanitizer. (Of course, it’s also possible — had those outlets raised prices — that Mr. Colvin would not have been able to profitably re-route the excess local supply to those in other parts of the country most in need.)

The role of “price gouging” laws and social norms

A common retort, of course, is that Colvin was able to profit from the pandemic precisely because he was able to purchase a large amount of stock at normal retail prices, even after the pandemic began. Thus, he was not a producer who happened to have a restricted amount of supply in the face of new demand, but a mere reseller who exacerbated the supply shortage problems.

But such an observation truncates the analysis and misses the crucial role that social norms against “price gouging” and state “price gouging” laws play in facilitating shortages during a crisis.

Under these laws, typically retailers may raise prices by at most 10% during a declared state of emergency. But even without such laws, brick-and-mortar businesses are tied to a location in which they are repeat players, and they may not want to take a reputational hit by raising prices during an emergency and violating the “price gouging” norm. By contrast, individual sellers, especially pseudonymous third-party sellers using online platforms, do not rely on repeat interactions to the same degree, and may be harder to track down for prosecution. 

Thus, the social norms and laws exacerbate the conditions that create the need for emergency pricing, and lead to outsized arbitrage opportunities for those willing to violate norms and the law. But, critically, this violation is only a symptom of the larger problem that social norms and laws stand in the way, in the first instance, of retailers using emergency pricing to ration scarce supplies.

Normally, third-party sales sites have much more dynamic pricing than brick-and-mortar outlets, which just tend to run out of underpriced goods for a period of time rather than raise prices. This explains why Mr. Colvin was able to sell hand sanitizer for prices much higher than retail on Amazon before the site suspended his ability to do so. On the other hand, in response to public criticism, Amazon, Walmart, eBay, and other platforms continue to crack down on third-party “price-gouging” on their sites.

But even PR-centric anti-gouging campaigns are not ultimately immune to the laws of supply and demand. Even Amazon.com, as a first party seller, ends up needing to raise prices, ostensibly as the pricing feedback mechanisms respond to cost increases up and down the supply chain. 

But without a willingness to allow retailers and producers to use the informational signal of higher prices, there will continue to be more extreme shortages as consumers rush to stockpile underpriced resources.

The desire to help the poor who cannot afford higher priced essentials is what drives the policy responses, but in reality no one benefits from shortages. Those who stockpile the in-demand goods are unlikely to be poor because doing so entails a significant upfront cost. And if they are poor, then the potential for resale at a higher price would be a benefit.

Increased production and distribution

During a crisis, it is imperative that spiking demand is met by increased production. Prices are feedback mechanisms that provide realistic estimates of demand to producers. Even if good-hearted producers forswearing the profit motive want to increase production as an act of charity, they still need to understand consumer demand in order to produce the correct amount. 

Of course, prices are not the only source of information. Producers reading the news that there is a shortage undoubtedly can ramp up their production. But even so, in order to optimize production (i.e., not just blindly increase output and hope they get it right), they need a feedback mechanism. Prices are the most efficient mechanism available for quickly translating the amount of social need (demand) for a given product, guaranteeing that producers neither undersupply the product (leaving people who need the good without it) nor oversupply it (consuming more resources than necessary in a time of crisis). Prices, when allowed to adjust to actual demand, thus allow society to avoid exacerbating shortages and misallocating resources.

The opportunity to earn more profit incentivizes distributors all along the supply chain. Amazon is hiring 100,000 workers to help ship all the products that are being ordered right now. Grocers and retailers are doing their best to line the shelves with more in-demand food and supplies.

Distributors rely on more than just price signals alone, obviously, such as information about how quickly goods are selling out. But even as retail prices stay low for consumers for many goods, distributors often are paying more to producers in order to keep the shelves full, as in the case of eggs. These are the relevant price signals for producers to increase production to meet demand.

For instance, hand sanitizer companies like GOJO and EO Products are ramping up production in response to known demand (so much that the price of isopropyl alcohol is jumping sharply). Farmers are trying to produce as much as is necessary to meet the increased orders (and prices) they are receiving. Even previously low-demand goods like beans are facing a boom time. These instances are likely caused by a mix of anticipatory response based on general news, as well as the slightly laggier price signals flowing through the supply chain. But, even with an “early warning” from the media, the manufacturers still need to ultimately shape their behavior with more precise information. This comes in the form of orders from retailers at increased frequencies and prices, which are both rising because of insufficient supply. In search of the most important price signal, profits, manufacturers and farmers are increasing production.

These responses to higher prices have the salutary effect of making available more of the products consumers need the most during a crisis. 

Entrepreneurs innovate around bottlenecks 

But the most interesting thing that occurs when prices rise is that entrepreneurs create new substitutes for in-demand products. For instance, distillers have started creating their own hand sanitizers.

Unfortunately, however, government regulations on sales of distilled products and concerns about licensing have led distillers to give away those products rather than charge for them. Thus, beneficial as this may be, without the ability to efficiently price such products, not nearly as much will be produced as would otherwise be. The non-emergency price of zero effectively guarantees continued shortages because the demand for these free alternatives will far outstrip supply.

Another example: car companies in the US are now producing ventilators. The FDA waived regulations on the production of new ventilators after General Motors, Ford, and Tesla announced they would be willing to use idle production capacity for the production of ventilators.

As consumers demand more toilet paper, bottled water, and staple foods than can be produced quickly, entrepreneurs respond by refocusing current capabilities on these goods. Examples abound.

Without price signals, entrepreneurs would have far less incentive to shift production and distribution to the highest valued use. 

Conclusion

While stories like that of Mr. Colvin buying all of the hand sanitizer in Tennessee understandably bother people, government efforts to prevent prices from adjusting only impede the information sharing processes inherent in markets. 

If the concern is to help the poor, it would be better to pursue less distortionary public policy than arbitrarily capping prices. The US government, for instance, is currently considering a progressively tiered one-time payment to lower income individuals. 

Moves to create new and enforce existing “price-gouging” laws are likely to become more prevalent the longer shortages persist. Platforms will likely continue to receive pressure to remove “price-gougers,” as well. These policies should be resisted. Not only will these moves not prevent shortages, they will exacerbate them and push the sale of high-demand goods into grey markets where prices will likely be even higher. 

Prices are an important source of information not only for consumers, but for producers, distributors, and entrepreneurs. Short-circuiting this signal will only be to the detriment of society.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Justin “Gus” Hurwitz (Associate Professor of Law & Co-director, Space, Cyber, and Telecom Law Program, University of Nebraska; Director of Law & Economics Programs, ICLE).]

I’m a big fan of APM Marketplace, including Molly Wood’s tech coverage. But they tend to slip into advocacy mode—I think without realizing it—when it comes to telecom issues. This was on full display earlier this week in a story on widespread decisions by ISPs to lift data caps during the ongoing COVID-19 crisis (available here, the segment runs from 4:30-7:30). 

As background, all major ISPs have lifted data caps on their Internet service offerings. This is in recognition of the fact that most Americans are spending more time at home right now. During this time, many of us are teleworking, so making more intensive use of our Internet connections during the day; many have children at home during the day who are using the Internet for both education and entertainment; and we are going out less in the evening so making more use of services like streaming video for evening entertainment. All of these activities require bandwidth—and, like many businesses around the country, ISPs are taking steps (such as eliminating data caps) that will prevent undue consumer harm as we work to cope with COVID-19.

The Marketplace take on data caps

After introducing the segment, Wood and Marketplace host Kai Ryssdal turn to a misinformation and insinuation-laden discussion of telecommunications policy. Wood asserts that one of the ISPs’ “big arguments against net neutrality regulation” was that they “need [data] caps to prevent congestion on networks.” Ryssdal responds by asking, coyly, “so were they just fibbing? I mean … ya know …”

Wood responds that “there have been times when these arguments were very legitimate,” citing the early days of 4G networks. She then asserts that the United States has “some of the most expensive Internet speeds in the developed world” before jumping to the assertion that advocates will now have the “data to say that [data] caps are unnecessary.” She then goes on to argue—and here she loses any pretense of reporter neutrality—that “we are seeing that the Internet really is a utility” and that “frankly, there’s no, uhm, ongoing economic argument for [data caps].” She even notes that we can “hear [her] trying to be professional” in the discussion.

Unpacking that mess

It’s hard to know where to start with the Wood and Ryssdal discussion, such a muddled mess it is. Needless to say, it is unfortunate to see tech reporters doing what tech reporters seem to do best: confusing poor and thinly veiled policy arguments for news.

Let’s start with Wood’s first claim, that ISPs (and, for that matter, others) have long argued that data caps are required to manage congestion and that this has been one of their chief arguments against net neutrality regulations. This is simply not true. 

Consider the 2015 Open Internet Order (OIO)—the net neutrality regulations adopted by the FCC under President Obama. The OIO discusses data caps (“usage allowances”) in paragraphs 151-153. It explains:

The record also reflects differing views over some broadband providers’ practices with respect to usage allowances (also called “data caps”). … Usage allowances may benefit consumers by offering them more choices over a greater range of service options, and, for mobile broadband networks, such plans are the industry norm today, in part reflecting the different capacity issues on mobile networks. Conversely, some commenters have expressed concern that such practices can potentially be used by broadband providers to disadvantage competing over-the-top providers. Given the unresolved debate concerning the benefits and drawbacks of data allowances and usage-based pricing plans,[FN373] we decline to make blanket findings about these practices and will address concerns under the no-unreasonable interference/disadvantage on a case-by-case basis. 

[FN373] Regarding usage-based pricing plans, there is similar disagreement over whether these practices are beneficial or harmful for promoting an open Internet. Compare Bright House Comments at 20 (“Variable pricing can serve as a useful technique for reducing prices for low usage (as Time Warner Cable has done) as well as for fairly apportioning greater costs to the highest users.”) with Public Knowledge Comments at 58 (“Pricing connectivity according to data consumption is like a return to the use of time. Once again, it requires consumers keep meticulous track of what they are doing online. With every new web page, new video, or new app a consumer must consider how close they are to their monthly cap. . . . Inevitably, this type of meter-watching freezes innovation.”), and ICLE & TechFreedom Policy Comments at 32 (“The fact of the matter is that, depending on background conditions, either usage-based pricing or flat-rate pricing could be discriminatory.”). 

The 2017 Restoring Internet Freedom Order (RIFO), which rescinded much of the OIO, offers little discussion of data caps. Its approach to them follows that of the OIO: ISPs are free to adopt data cap policies but must disclose them. It does, however, note that small ISPs expressed concern, and provided evidence, that fear of lawsuits had forced small ISPs to abandon policies like data caps, “which would have benefited its customers by lowering its cost of Internet transport.” (See paragraphs 104 and 249.) The 2010 OIO makes no reference to data caps or usage allowances.

What does this tell us about Wood’s characterization of policy debates about data caps? The only discussion of congestion as a basis for data caps comes in the context of mobile networks. Wood gets this right: data caps have been, and continue to be, important for managing data use on mobile networks. But most people would be hard pressed to argue that these concerns are not still valid: the only people who have not experienced congestion on their mobile devices are those who do not use mobile networks.

But the discussion of data caps on broadband networks has nothing to do with congestion management. The argument against data caps is that they can be used anticompetitively. Cable companies, for instance, could use data caps to harm unaffiliated streaming video providers (that is, Netflix) in order to protect their own video services from competition; or they could exclude preferred services from data caps in order to protect them from competitors.

The argument for data caps, on the other hand, is about the cost of Internet service. Data caps are a way of offering lower priced service to lower-need users. Or, conversely, they are a way of apportioning the cost of those networks in proportion to the intensity of a given user’s usage. Higher-intensity users are more likely to be Internet enthusiasts; lower-intensity users are more likely to use it for basic tasks, perhaps no more than e-mail or light web browsing. What’s more, if all users faced the same prices regardless of their usage, there would be no marginal cost to incremental usage: users (and content providers) would have no incentive not to use more bandwidth. This does not mean that users would face congestion without data caps—ISPs may, instead, be forced to invest in higher capacity interconnection agreements. (Importantly, interconnection agreements are often priced in terms of aggregate data transferred, not the speeds of those data transfers—that is, they are written in terms of data caps!—so it is entirely possible that an ISP would need to pay for greater interconnection capacity despite not experiencing any congestion on its network!)

In other words, the economic argument for data caps, recognized by the FCC under both the Obama and Trump administrations, is that they allow more people to connect to the Internet by allowing a lower-priced access tier, and that they keep average prices lower by creating incentives not to consume bandwidth merely because you can. In more technical economic terms, they allow potentially beneficial price discrimination and eliminate a potential moral hazard. Contrary to Wood’s snarky, unprofessional, response to Ryssdal’s question, there is emphatically not “no ongoing economic argument” for data caps.
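A stylized sketch of that tiering argument (every price and usage figure here is invented): with a capped lower-priced tier, light users are not charged as if they consumed like heavy users, and heavy users face a marginal price for incremental usage.

```python
# Invented two-tier pricing versus a single flat rate.
FLAT_PRICE = 70.0        # assumed flat-rate price sized to cover heavy usage
CAPPED_PRICE = 40.0      # assumed lower-priced capped tier
CAP_GB = 300.0
OVERAGE_PER_GB = 0.10    # assumed marginal price above the cap

def capped_tier_bill(usage_gb: float) -> float:
    return CAPPED_PRICE + OVERAGE_PER_GB * max(0.0, usage_gb - CAP_GB)

for usage in (100, 300, 900):
    print(f"{usage:>4} GB: flat ${FLAT_PRICE:.0f} vs capped tier ${capped_tier_bill(usage):.2f}")
# Light users pay less under the capped tier; very heavy users bear more of the cost
# their usage creates instead of spreading it across everyone.
```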

Why lifting data caps during this crisis ain’t no thing

Even if the purpose of data caps were to manage congestion, Wood’s discussion again misses the mark. She argues that the ability to lift caps during the current crisis demonstrates that they are not needed during non-crisis periods. But the usage patterns that we are concerned about facilitating during this period are not normal, and cannot meaningfully be used to make policy decisions relevant to normal periods. 

The reason for this is captured in the below image from a recent Cloudflare discussion of how Internet usage patterns are changing during the crisis:

This image shows US Internet usage as measured by Cloudflare. The red line is the usage on March 13 (the peak is President Trump’s announcement of a state of emergency). The grey lines are the preceding several days of traffic. (The x-axis is UTC time; ET is UTC-4.) Although this image was designed to show the measurable spike in traffic corresponding to the President’s speech, it also shows typical weekday usage patterns. The large “hump” on the left side shows evening hours in the United States. The right side of the graph shows usage throughout the day. (This chart shows nation-wide usage trends, which span multiple time zones. If it were to focus on a single time zone, there would be a clear dip between daytime “business” and evening “home” hours, as can be seen here.)

More important, what this chart demonstrates is that the “peak” in usage occurs in the evening, when everyone is at home watching their Netflix. It does not occur during the daytime hours—the hours during which telecommuters are likely to be video conferencing or VPN’ing in to their work networks, or during which students are likely to be doing homework or conferencing into their meetings. And, to the extent that there will be an increase in daytime usage, it will be somewhat offset by (likely significantly) decreased usage due to coming economic lethargy. (For Kai Ryssdal, lethargy is synonymous with recession; for Aaron Sorkin fans, it is synonymous with bagel). 

This illustrates one of the fundamental challenges with pricing access to networks. Networks are designed to carry their peak load. When they are operating below capacity, the marginal cost of additional usage is extremely low; once they exceed that capacity, the marginal cost of additional usage is extremely high. If you price network access based upon average usage, you are going to get congestion (over-use) during peak hours; if you price access based upon the peak-hour marginal cost, you are going to get significant deadweight loss (under-use) during non-peak hours.

Data caps are one way to deal with this issue. Since the users making the most intensive use of the network are largely doing so at the same time (at peak hours), the incremental cost that caps impose either discourages that use or provides the revenue necessary to expand capacity to accommodate it. But data caps do not make sense during non-peak hours, when marginal cost is nearly zero. Indeed, imposing increased costs on usage during non-peak hours is regressive, and it creates deadweight losses during those hours. (In principle, it creates them during peak hours too: ideally, we would price non-peak-hour usage less than peak-hour usage in order to “shave the peak” (a synonym, I kid you not, for “flatten the curve”).)
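The same trade-off can be shown with a stylized example. In the sketch below, the demand curves, prices, and capacity figure are all invented for illustration: a single price based on average cost produces congestion at the peak, while pricing everything at the peak-hour cost leaves the network needlessly idle off-peak, when the marginal cost of usage is close to zero.

    # Stylized peak-load pricing example. All demand curves, prices, and the
    # capacity figure are invented; the point is only the qualitative trade-off.
    capacity = 100                       # units the network carries without congestion

    def demand(price, intercept):
        """Linear demand: quantity requested at a given price."""
        return max(0, intercept - price)

    peak_intercept, offpeak_intercept = 180, 90   # peak demand is much stronger

    uniform_price = 40    # a single price pegged to average cost
    peak_price = 80       # a price reflecting the cost of peak capacity

    for label, p in [("uniform price", uniform_price), ("peak-cost price", peak_price)]:
        q_peak = demand(p, peak_intercept)
        q_off = demand(p, offpeak_intercept)
        print(f"{label}: peak use {q_peak} (capacity {capacity}), off-peak use {q_off}")

    # uniform price:   peak use 140 (capacity 100), off-peak use 50 -> peak congestion
    # peak-cost price: peak use 100 (capacity 100), off-peak use 10 -> peak fits, but
    #                  off-peak use is suppressed even though its marginal cost is ~0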

What this all means

During the current crisis, we are seeing a significant increase in usage during non-peak hours. This imposes nearly zero incremental cost on ISPs. Indeed, it is arguably to their benefit to encourage use during this time, to “flatten the curve” of usage in the evening, when networks are, in fact, likely to experience congestion.

But there is a flipside, which we have seen develop over the past few days: how do we manage peak-hour traffic? On Thursday, the EU asked Netflix to reduce the quality of its streaming video in order to avoid congestion. Netflix is the single greatest driver of consumer-focused Internet traffic. And while being able to watch the Great British Bake Off in ultra-high definition 3D HDR 4K may be totally awesome, its value pales in comparison to keeping the American economy functioning.

Wood suggests that ISPs’ decision to lift data caps is of relevance to the network neutrality debate. It isn’t. But the impact of Netflix traffic on competing applications may be. The net neutrality debate created unmitigated hysteria about prioritizing traffic on the Internet. Many ISPs have said outright that they won’t even consider investing in prioritization technologies because of the uncertainty around the regulatory treatment of such technologies. But such technologies clearly have uses today. Video conferencing and Voice over IP protocols should be prioritized over streaming video. Packets to and from government, healthcare, university, and other educational institutions should be prioritized over Netflix traffic. It is hard to take anyone who would disagree with this proposition seriously. Yet the net neutrality debate almost entirely foreclosed development of these technologies. While they may exist, they are not in widespread deployment, and are not familiar to consumers or consumer-facing network engineers.

To the very limited extent that data caps are relevant to net neutrality policy, it is about ensuring that millions of people binge watching Bojack Horseman (seriously, don’t do it!) don’t interfere with children Skyping with their grandparents, a professor giving a lecture to her class, or a sales manager coordinating with his team to try to keep the supply chain moving.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Luke Froeb, (William C. Oehmig Chair in Free Enterprise and Entrepreneurship, Owen Graduate School of Management, Vanderbilt University; former Chief Economist at the US DOJ Antitrust Division and US FTC).]

Policy makers are using the term “flattening the curve” to describe the intended effect of social distancing and travel restrictions. In this post, we use a cellular automata model of infection to show how those policies might accomplish this.

DISCLAIMER:  THIS IS AN UNREALISTIC MODEL, FOR TEACHING PURPOSES ONLY.

The images below are from a cellular automata model of the spread of a disease on a 100×100 grid. White dots represent uninfected; red dots, infected; green dots, survivors; and black dots, deaths. The key parameters are listed below (a simplified sketch of such a model follows the list):

  • death rate = 1%, given that a person has been infected.
  • r0 = 2 is the basic reproduction number, the number of people infected by each infected person; e.g., here are estimates for the coronavirus. We model social distancing as reducing this number.
  • mean distance of infection = 5.0 cells away from an infected cell, modeled as a standard normal distribution over unit distance. We model travel restrictions as reducing this number.
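The post does not reproduce the simulation code itself, so the following is only a minimal sketch of how a cellular automata model with these parameters could be implemented in Python. The one-period course of infection, the unit standard deviation around the mean infection distance, and the wrap-around grid edges are my own simplifications, not the original Tschantz code.

    # A simplified sketch of a cellular-automata infection model with the
    # parameters described above. This is NOT the original simulation code;
    # the modeling shortcuts below are the sketch's own assumptions.
    import numpy as np

    SIZE, R0, DEATH_RATE, MEAN_DIST = 100, 2, 0.01, 5.0
    SUSCEPTIBLE, INFECTED, RECOVERED, DEAD = 0, 1, 2, 3

    rng = np.random.default_rng(0)
    grid = np.full((SIZE, SIZE), SUSCEPTIBLE)
    grid[SIZE // 2, SIZE // 2] = INFECTED            # the outbreak begins at the center

    def step(grid):
        """One period: each infected cell exposes R0 nearby cells, then resolves."""
        new = grid.copy()
        for x, y in zip(*np.where(grid == INFECTED)):
            for _ in range(R0):
                dist = abs(rng.normal(MEAN_DIST, 1.0))             # distance to exposed cell
                angle = rng.uniform(0, 2 * np.pi)
                tx = int(round(x + dist * np.cos(angle))) % SIZE   # wrap at the edges
                ty = int(round(y + dist * np.sin(angle))) % SIZE
                if new[tx, ty] == SUSCEPTIBLE:                     # immune/dead cells stay put
                    new[tx, ty] = INFECTED
            # the infection runs its course in a single period
            new[x, y] = DEAD if rng.random() < DEATH_RATE else RECOVERED
        return new

    infection_curve = []
    for period in range(60):
        infection_curve.append(int((grid == INFECTED).sum()))      # current infections
        grid = step(grid)

    peak = max(infection_curve)
    print("peak infections:", peak, "at period", infection_curve.index(peak))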

In the video above the infected cells (red) spread slowly out from the center, where the outbreak began.  Most infections are on the “border” of the infected area because that is where the infected cells are more likely to infect uninfected ones.

Infections eventually die out because many of the people who come in contact with the infection have already developed an immunity (green) or are dead (black).  This is what Boris Johnson referred to as “Herd Immunity.”

We graph the spread of the infection above. The vertical axis represents people on the grid (10,000 = 100×100) and the horizontal axis represents time, denoted in periods (the life span of an infection). The blue line represents the uninfected population, the green line the infected population, and the orange line, the infection rate.

In the simulation and graph below, we increase r0 (the basic reproduction number) from 2 to 3, and the mean infection distance from 5 to 25. We see that more people get infected (higher green line), and much more quickly (peak infections occur at period 11 instead of period 15).

What policy makers mean by “flattening the curve” is flattening the orange infection curve (compare the high orange peak in the bottom graph to the smaller, flatter peak in the one above) with social distancing and travel restrictions so that our hospital system does not get overwhelmed by infected patients.

HT:  Colleague Steven Tschantz designed and wrote the code.

This post originally appeared on the Managerial Econ Blog

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Robert Litan, (Non-resident Senior Fellow, Economic Studies, The Brookings Institution; former Associate Director, Office of Management and Budget).]

We have moved well beyond testing as the highest priority for responding to the COVID disaster – although it remains important – to meeting the immediate peak demand for hospital equipment and ICU beds outside hospitals in most urban areas. President Trump recognized as much when he declared on March 18 that he was acting as a “wartime President.”

While the President invoked the Defense Production Act to have the private sector produce more ventilators and other necessary medical equipment, such as respirators and hospital gowns, that Act principally provides for government purchases and the authority to allocate scarce supplies. 

As part of this effort, if it is not already in the works, the President should require manufacturers of such equipment – especially ventilators – to license, at low or no royalties, any and all intellectual property rights required for such production to as many other manufacturers as are willing and capable of making this equipment as rapidly as possible, 24/7. The President should further direct the FDA to surge its inspector force to ensure that the processes and output of these other manufacturers comply with applicable FDA requirements. The same IP licensing requirement should extend to manufacturers of any other medical supplies expected to be in short supply. 

To avoid price gouging – yes, this is one instance where market principles should be suspended – the declaration should cap the prices of future ventilators, including those manufactured by current suppliers, at their pre-crisis levels. 

Second, to solve the bed shortage problem, some states (such as New York) are already investigating the use of existing facilities – schools, university dorms, hotel rooms, and the like. This idea should be mandated immediately, as part of the emergency declaration, nationwide. The President has ordered a Navy hospital ship to help out with extra beds in New York, which is a good idea that should be extended to other coastal cities where this is possible. But he should also order the military, as needed, to assist with the conversion of land-based facilities – which require infection-free environments, special filtration systems, and the like – where private contractors are not available. 

The costs for all this should be borne by the federal government, using the Disaster Relief Fund, authorized by the Stafford Act. As of year-end FY 2019, the balance in this fund was approximately $30 billion. It is not clear what the balance is expected to be after the outlays that have recently been ordered by the President, as relief for states and localities. If the DRF needs topping up, this should be urgently provided by the Congress, ideally as part of the third round of fiscal stimulus being considered this week. 

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Eric Fruits, (Chief Economist, International Center for Law & Economics).]


Wells Fargo faces billions of dollars of fines for creating millions of fraudulent savings, checking, credit, and insurance accounts in its customers’ names without their consent. Last weekend, tens of thousands of travelers were likely exposed to coronavirus while waiting hours for screening at crowded airports. Consumers and businesses around the world pay higher energy prices as their governments impose costly programs to reduce carbon emissions.

These seemingly unrelated observations have something in common: They are all victims of some version of Goodhart’s Law.

Charles Goodhart, a central banker, stated it originally in somewhat denser terms: “Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.”

The simple version of the law is: “When a measure becomes a target it ceases to be a good measure.”

Investor Charlie Munger puts it more succinctly: “Show me the incentive and I’ll show you the outcome.”

The Wells Fargo scandal is a case study in Goodhart’s Law. It came from a corporate culture pushed by the CEO, Dick Kovacevich, that emphasized “cross-selling” products to existing customers, as related in a Vanity Fair profile.

As Kovacevich told me in a 1998 profile of him I wrote for Fortune magazine, the key question facing banks was “How do you sell money?” His answer was that financial instruments—A.T.M. cards, checking accounts, credit cards, loans—were consumer products, no different from, say, screwdrivers sold by Home Depot. In Kovacevich’s lingo, bank branches were “stores,” and bankers were “salespeople” whose job was to “cross-sell,” which meant getting “customers”—not “clients,” but “customers”—to buy as many products as possible. “It was his business model,” says a former Norwest executive. “It was a religion. It very much was the culture.”

It was underpinned by the financial reality that customers who had, say, lines of credit and savings accounts with the bank were far more profitable than those who just had checking accounts. In 1997, prior to Norwest’s merger with Wells Fargo, Kovacevich launched an initiative called “Going for Gr-Eight,” which meant getting the customer to buy eight products from the bank. The reason for eight? “It rhymes with GREAT!” he said.

The concept makes sense. It’s easier to get more sales from existing customers than to find new ones. Also, if revenues are rising, there’s less pressure to reduce costs. 

Kovacevich came to Wells Fargo in the late 1990s by way of its merger with Norwest, where he was CEO. After the merger, he noticed that the Wells unit was dragging down the merged firm’s sales-per-customer numbers. So, Wells upped the pressure. 

One staffer reported that every morning, they’d have a conference call with their managers. Staff were supposed to explain how they’d make their sales goal for the day. If the goal wasn’t hit by the end of the day, staff had to explain why they missed it and how they planned to fix it. Bonuses were offered for hitting targets, and staffers were let go for missing them.

Wells Fargo had rules against “gaming” the system. Yes, it was called “gaming.” But the incentives were so strongly aligned in favor of gaming, that the rules were ignored.

Wells Fargo’s internal investigation estimated that, between 2011 and 2015, its employees had opened more than 1.5 million deposit accounts and more than 565,000 credit-card accounts that may not have been authorized. Customers were charged fees on the accounts, some accounts were sent to collections over unpaid fees, cars were repossessed, and homes went into foreclosure.

Some customers were charged fees on accounts they didn’t know they had, and some customers had collection agencies calling them due to unpaid fees on accounts they didn’t know existed.

Goodhart’s Law hit Wells Fargo hard. Cross-selling was the bank’s measure, and management made it the target. Once pressure was placed on hitting that target, cross-selling did not just cease to be a good measure; the push to hit it corrupted the entire retail side of the business.

Last Friday, my son came home from his study abroad in Spain. He landed less than eight hours before the travel ban went into effect. He was lucky–he got out of the airport less than an hour after landing. 

The next day was pandemonium. In addition to the travel ban, the U.S. imposed health screening on overseas arrivals. Over the weekend, travelers reported being forced into crowded terminals for up to eight hours to go through customs and receive screening. 

The screening process resulted in exactly the opposite of what health officials advise: avoiding close contact and large crowds. We still don’t know whether the screenings helped reduce the spread of the coronavirus or whether the forced crowding fostered it.

The government seemed to forget Goodhart’s Law. Public demand for enhanced screenings made screening the target. Screenings were implemented hastily without any thought of the consequences of clustering potentially infected flyers with the uninfected. Someday, we may learn that a focus on screening came at the expense of slowing the spread.

More and more we’re being told climate change presents an existential threat to our planet. We’re told the main culprit is carbon emissions from economic activity. Toward that end, governments around the world are trying to take extraordinary measures to reduce carbon emissions. 

In Oregon, the legislature has been trying for more than a decade to implement a cap-and-trade program to reduce carbon emissions in the state, a state that accounts for less than one-tenth of one percent of global greenhouse gas emissions. Even if Oregon went to zero GHG emissions, the world would never know.

Legislators pushing cap-and-trade want the state to address climate change immediately. But, when the microphones are turned off, they admit their cap-and-trade program would do nothing to slow global climate change.

In yet another case of Goodhart’s Law, Oregon and other jurisdictions have made carbon emissions the target. As a consequence, if cap-and-trade were ever to become law in the state, businesses and consumers would be paying hundreds or thousands of dollars a year more in energy prices, with zero effect on global temperatures. Those dollars could be better spent acknowledging the consequences of climate change and making investments to deal with those consequences.

The funny thing about Goodhart’s Law is that once you know about it, you see it everywhere. And, it’s not just some quirky observation. It’s a failure that can have serious consequences on our health, our livelihoods, and our economy.

In antitrust lore, mavericks are magical creatures that bring order to a world on the verge of monopoly. Because they are so hard to find in the wild, some researchers have attempted to create them in the laboratory. While the alchemists couldn’t turn lead into gold, they did discover zinc. Similarly, although modern day researchers can’t turn students into mavericks, they have created a useful classroom exercise.

In a Cambridge University working paper, Donja Darai, Catherine Roux, and Frédéric Schneider develop a simple experiment to model merger activity in the face of price competition. Based on their observations, they conclude that (1) firms are more likely to make merger offers when prices are closer to marginal cost and (2) “maverick firms” – firms that charge a lower price – are more likely to be on the receiving end of those merger offers. Based on these conclusions, they suggest “mergers may be used to eliminate mavericks from the market and thus substitute for failed attempts at collusion between firms.”

The experiment is a set of games broken up into “market” phases and “merger” phases; a minimal sketch of a single trading period under these rules appears after the list below.

  • Each experiment has four subjects, with each subject representing a firm.
  • Each firm has marginal cost of zero and no capacity constraints.
  • Each experiment has nine phases: five “market” phases of 10 trading periods each and four “merger” phases.
  • During a trading period, firms simultaneously post their asking prices, ranging from 0 to 100 “currency units.” Subjects cannot communicate their prices to each other.
  • A computerized “buyer” purchases 300 units of the good at the lowest posted price. In the case of identical lowest prices, the sales are split equally among the firms with the lowest posted price.
  • At the end of the market phase, the firms enter a merger phase in which any firm can offer to merge with any other firm. Firms being made an offer to merge can accept or reject the offer. There are no price terms for the merger. Instead, the subject controlling the acquired firm receives an equal share of the acquiring firm’s profits in subsequent trading periods. Each firm can acquire only one other firm in each merger round.
  • The market-merger phases repeat, ending with a final market phase.
  • Subjects receive cash compensation related to the “profits” their firm earned over the course of the experiment.
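To see the pull toward monopoly in the payoffs, here is a minimal sketch of a single trading period under these rules. The asking prices fed in are arbitrary placeholders; the payoff rule (the lowest price sells 300 units at zero marginal cost, ties split evenly) follows the description above.

    # One trading period under the experiment's market rules, as described above.
    # The asking prices below are arbitrary placeholders for illustration.
    def trading_period(asks, units=300):
        """Lowest asking price sells all units; ties split the sale equally.
        Marginal cost is zero, so revenue equals profit."""
        low = min(asks.values())
        winners = [firm for firm, price in asks.items() if price == low]
        return {firm: (low * units / len(winners) if firm in winners else 0.0)
                for firm in asks}

    # Four independent firms undercutting one another...
    print(trading_period({"A": 30, "B": 25, "C": 25, "D": 40}))
    # {'A': 0.0, 'B': 3750.0, 'C': 3750.0, 'D': 0.0}

    # ...versus a single merged entity free to post the maximum price of 100.
    print(trading_period({"merged": 100}))
    # {'merged': 30000.0}

Undercutting pushes prices toward marginal cost and payoffs toward zero, while a fully merged entity collects the maximum payoff every period, which is the logic behind the strategy described next.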

Merger to monopoly is a dominant strategy: It is the clearest path to maximizing individual and joint profits. In that way it’s a pretty boring game. Bid low, merge toward monopoly, then bid 100 every turn after that. The only real “trick” is convincing the other players to merge.

The authors attempt to make the paper more interesting by introducing the idea of the “maverick” bidder who bids low. They find that the lowest bidders are more likely to receive merger offers than the other subjects. They also find that these so-called mavericks are more reluctant to accept a merger offer. 

I noted in my earlier post that modeling the “maverick” seems to be a fool’s errand. If firms are assumed to face the same cost and demand conditions, why would any single firm play the role of the maverick? In the standard prisoner’s dilemma problem, every firm has the incentive to be the maverick. If everyone’s a maverick, then no one’s a maverick. On the other hand, if one firm has unique cost or demand conditions or is assumed to have some preference for “mavericky” behavior, then the maverick model is just an ad hoc model where the conclusions are baked into the assumptions.

Darai et al.’s experiment suffers from these same criticisms. They define the “maverick” as a low bidder who does not accept merger offers. But they don’t have a model of why mavericks behave the way they do. Some observations:

  • Another name for “low bidder” is “winner.” If the low bidders consistently win in the market phase, then they may believe that they have some special skill or luck that the other subjects don’t have. Why would a winner accept a merger bid from – and share his or her profits with – one or more “losers”?
  • Another name for “low bidder” could be “newbie.” The low bidder may be the subject who doesn’t understand that the dominant strategy is to merge to monopoly as fast as possible and charge the maximum price. The other players conclude the low bidder doesn’t know how to play the game. In other words, the merger might be viewed more as a hostile takeover to replace “bad” management. Because even bad managers won’t admit they’re bad, they make another bad decision and resist the merger.
  • About 80% of the time, the experiment ends with a monopoly, indicating that even the mavericks eventually merge. 

See what I just did? I created my own ad hoc theories of the maverick. In one theory, the maverick thinks he or she has some unique ability to pick the winning asking price. In the other, the maverick is making decisions counter to its own – and other players’ – long term self-interest. 

Darai, et al. have created a fun game. I played a truncated version of it with my undergraduate class earlier this week and it generated a good discussion about pricing and coordination. But, please don’t call it a model of the maverick.

On Monday evening, around 6:00 PM Eastern Standard Time, news leaked that the United States District Court for the Southern District of New York had decided to allow the T-Mobile/Sprint merger to go through, giving the companies a victory over a group of state attorneys general trying to block the deal.

Thomas Philippon, a professor of finance at NYU, used this opportunity to conduct a quick-and-dirty event study on Twitter:

Short thread on T-Mobile/Sprint merger. There were 2 theories:

(A) It’s a 4-to-3 merger that will lower competition and increase markups.

(B) The new merged entity will be able to take on the industry leaders AT&T and Verizon.

(A) and (B) make clear predictions. (A) predicts the merger is good news for AT&T and Verizon’s shareholders. (B) predicts the merger is bad news for AT&T and Verizon’s shareholders. The news leaked at 6pm that the judge would approve the merger. Sprint went up 60% as expected. Let’s test the theories. 

Here is Verizon’s after-hours trading price: Up 2.5%.

Here is ATT after hours: Up 2%.

Conclusion 1: Theory B is bogus, and the merger is a transfer of at least 2%*$280B (AT&T) + 2.5%*$240B (Verizon) = $11.6 billion from the pockets of consumers to the pockets of shareholders. 

Conclusion 2: I and others have argued for a long time that theory B was bogus; this was anticipated. But lobbying is very effective indeed… 

Conclusion 3: US consumers already pay two or three times more than those of other rich countries for their cell phone plans. The gap will only increase.

And just a reminder: these firms invest 0% of the excess profits. 

Philippon published his thread about 40 minutes prior to markets opening for regular trading on Tuesday morning. The Court’s official decision was published shortly before markets opened as well. By the time regular trading began at 9:30 AM, Verizon had completely reversed its overnight increase and opened down from the previous day’s close. While AT&T opened up slightly, it too had given back most of its initial gains. By 11:00 AM, AT&T was also in the red. When markets closed at 4:00 PM on Tuesday, Verizon was down more than 2.5 percent and AT&T was down just under 0.5 percent.

Does this mean that, in fact, theory A is the “bogus” one? Was the T-Mobile/Sprint merger decision actually a transfer of “$7.4 billion from the pockets of shareholders to the pockets of consumers,” as I suggested in my own tongue-in-cheek thread later that day? In this post, I will look at the factors that go into conducting a proper event study.  
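For reference, both back-of-the-envelope transfer figures come from the same arithmetic applied to different price snapshots. A quick sketch, using the approximate market capitalizations cited in the two threads (roughly $280 billion for AT&T and $240 billion for Verizon):

    # Back-of-the-envelope "transfer" arithmetic from the two Twitter threads.
    # Market caps are the approximate figures cited there, not precise values.
    att_cap, vz_cap = 280e9, 240e9

    # Philippon's after-hours snapshot: AT&T +2%, Verizon +2.5%
    after_hours = 0.02 * att_cap + 0.025 * vz_cap
    print(f"after-hours gain: ${after_hours / 1e9:.1f} billion")      # $11.6 billion

    # Tuesday's close: AT&T down just under 0.5%, Verizon down more than 2.5%
    regular_hours = 0.005 * att_cap + 0.025 * vz_cap
    print(f"regular-hours loss: ${regular_hours / 1e9:.1f} billion")  # $7.4 billion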

What’s the appropriate window for a merger event study?

In a response to my thread, Philippon said, “I would argue that an event study is best done at the time of the event, not 16 hours after. Leak of merger approval 6 pm Monday. AT&T up 2 percent immediately. AT&T still up at open Tuesday. Then comes down at 10am.” I don’t disagree that “an event study is best done at the time of the event.” In this case, however, we need to consider two important details: When was the “event” exactly, and what were the conditions in the financial markets at that time?

This event did not begin and end with the leak on Monday night. The official announcement came Tuesday morning when the full text of the decision was published. This additional information answered a few questions for market participants: 

  • Were the initial news reports true?
  • Based on the text of the decision, what is the likelihood it gets reversed on appeal?
    • Wall Street: “Not all analysts are convinced this story is over just yet. In a note released immediately after the judge’s verdict, Nomura analyst Jeff Kvaal warned that ‘we expect the state AGs to appeal.’ RBC Capital analyst Jonathan Atkin noted that such an appeal, if filed, could delay closing of the merger by ‘an additional 4-5’ months — potentially delaying closure until September 2020.”
  • Did the Court impose any further remedies or conditions on the merger?

As stock traders digested all the information from the decision, Verizon and AT&T quickly went negative. There is much debate in the academic literature about the appropriate window for event studies on mergers. But the range in question is always one of days or weeks — not a couple hours in after hours markets. A recent paper using the event study methodology analyzed roughly 5,000 mergers and found abnormal returns of about positive one percent for competitors in the relevant market following a merger announcement. Notably for our purposes, this small abnormal return builds in the first few days following a merger announcement and persists for up to 30 days, as shown in the chart below:

As with the other studies the paper cites in its literature review, this particular research design included a window of multiple weeks both before and after the event occurred. When analyzing the T-Mobile/Sprint merger decision, we should similarly expand the window beyond just a few hours of after-hours trading.

How liquid is the after hours market?

More important than the length of the window, however, is the relative liquidity of the market during that time. The after-hours market is much thinner than the regular-hours market and may not reflect all available information. For some rough numbers, let’s look at data from NASDAQ. For the last five after-hours trading sessions, total volume was between 80 and 100 million shares. Let’s call it 90 million on average. By contrast, the total volume for the last five regular trading sessions was between 2 and 2.5 billion shares. Let’s call it 2.25 billion on average. So, regular trading hours have roughly 25 times as much liquidity as the after-hours market.

We could also look at relative liquidity for a single company as opposed to the total market. On Wednesday during regular hours (data is only available for the most recent day), 22.49 million shares of Verizon stock were traded. In after-hours trading that same day, fewer than a million shares changed hands. You could change some assumptions and account for other differences between the after-hours market and the regular market when analyzing the data above. But the conclusion remains the same: the regular market is at least an order of magnitude more liquid than the after-hours market. This is incredibly important to keep in mind as we compare the after-hours price changes (as reported by Philippon) to the price changes during regular trading hours.

What are Wall Street analysts saying about the decision?

To understand the fundamentals behind these stock moves, it’s useful to see what Wall Street analysts are saying about the merger decision. Prior to the ruling, analysts were already worried about Verizon’s ability to compete with the combined T-Mobile/Sprint entity in the short- and medium-term:

Last week analysts at LightShed Partners wrote that if Verizon wins most of the first available tranche of C-band spectrum, it could deploy 60 MHz in 2022 and see capacity and speed benefits starting in 2023.

“With that timeline, C-Band still does not answer the questions of what spectrum Verizon will be using for the next three years,” wrote LightShed’s Walter Piecyk and Joe Galone at the time.

Following the news of the decision, analysts were clear in delivering their own verdict on how the decision would affect Verizon:

“Verizon looks to us to be a net loser here,” wrote the MoffettNathanson team led by Craig Moffett.

…  

“Approval of the T-Mobile/Sprint deal takes not just one but two spectrum options off the table,” wrote Moffett. “Sprint is now not a seller of 2.5 GHz spectrum, and Dish is not a seller of AWS-4. More than ever, Verizon must now bet on C-band.”

LightShed also pegged Tuesday’s merger ruling as a negative for Verizon.

“It’s not great news for Verizon, given that it removes Sprint and Dish’s spectrum as an alternative, created a new competitor in Dish, and has empowered T-Mobile with the tools to deliver a superior network experience to consumers,” wrote LightShed.

In a note following news reports that the court would side with T-Mobile and Sprint, New Street analyst Johnathan Chaplin wrote, “T-Mobile will be far more disruptive once they have access to Sprint’s spectrum than they have been until now.”

However, analysts were more sanguine about AT&T’s prospects:

AT&T, though, has been busy deploying additional spectrum, both as part of its FirstNet build and to support 5G rollouts. This has seen AT&T increase its amount of deployed spectrum by almost 60%, according to Moffett, which takes “some of the pressure off to respond to New T-Mobile.”

Still, while AT&T may be in a better position on the spectrum front compared to Verizon, it faces the “same competitive dynamics,” Moffett wrote. “For AT&T, the deal is probably a net neutral.”

The quantitative evidence from the stock market seems to agree with the qualitative analysis from the Wall Street research firms. Let’s look at the five-day window of trading from Monday morning to Friday (today). Unsurprisingly, Sprint, T-Mobile, and Dish have reacted very favorably to the news:

Consistent with the Wall Street analysis, Verizon stock remains down 2.5 percent over a five-day window while AT&T has been flat over the same period:

How do you separate beta from alpha in an event study?

Philippon argued that after-hours trading may be more efficient because it is dominated by hedge funds and includes less “noise trading.” In my opinion, the liquidity effect likely outweighs this factor. Also, it’s unclear why we should assume “smart money” is setting the price in the after-hours market but not during regular trading, when hedge funds are still active. Sophisticated professional traders often make easy profits by picking off panicked retail investors who only read the headlines. When you see a wild swing in the markets that moderates over time, the wild swing is probably the noise and the moderation is probably the signal.

And, as Karl Smith noted, since the after-hours market is thin, price moves in individual stocks might reflect changes in the broader stock market (“beta”) more than changes due to new company-specific information (“alpha”). Here are the last five days for e-mini S&P 500 futures, which track the broader market and are traded after hours:

The market trended up on Monday night and was flat on Tuesday. This slightly positive macro environment means we would need to adjust the returns downward for AT&T and Verizon. Of course, this is counter to Philippon’s conjecture that the merger decision would increase their stock prices. But to be clear, these changes are so minuscule in percentage terms that the adjustment wouldn’t make much of a difference in this case.
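The standard way to separate the two is a market-model event study: estimate the stock’s sensitivity to the market (beta) over a pre-event window, then treat the residual on the event day as the company-specific (alpha) component. Here is a minimal sketch; the daily return series are placeholders, not actual AT&T, Verizon, or S&P data.

    # Minimal market-model event-study sketch: estimate beta over a pre-event
    # window, then compute the abnormal (residual) return on the event day.
    # The return series are placeholders, not actual AT&T/Verizon/S&P data.
    import numpy as np

    stock  = np.array([0.004, -0.002, 0.006, -0.001, 0.003, 0.005, -0.003, 0.002, -0.025])
    market = np.array([0.003, -0.001, 0.005,  0.000, 0.002, 0.004, -0.002, 0.001,  0.001])

    est_stock, est_market = stock[:-1], market[:-1]       # estimation window
    beta, alpha = np.polyfit(est_market, est_stock, 1)    # OLS: stock = alpha + beta*market

    expected = alpha + beta * market[-1]    # return predicted by the market move alone
    abnormal = stock[-1] - expected         # company-specific ("alpha") surprise
    print(f"beta = {beta:.2f}, event-day abnormal return = {abnormal:.3%}")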

Lastly, let’s see what we can learn from a similar historical episode in the stock market.

The parallel to the 2016 presidential election

The type of reversal we saw in AT&T and Verizon is not unprecedented. Some commenters said the pattern reminded them of the market reaction to Trump’s election in 2016:

Much like the T-Mobile/Sprint merger news, the “event” in 2016 was not a single moment in time. It began around 9 PM Tuesday night when Trump started to overperform in early state results. Over the course of the next three hours, S&P 500 futures contracts fell about 5 percent — an enormous drop in such a short period of time. If Philippon had tried to estimate the “Trump effect” in the same manner he did the T-Mobile/Sprint case, he would have concluded that a Trump presidency would reduce aggregate future profits by about 5 percent relative to a Clinton presidency.

But, as you can see in the chart above, if we widen the aperture of the event study to include the hours past midnight, the story flips. Markets started to bounce back even before Trump took the stage to make his victory speech. The themes of his speech were widely regarded as reassuring for markets, which further pared losses from earlier in the night. When regular trading hours resumed on Wednesday, the markets decided a Trump presidency would be very good for certain sectors of the economy, particularly finance, energy, biotech, and private prisons. By the end of the day, the stock market finished up about a percentage point from where it closed prior to the election — near all time highs.

Maybe this is more noise than signal?

As a few others pointed out, these relatively small moves in AT&T and Verizon (less than 3 percent in either direction) may just be noise. That’s certainly possible given the magnitude of the changes. Contra Philippon, I think the methodology in question is too weak to rule out the pro-competitive theory of the case, i.e., that the new merged entity would be a stronger competitor to take on industry leaders AT&T and Verizon. We need much more robust and varied evidence before we can call anything “bogus.” Of course, that means this event study is not sufficient to prove the pro-competitive theory of the case, either.

Olivier Blanchard, a former chief economist of the IMF, shared Philippon’s thread on Twitter and added this comment above: “The beauty of the argument. Simple hypothesis, simple test, clear conclusion.”

If only things were so simple.