
[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Geoffrey A. Manne, (President, ICLE; Distinguished Fellow, Northwestern University Center on Law, Business, and Economics); and Dirk Auer, (Senior Fellow of Law & Economics, ICLE)]

Back in 2012, Covidien, a large health care products company and medical device manufacturer, purchased Newport Medical Instruments, a small ventilator developer and manufacturer. (Covidien itself was subsequently purchased by Medtronic in 2015).

Eight years later, in the midst of the coronavirus pandemic, the New York Times has just published an article revisiting the Covidien/Newport transaction, and questioning whether it might have contributed to the current shortage of ventilators.

The article speculates that Covidien’s purchase of Newport, and the subsequent discontinuation of Newport’s “Aura” ventilator — which was then being developed by Newport under a government contract — delayed US government efforts to procure mechanical ventilators until the second half of 2020 — too late to treat the first wave of COVID-19 patients:

And then things suddenly veered off course. A multibillion-dollar maker of medical devices bought the small California company that had been hired to design the new machines. The project ultimately produced zero ventilators.

That failure delayed the development of an affordable ventilator by at least half a decade, depriving hospitals, states and the federal government of the ability to stock up.

* * *

Today, with the coronavirus ravaging America’s health care system, the nation’s emergency-response stockpile is still waiting on its first shipment.

The article has generated considerable interest not so much for what it suggests about government procurement policies or for its relevance to the ventilator shortages associated with the current pandemic, but rather for its purported relevance to ongoing antitrust debates and the arguments put forward by “antitrust populists” and others that merger enforcement in the US is dramatically insufficient. 

Only a single sentence in the article itself points to a possible antitrust story — and it does nothing more than report unsubstantiated speculation from unnamed “government officials” and rival companies: 

Government officials and executives at rival ventilator companies said they suspected that Covidien had acquired Newport to prevent it from building a cheaper product that would undermine Covidien’s profits from its existing ventilator business.

Nevertheless, and right on cue, various antitrust scholars quickly framed the deal as a so-called “killer acquisition” (see also here and here).

Unsurprisingly, politicians were also quick to jump on the bandwagon, with David Cicilline, the powerful chairman of the House Antitrust Subcommittee, weighing in.

And FTC Commissioner Rebecca Kelly Slaughter quickly called for a retrospective review of the deal:

The public reporting on this acquisition raises important questions about the review of this deal. We should absolutely be looking back to figure out what happened.

These “hot takes” raise a crucial issue. The New York Times story opened the door to a welter of hasty conclusions offered to support the ongoing narrative that antitrust enforcement has failed us — in this case quite literally at the cost of human lives. But are any of these claims actually supportable?

Unfortunately, the competitive realities of the mechanical ventilator industry, as well as a more clear-eyed view of what was likely going on with the failed government contract at the heart of the story, simply do not support the “killer acquisition” story.

What is a “killer acquisition”…?

Let’s take a step back. Because monopoly profits are, by definition, higher than joint duopoly profits (all else equal), economists have long argued that incumbents may find it profitable to acquire smaller rivals in order to reduce competition and increase their profits. More specifically, incumbents may be tempted to acquire would-be entrants in order to prevent them from introducing innovations that might hurt the incumbent’s profits.

For this theory to have any purchase, however, a number of conditions must hold. Most importantly, as Colleen Cunningham, Florian Ederer, and Song Ma put it in an influential paper:

“killer acquisitions” can only occur when the entrepreneur’s project overlaps with the acquirer’s existing product…. [W]ithout any product market overlap, the acquirer never has a strictly positive incentive to acquire the entrepreneur… because, without overlap, acquiring the project does not give the acquirer any gains resulting from reduced competition, and the two bargaining entities have exactly the same value for the project.

Moreover, the authors add that:

Successfully developing a new product draws consumer demand and profits away equally from all existing products. An acquiring incumbent is hurt more by such cannibalization when he is a monopolist (i.e., the new product draws demand away only from his own existing product) than when he already faces many other existing competitors (i.e., cannibalization losses are spread over many firms). As a result, as the number of existing competitors increases, the replacement effect decreases and the acquirer’s development decisions become more similar to those of the entrepreneur

Finally, the “killer acquisition” terminology is appropriate only when the incumbent chooses to discontinue its rival’s R&D project:

If incumbents face significant existing competition, acquired projects are not significantly more frequently discontinued than independent projects. Thus, more competition deters incumbents from acquiring and terminating the projects of potential future competitors, which leads to more competition in the future.

…And what isn’t a killer acquisition?

What is left out of this account of killer acquisitions is the age-old possibility that an acquirer purchases a rival precisely because it has superior know-how or a superior governance structure that enables it to realize greater return and more productivity than its target. In the case of a so-called killer acquisition, this means shutting down a negative ROI project and redeploying resources to other projects or other uses — including those that may not have any direct relation to the discontinued project. 

Such “synergistic” mergers are also — like allegedly “killer” mergers — likely to involve acquirers and targets in the same industry and with technological overlap between their R&D projects; it is in precisely these situations that the acquirer is likely to have better knowledge than the target’s shareholders that the target is undervalued because of poor governance rather than exogenous, environmental factors.  

In other words, whether an acquisition is harmful or not — as the epithet “killer” implies it is — depends on whether it is about reducing competition from a rival, on the one hand, or about increasing the acquirer’s competitiveness by putting resources to more productive use, on the other.

As argued below, it is highly unlikely that Covidien’s acquisition of Newport could be classified as a “killer acquisition.” There is thus nothing to suggest that the merger materially impaired competition in the mechanical ventilator market, or that it measurably affected the US’s efforts to fight COVID-19.

The market realities of the ventilator market and its implications for the “killer acquisition” story

1. The mechanical ventilator market is highly competitive

As explained above, “killer acquisitions” are less likely to occur in competitive markets. Yet the mechanical ventilator industry is extremely competitive. 

A number of reports conclude that there is significant competition in the industry. One source cites at least seven large producers. Another report cites eleven large players. And, in the words of another report:

Medical ventilators market competition is intense. 

The conclusion that the mechanical ventilator industry is highly competitive is further supported by the fact that the five largest producers combined reportedly hold only 50% of the market. In other words, available evidence suggests that none of these firms has anything close to a monopoly position. 

This intense competition, along with the small market shares of the merging firms, likely explains why the FTC declined to open an in-depth investigation into Covidien’s acquisition of Newport.

Similarly, following preliminary investigations, neither the FTC nor the European Commission saw the need for an in-depth look at the ventilator market when they reviewed Medtronic’s subsequent acquisition of Covidien (which closed in 2015). Although Medtronic did not produce any mechanical ventilators before the acquisition, authorities (particularly the European Commission) could nevertheless have analyzed that market if Covidien’s presumptive market share was particularly high. The fact that they declined to do so tends to suggest that the ventilator market was relatively unconcentrated.

2. The value of the merger was too small

A second strong reason to believe that Covidien’s purchase of Newport wasn’t a killer acquisition is the acquisition’s relatively modest value of $103 million.

Indeed, if it was clear that Newport was about to revolutionize the ventilator market, then Covidien would likely have had to pay significantly more than $103 million to acquire it. 

As noted above, the crux of the “killer acquisition” theory is that incumbents can induce welfare-reducing acquisitions by offering to acquire their rivals for significantly more than the present value of their rivals’ expected profits. Because an incumbent undertaking a “killer” takeover expects to earn monopoly profits as a result of the transaction, it can offer a substantial premium and still profit from its investment. It is this basic asymmetry that drives the theory.
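A stylized way to see this asymmetry (the notation and the inequality below are ours, offered only to illustrate the logic rather than as a result from any particular paper): let $\pi_M$ denote the incumbent's profit if entry is prevented, $\pi_D$ its profit if the entrant succeeds and the two compete, and $V_E$ the entrant's standalone expected value. The incumbent gains from acquiring and shelving the entrant at price $P$ whenever $\pi_M - P \geq \pi_D$, and the entrant's shareholders accept whenever $P \geq V_E$, so a deal is feasible for any

$$V_E \;\leq\; P \;\leq\; \pi_M - \pi_D.$$

Because eliminating a genuine competitive threat typically implies $\pi_M - \pi_D > V_E$, the incumbent can pay a premium over standalone value and still come out ahead. Conversely, an observed price close to a plausible standalone value suggests the buyer saw little competitive rent worth protecting.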

Indeed, as a recent article by Kevin Bryan and Erik Hovenkamp notes, an acquisition value out of line with current revenues may be an indicator of the significance of a pending acquisition in which enforcers may not actually know the value of the target’s underlying technology: 

[Where] a court may lack the expertise to [assess the commercial significance of acquired technology]…, the transaction value… may provide a reasonable proxy. Intuitively, if the startup is a relatively small company with relatively few sales to its name, then a very high acquisition price may reasonably suggest that the startup technology has significant promise.

The strategy only works, however, if the target firm’s shareholders agree that share value properly reflects only “normal” expected profits, and not that the target is poised to revolutionize its market with a uniquely low-cost or high-quality product. Low acquisition prices relative to market size, therefore, tend to reflect low (or normal) expected profits, and a low perceived likelihood of radical innovations occurring.

We can apply this reasoning to Covidien’s acquisition of Newport: 

  • Precise and publicly available figures concerning the mechanical ventilator market are hard to come by. Nevertheless, one estimate finds that the global ventilator market was worth $2.715 billion in 2012. Another report suggests that the global market was worth $4.30 billion in 2018; still another that it was worth $4.58 billion in 2019.
  • As noted above, Covidien reported to the SEC that it paid $103 million to purchase Newport (a firm that produced only ventilators and apparently had no plans to branch out). 
  • For context, at the time of the acquisition Covidien had annual sales of $11.8 billion overall, and $743 million in sales of its existing “Airways and Ventilation Products.”

If the ventilator market was indeed worth billions of dollars per year, then the comparatively small $103 million paid by Covidien — small even relative to Covidien’s own share of the market — suggests that, at the time of the acquisition, it was unlikely that Newport was poised to revolutionize the market for mechanical ventilators (for instance, by successfully bringing its Aura ventilator to market). 

The New York Times article claimed that Newport’s ventilators would be sold (at least to the US government) for $3,000 — a substantial discount from the reportedly then-going rate of $10,000. If selling ventilators at this price seemed credible at the time, then Covidien — as well as Newport’s shareholders — would have known that Newport was about to achieve tremendous cost savings, enabling it to offer ventilators not only to the US government, but to purchasers around the world, at an irresistibly attractive — and profitable — price.

Ventilators at the time typically went for about $10,000 each, and getting the price down to $3,000 would be tough. But Newport’s executives bet they would be able to make up for any losses by selling the ventilators around the world.

“It would be very prestigious to be recognized as a supplier to the federal government,” said Richard Crawford, who was Newport’s head of research and development at the time. “We thought the international market would be strong, and there is where Newport would have a good profit on the product.”

If this was achievable, Newport thus stood to earn a substantial share of the profits in a multi-billion-dollar industry. 

Of course, it is necessary to discount these numbers by the probability of success: Newport’s ventilator was not yet on the market, and had not yet received FDA approval. Nevertheless, if the Times’ numbers seemed credible at the time, then Covidien would surely have had to offer significantly more than $103 million in order to induce Newport’s shareholders to part with their shares.

Given the low valuation, however, as well as the fact that Newport produced other ventilators — and continues to do so to this day — there is no escaping the fact that everyone involved seemed to view Newport’s Aura ventilator as nothing more than a moonshot with, at best, a low likelihood of success. 

Crucially, this same reasoning explains why it shouldn’t surprise anyone that the project was ultimately discontinued; recourse to a “killer acquisition” theory is hardly necessary.

3. Lessons from Covidien’s ventilator product decisions  

The killer acquisition claims are further weakened by at least four other important pieces of information: 

  1. Covidien initially continued to develop Newport’s Aura ventilator, and it continued to develop and sell Newport’s other ventilators.
  2. There was little overlap between Covidien and Newport’s ventilators — or, at the very least, they were highly differentiated.
  3. Covidien appears to have discontinued production of its own portable ventilator in 2014.
  4. The Newport purchase was part of a billion-dollar series of acquisitions seemingly aimed at expanding Covidien’s in-hospital (i.e., non-portable) device portfolio.

Covidien continued to develop and sell Newport’s ventilators

For a start, while the Aura line was indeed discontinued by Covidien, the timeline is important. The acquisition of Newport by Covidien was announced in March 2012, approved by the FTC in April of the same year, and the deal was closed on May 1, 2012.

However, as the FDA’s 510(k) database makes clear, Newport submitted documents for FDA clearance of the Aura ventilator months after its acquisition by Covidien (June 29, 2012, to be precise). And the Aura received FDA 510(k) clearance on November 9, 2012 — many months after the merger.

It would have made little sense for Covidien to invest significant sums in order to obtain FDA clearance for a project that it planned to discontinue (the FDA routinely requires parties to actively cooperate with it, even after 510(k) applications are submitted). 

Moreover, if Covidien really did plan to discreetly kill off the Aura ventilator, bungling the FDA clearance procedure would have been the perfect cover under which to do so. Yet that is not what it did.

Covidien continued to develop and sell Newport’s other ventilators

Second, and just as importantly, Covidien (and subsequently Medtronic) continued to sell Newport’s other ventilators. The Newport e360 and HT70 are still sold today. Covidien also continued to improve these products: it appears to have introduced an improved version of the Newport HT70 Plus ventilator in 2013.

If eliminating its competitor’s superior ventilators was the only goal of the merger, then why didn’t Covidien also eliminate these two products from its lineup, rather than continue to improve and sell them? 

At least part of the answer, as will be seen below, is that there was almost no overlap between Covidien and Newport’s product lines.

There was little overlap between Covidien’s and Newport’s ventilators

Third — and perhaps the biggest flaw in the killer acquisition story — is that there appears to have been very little overlap between Covidien and Newport’s ventilators. 

This decreases the likelihood that the merger was a killer acquisition. When two products are highly differentiated (or not substitutes at all), sales of the first are less likely to cannibalize sales of the other. As Florian Ederer and his co-authors put it:

Importantly, without any product market overlap, the acquirer never has a strictly positive incentive to acquire the entrepreneur, neither to “Acquire to Kill” nor to “Acquire to Continue.” This is because without overlap, acquiring the project does not give the acquirer any gains resulting from reduced competition, and the two bargaining entities have exactly the same value for the project.

A quick search of the FDA’s 510(k) database reveals that Covidien has three approved lines of ventilators: the Puritan Bennett 980, 840, and 540 (apparently essentially the same as the PB560, the plans to which Medtronic recently made freely available in order to facilitate production during the current crisis). The same database shows that these ventilators differ markedly from Newport’s ventilators (particularly the Aura).

In particular, Covidien manufactured primarily traditional, invasive ICU ventilators (except for the PB540, which is potentially a substitute for the Newport HT70), while Newport made much-more-portable ventilators, suitable for home use (notably the Aura, HT50 and HT70 lines). 

Under normal circumstances, critical care and portable ventilators are not substitutes. As the WHO website explains, portable ventilators are:

[D]esigned to provide support to patients who do not require complex critical care ventilators.

A quick glance at Medtronic’s website neatly illustrates the stark differences between these two types of devices.

This is not to say that these devices do not have similar functionalities, or that they cannot become substitutes in the midst of a coronavirus pandemic. However, in normal times (as was the case when Covidien acquired Newport), hospitals likely did not view these devices as substitutes.

The conclusion that Covidien’s and Newport’s ventilators were not substitutes finds further support in documents and statements released at the time of the merger. For instance, Covidien’s CEO explained that:

This acquisition is consistent with Covidien’s strategy to expand into adjacencies and invest in product categories where it can develop a global competitive advantage.

And that:

Newport’s products and technology complement our current portfolio of respiratory solutions and will broaden our ventilation platform for patients around the world, particularly in emerging markets.

In short, the fact that almost all of Covidien and Newport’s products were not substitutes further undermines the killer acquisition story. It also tends to vindicate the FTC’s decision to rapidly terminate its investigation of the merger.

Covidien appears to have discontinued production of its own portable ventilator in 2014

Perhaps most tellingly: It appears that Covidien discontinued production of its own competing, portable ventilator, the Puritan Bennett 560, in 2014.

The product is listed in the company’s 2011, 2012 and 2013 annual reports:

Airway and Ventilation Products — airway, ventilator, breathing systems and inhalation therapy products. Key products include: the Puritan Bennett™ 840 line of ventilators; the Puritan Bennett™ 520 and 560 portable ventilator….

(The PB540 was launched in 2009; the updated PB560 in 2010. The PB520 was the EU version of the device, launched in 2011).

But in 2014, the PB560 was no longer listed among the company’s ventilator products:  

Airway & Ventilation, which primarily includes sales of airway, ventilator and inhalation therapy products and breathing systems.

Key airway & ventilation products include: the Puritan Bennett™ 840 and 980 ventilators, the Newport™ e360 and HT70 ventilators….

Nor — despite its March 31 and April 1 “open sourcing” of the specifications and software necessary to enable others to produce the PB560 — did Medtronic appear to have restarted production, and the company did not mention the device in its March 18 press release announcing its own, stepped-up ventilator production plans.

Surely if Covidien had intended to capture the portable ventilator market by killing off its competition it would have continued to actually sell its own, competing device. The fact that the only portable ventilators produced by Covidien by 2014 were those it acquired in the Newport deal strongly suggests that its objective in that deal was the acquisition and deployment of Newport’s viable and profitable technologies — not the abandonment of them. This, in turn, suggests that the Aura was not a viable and profitable technology.

(Admittedly we are unable to determine conclusively that either Covidien or Medtronic stopped producing the PB520/540/560 series of ventilators. But our research seems to indicate strongly that this is indeed the case).

Putting the Newport deal in context

Finally, although not dispositive, it seems important to put the Newport purchase into context. In the same year as it purchased Newport, Covidien paid more than a billion dollars to acquire five other companies, as well — all of them primarily producing in-hospital medical devices. 

That 2012 spending spree came on the heels of a series of previous medical device company acquisitions, apparently totaling some four billion dollars. Although not exclusively so, the acquisitions undertaken by Covidien seem to have been primarily targeted at operating room and in-hospital monitoring and treatment — making the putative focus on cornering the portable (home and emergency) ventilator market an extremely unlikely one. 

When Covidien was itself purchased by Medtronic, the deal easily cleared antitrust review because of the lack of overlap between the two companies’ products, with Covidien focusing predominantly on in-hospital “diagnostic, surgical, and critical care” products and Medtronic on post-acute care.

Newport misjudged the costs associated with its Aura project; Covidien was left to pick up the pieces

So why was the Aura ventilator discontinued?

Although it is almost impossible to know what motivated Covidien’s executives, the Aura ventilator project clearly suffered from many problems. 

The Aura project was intended to meet the requirements of the US government’s BARDA program (under the auspices of the U.S. Department of Health and Human Services’ Biomedical Advanced Research and Development Authority). In short, the program sought to create a stockpile of next generation ventilators for emergency situations — including, notably, pandemics. The ventilator would thus have to be designed for events where

mass casualties may be expected, and when shortages of experienced health care providers with respiratory support training, and shortages of ventilators and accessory components may be expected.

The Aura ventilator would thus sit somewhere between Newport’s two other ventilators: the e360 which could be used in pediatric care (for newborns smaller than 5kg) but was not intended for home care use (or the extreme scenarios envisioned by the US government); and the more portable HT70 which could be used in home care environments, but not for newborns. 

Unfortunately, the Aura failed to achieve this goal. The FDA’s 510(k) clearance decision clearly states that the Aura was not intended for newborns:

The AURA family of ventilators is applicable for infant, pediatric and adult patients greater than or equal to 5 kg (11 lbs.).

A press release issued by Medtronic confirms that

the company was unable to secure FDA approval for use in neonatal populations — a contract requirement.

And the US Government RFP confirms that this was indeed an important requirement:

The device must be able to provide the same standard of performance as current FDA pre-market cleared portable ventilators and shall have the following additional characteristics or features: 

Flexibility to accommodate a wide patient population range from neonate to adult.

Newport also seems to have been unable to deliver the ventilator at the low price it had initially forecasted — a common problem for small companies and/or companies that undertake large R&D programs. It also struggled to complete the project within the agreed-upon deadlines. As the Medtronic press release explains:

Covidien learned that Newport’s work on the ventilator design for the Government had significant gaps between what it had promised the Government and what it could deliver both in terms of being able to achieve the cost of production specified in the contract and product features and performance. Covidien management questioned whether Newport’s ability to complete the project as agreed to in the contract was realistic.

As Jason Crawford, an engineer and tech industry commentator, put it:

Projects fail all the time. “Supplier risk” should be a standard checkbox on anyone’s contingency planning efforts. This is even more so when you deliberately push the price down to 30% of the market rate. Newport did not even necessarily expect to be profitable on the contract.

The above is mostly Covidien’s “side” of the story, of course. But other pieces of evidence lend some credibility to these claims:

  • Newport agreed to deliver its Aura ventilator at a per unit cost of less than $3,000. But, even today, this seems extremely ambitious. For instance, the WHO has estimated that portable ventilators cost between $3,300 and $13,500. If Newport could profitably sell the Aura at such a low price, then there was little reason to discontinue it (readers will recall that the development of the ventilator was mostly complete when Covidien put a halt to the project).
  • Covidien/Newport is not the only firm to have struggled to offer suitable ventilators at such a low price. Philips (which took Newport’s place after the government contract fell through) also failed to achieve this low price. Rather than the $2,000 price sought in the initial RFP, Philips ultimately agreed to produce the ventilators for $3,280. But it has not yet been able to produce a single ventilator under the government contract at that price.
  • Covidien has repeatedly been forced to recall some of its other ventilators (here, here and here) — including the Newport HT70. And rival manufacturers have also faced these types of issues (for example, here and here). 

Accordingly, Covidien may well have preferred to cut its losses on the already problem-prone Aura project, before similar issues rendered it even more costly. 

In short, while it is impossible to prove that these development issues caused Covidien to pull the plug on the Aura project, it is certainly plausible that they did. This further supports the hypothesis that Covidien’s acquisition of Newport was not a killer acquisition. 

Ending the Aura project might have been an efficient outcome

As suggested above, moreover, it is entirely possible that Covidien was better able to recognize the poor prospects of Newport’s Aura project, and better organized to make the requisite decision to abandon it.

A small company like Newport faces greater difficulties abandoning entrepreneurial projects because doing so can impair a privately held firm’s ability to raise funds for subsequent projects.

Moreover, the relatively large share of revenue and reputation that Newport — worth $103 million in 2012, versus Covidien’s $11.8 billion — would have realized from fulfilling a substantial US government contract could well have induced it to overestimate the project’s viability and to undertake excessive risk in the (vain) hope of bringing the project to fruition.

While there is a tendency among antitrust scholars, enforcers, and practitioners to look for (and find…) antitrust-related rationales for mergers and other corporate conduct, it remains the case that most corporate control transactions (such as mergers) are driven by the acquiring firm’s expectation that it can manage more efficiently. As Henry G. Manne put it in his seminal article, Mergers and the Market for Corporate Control (1965): 

Since, in a world of uncertainty, profitable transactions will be entered into more often by those whose information is relatively more reliable, it should not surprise us that mergers within the same industry have been a principal form of changing corporate control. Reliable information is often available to suppliers and customers as well. Thus many vertical mergers may be of the control takeover variety rather than of the “foreclosure of competitors” or scale-economies type.

Of course, the same information that renders an acquiring firm in the same line of business knowledgeable enough to operate a target more efficiently could also enable it to effect a “killer acquisition” strategy. But the important point is that a takeover by a firm with a competing product line, after which the purchased company’s product line is abandoned, is at least as consistent with a “market for corporate control” story as with a “killer acquisition” story.

Indeed, as Florian Ederer himself noted with respect to the Covidien/Newport merger, 

“Killer acquisitions” can have a nefarious image, but killing off a rival’s product was probably not the main purpose of the transaction, Ederer said. He raised the possibility that Covidien decided to kill Newport’s innovation upon realising that the development of the devices would be expensive and unlikely to result in profits.

Concluding remarks

In conclusion, Covidien’s acquisition of Newport offers a cautionary tale about reckless journalism, “blackboard economics,” and government failure.

Reckless journalism because the New York Times clearly failed to do the appropriate due diligence for its story. Its journalists notably missed (or deliberately failed to mention) a number of critical pieces of information — such as the hugely important fact that most of Covidien’s and Newport’s products did not overlap, or the fact that there were numerous competitors in the highly competitive mechanical ventilator industry. 

And yet, that did not stop the authors from publishing their extremely alarming story, effectively suggesting that a small medical device merger materially contributed to the loss of many American lives.

The story also falls prey to what Ronald Coase called “blackboard economics”:

What is studied is a system which lives in the minds of economists but not on earth. 

Numerous commentators rushed to fit the story to their preconceived narratives, failing to undertake even a rudimentary examination of the underlying market conditions before they voiced their recriminations. 

The only thing that Covidien and Newport’s merger ostensibly had in common with the killer acquisition theory was the fact that a large firm purchased a small rival, and that one of the small firm’s products was discontinued. But this does not even begin to meet the stringent conditions that must be fulfilled for the theory to hold water. Unfortunately, critics appear to have completely ignored all contradicting evidence. 

Finally, what the New York Times piece does offer is a chilling tale of government failure.

The inception of the US government’s BARDA program dates back to 2008 — twelve years before the COVID-19 pandemic hit the US. 

The collapse of the Aura project is no excuse for the fact that, more than six years after the Newport contract fell through, the US government still has not obtained the necessary ventilators. Questions should also be raised about the government’s decision to effectively put all of its eggs in the same basket — twice. If anything, it is thus government failure that was the real culprit. 

And yet the New York Times piece and the critics shouting “killer acquisition!” effectively give the US government’s abject failure here a free pass — all in the service of pursuing their preferred “killer story.”

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Daniel Takash, (Regulatory policy fellow at the Niskanen Center. He is the manager of Niskanen’s Captured Economy Project, https://capturedeconomy.com, and you can follow him @danieltakash or @capturedecon).]

The pharmaceutical industry should be one of the most well-regarded industries in America. It helps bring drugs to market that improve, and often save, people’s lives. Yet last year a Gallup poll found that of 25 major industries, the pharmaceutical industry was the most unpopular, trailing behind fossil fuels, lawyers, and even the federal government. The opioid crisis dominated the headlines for the past few years, but the high price of drugs is a top-of-mind issue that generates significant animosity toward the pharmaceutical industry. The effects of high drug prices are felt not just at every trip to the pharmacy, but also by those who are priced out of life-saving treatments. Many Americans simply can’t afford what their doctors prescribe. The pharmaceutical industry helps save lives, but it has also been credibly accused of anticompetitive behavior, with accusations coming not just from generics but even from other brand manufacturers.

These extraordinary times are an opportunity to right the ship. AbbVie, roundly criticized for building a patent thicket around Humira, has donated its patent rights to a promising COVID-19 treatment. This is to be celebrated, yet pharma’s bad reputation is defined by its worst behaviors and the frequent apologetics for overusing the patent system. Hopefully corporate social responsibility will prevail, and such abuses will cease in the future.

The most effective long-term treatment for COVID-19 will be a vaccine. We also need drugs to treat those afflicted with COVID-19 to improve recovery and lower mortality rates for those that get sick before a vaccine is developed and widely available. This requires rapid drug development through effective public-private partnerships to bring these treatments to market.

Without a doubt, these solutions will come from the pharmaceutical industry. Increased funding for the National Institutes of Health, nonprofit research institutions, and private pharmaceutical researchers is likely needed to help accelerate the development of these treatments. But we must be careful to ensure that whatever upfront public support is given to these entities results in a fair trade-off for Americans. The U.S. taxpayer is one of the largest investors in early- to mid-stage drug research, and we need to make sure that we are a good investor.

Basic research into the costs of drug development, especially when taxpayer subsidies are involved, is a necessary start. This is a feature of the We PAID Act, introduced by Senators Rick Scott (R-FL) and Chris Van Hollen (D-MD), which requires the Department of Health and Human Services to enter into a contract with the National Academy of Medicine to determine a reasonable price for drugs developed with taxpayer support. This reasonable price would include a suitable reward to the private companies that did the important work of finishing drug development and gaining FDA approval. This is important, as setting a price too low would reduce investments in indispensable research and development. But this must be balanced against the risk of using patents to charge prices above and beyond those necessary to finance research, development, and commercialization.

A little sunshine can go a long way. We should trust that pharmaceutical companies will develop a vaccine and treatments for coronavirus, but we must also verify that these are affordable and accessible through public scrutiny. Take the drug manufacturer Gilead Sciences’ about-face on its application for orphan drug status for the possible COVID-19 treatment remdesivir. Remdesivir, developed in part with public funds and already covered by three Gilead patents, technically satisfied the definition of “orphan drug,” as COVID-19 (at the time of the application) afflicted fewer than 200,000 patients. In a pandemic that could infect tens of millions of Americans, this designation is obviously absurd, and public outcry led Gilead to ask the FDA to rescind the application. Gilead claimed it sought the designation to speed up FDA review, and that might be true. Regardless, public attention meant that the FDA would give remdesivir expedited review without Gilead needing a designation that looks unfair to the American people.

The success of this isolated effort is absolutely worth celebrating. But we need more research to better comprehend the pharmaceutical industry’s needs, and this is just what the study provisions of We PAID would provide.

There is indeed some existing research on this front. For example, the Pharmaceutical Research and Manufacturers of America (PhRMA) estimates it costs an average of $2.6 billion to bring a new drug to market, while research published in the Journal of the American Medical Association finds the average to be closer to $1.3 billion, with a median development cost of $985 million.

But a thorough analysis provided under We PAID is the best way for us to fully understand just how much support the pharmaceutical industry needs, and just how successful it has been thus far. The NIH, one of the major sources of publicly funded research, invests about $41.7 billion annually in medical research. We need to better understand how these efforts link up, and how the torch is passed from public to private efforts.

Patents are essential to the functioning of the pharmaceutical industry, incentivizing drug development through temporary periods of exclusivity. But it is equally essential, in light of the considerable investment already made by taxpayers in drug research and development, to make sure we understand the effects of these incentives and calibrate them to balance the interests of patients and pharmaceutical companies. Most drugs require research funding from both public and private sources as well as patent protection. And the U.S. is one of the biggest investors in drug research worldwide (even compared to drug companies), yet Americans pay the highest prices in the world. Are these prices justified, and can we improve patent policy to bring these costs down without harming innovation?

Beyond a thorough analysis of drug pricing, what makes We PAID one of the most promising solutions to the problem of excessively high drug prices is the set of accountability mechanisms it includes. The bill, if made law, would establish a Drug Access and Affordability Committee. The Committee would use the methodology from the joint HHS and NAM study to determine a reasonable price for affected drugs (around 20 percent of drugs currently on the market, if the bill were law today). Any companies that price drugs granted exclusivity by a patent above the reasonable price would lose their exclusivity.

This may seem like a price control at first blush, but it isn’t, for two reasons. First, this only applies to drugs developed with taxpayer dollars, which any COVID-19 treatments or cures almost certainly would be, considering the $785 million spent by the NIH since 2002 researching coronaviruses. It’s an accountability mechanism that would ensure the government is getting its money’s worth. This tool is akin to ensuring that a government contractor is not charging more than would be reasonable, lest it lose its contract.

Second, it is even less stringent than pulling a contract with a private firm overcharging the government for the services provided. Why? Losing a patent does not mean losing the ability to make a drug, or any other patented invention for that matter. This basic fact is often lost in the patent debate, but it cannot be stressed enough.

If patents functioned as licenses, then every patent expiration would mean another product going off the market. In reality, the end of a patent simply means that any other firm can compete using the previously patented design. Even if a firm violated the price regulations included in the bill and lost its patent, it could continue manufacturing the drug. And so could any other firm, bringing down prices for all consumers by opening up market competition.

The We PAID Act could be a dramatic change for the drug industry, and because of that many in Congress may want to first debate the particulars of the bill. This is fine, assuming this promising legislation isn’t watered down beyond recognition. But any objections to the Drug Access and Affordability Committee and reasonable pricing regulations aren’t an excuse not to, at a bare minimum, pass the study included in the bill as part of future coronavirus packages, if not sooner. It is an inexpensive way to get good information in a single, reputable source that would allow us to shape good policy.

Good information is needed for good policy. When the government lays the groundwork for future innovations by financing research and development, it can be compared to a venture capitalist providing the financing necessary for an innovative product or service. But just like in the private sector, the government should know what it’s getting for its (read: taxpayers’) money and make recipients of such funding accountable to investors.

The COVID-19 outbreak will be the most pressing issue for the foreseeable future, but determining how pharmaceuticals developed with public research are priced is necessary in good times and bad. The final prices for these important drugs might be fair, but the public will never know without a trusted source examining this information. Trust, but verify. The pharmaceutical industry’s efforts in fighting the COVID-19 pandemic might be the first step to improving Americans’ relationship with the industry. But we need good information to make that happen. Americans need to know when they are being treated fairly, and that policymakers are able to protect them when they are treated unfairly. The government needs to become a better-informed investor, and that won’t happen without something like the We PAID Act.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Tim Brennan, (Professor, Economics & Public Policy, University of Maryland; former FCC; former FTC).]

Thinking about how to think about the coronavirus situation, I keep coming back to three economic ideas that seem distinct but end up being related. First, a back-of-the-envelope calculation suggests that shutting down the economy for a while to reduce the spread of Covid-19 is worth it. This leads to my second point: political viability, if not simple fairness, dictates that the winners compensate the losers. The extent of both of these leads to my main point: understanding why we can’t just “get the prices right” and let the market take care of it. Insisting that the market works in this situation could undercut the very strong arguments for why we should defer to markets in the vast majority of circumstances.

Is taking action worth it?

The first question is whether shutting down the economy to reduce the spread of Covid-19 is a good bet. Being an economist, I turn to benefit-cost analysis (BCA). All I can offer here is a back-of-the-envelope calculation, which may be an insult to envelopes. (This paper has a more serious calculation with qualitatively similar findings.) With all caveats recognized, the willingness to pay of an average person in the US for social distancing and closure policies, WTP, is

        WTP = X% times Y% times VSL,

where X% is the fraction of the population that might be seriously affected, Y% is the reduction in the likelihood of death for this population from these policies, and VSL is the “value of statistical life” used in BCA calculations, in the ballpark of $9.5M.

For X%, take the percentage of the population over 65 (a demographic including me). This is around 16%. I’m not an epidemiologist, so for Y%, the reduced likelihood of death (either from reduced transmission or reduced hospital overload), I can only speculate. Say it’s 1%, which naively seems pretty small. Even with that, the average willingness to pay would be

        WTP = 16% times 1% times $9.5M = $15,200.

Multiplying that by a US population of roughly 330M gives a total national WTP of just over $5 trillion, or about 23% of GDP. Using conventional measures, this looks like a good trade in an aggregate benefit-cost sense, even leaving out willingness to pay to reduce the likelihood of feeling sick and the benefits to those younger than 65. Of course, among the caveats is not just whether to impose distancing and closures, but how long to have them (number of weeks), how severe they should be (gathering size limits, coverage of commercial establishments), and where they should be imposed (closing schools, colleges).  
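For readers who want to check the arithmetic, here is a minimal Python sketch of the same calculation. The inputs are the illustrative values used above; the GDP denominator (roughly $21.4 trillion) is our own assumption, included only to reproduce the "about 23% of GDP" comparison.

```python
# Back-of-the-envelope willingness-to-pay (WTP) calculation from the post.
# All inputs are illustrative assumptions, not epidemiological estimates.

VSL = 9.5e6          # value of a statistical life used in BCA (~$9.5M)
x_affected = 0.16    # share of population seriously at risk (over-65 share)
y_reduction = 0.01   # assumed reduction in likelihood of death from the policies
population = 330e6   # approximate US population
gdp = 21.4e12        # assumed US GDP, used only for the share-of-GDP comparison

wtp_per_person = x_affected * y_reduction * VSL   # = $15,200
wtp_national = wtp_per_person * population        # ~ $5.0 trillion

print(f"WTP per person: ${wtp_per_person:,.0f}")
print(f"National WTP: ${wtp_national / 1e12:.2f} trillion "
      f"({wtp_national / gdp:.0%} of GDP)")
```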

Actual, not just hypothetical, compensation

The justification for using BCA is that the winners could compensate the losers. In the coronavirus setting, the equity considerations are profound. Especially when I remember that GDP is not a measure of consumer surplus, I ask myself how many months of the disruption (and not just lost wages) from unemployment should low-income waiters, cab drivers, hotel cleaners, and the like bear to reduce my over-65 likelihood of dying. 

Consequently, an important component of this policy, both to respect equity and quite possibly to obtain public acceptance, is that the losers be compensated. In that respect, the justification for packages such as the proposal working (as I write) through Congress is not stimulus — after all, it’s harder to spend money these days — as much as compensating those who’ve lost jobs as a result of this policy. Stimulus can come when the economy is ready to be jump-started.

Markets don’t always work, perhaps like now 

This brings me to a final point—why is this a public policy matter? My answer to almost any policy question is the glib “just get the prices right and the market will take care of it.” That doesn’t seem all that popular now. Part of that is the politics of fairness: Should the wealthy get the ventilators? Should hoarding of hand sanitizer be rewarded? But much of it may be a useful reminder that markets do not work seamlessly and instantaneously, and may not be the best allocation mechanism in critical times.

That markets are not always best should be a familiar theme to TOTM readers. The cost of using markets is the centerpiece of Ronald Coase’s 1937 “The Nature of the Firm” and of his 1960 “The Problem of Social Cost” justification for allocation through the courts. Many of us, including me on TOTM, have invoked these arguments to argue against public interventions in the structure of firms, particularly antitrust actions regarding vertical integration. Another common theme is that the common law tends toward efficiency because of the market-like evolutionary processes in property, tort, and contract case law.

This perspective is a useful reminder that the benefits of markets should always be “compared to what?” In one familiar case, the benefits of markets are clear when compared to the snail’s pace, limited information, and political manipulability of administrative price setting. But when one is talking about national emergencies and the inelastic demands, distributional consequences, and the lack of time for the price mechanism to work its wonders, one can understand and justify the use of the plethora of mandates currently imposed or contemplated. 

The common law also appears not to be a good alternative. One can imagine the litigation nightmare if everyone who got the virus attempted to identify and sue some defendant for damages. A similar nightmare awaits if courts were tasked with determining how the risk of a pandemic would have been allocated were contracts ideal.

Much of this may be belaboring the obvious. My concern is that if those of us who appreciate the virtues of markets exaggerate their applicability, those skeptical of markets may use this episode to say that markets inherently fail and more of the economy should be publicly administered. Better to rely on facts rather than ideology, and to regard the current situation as the awful but justifiable exception that proves the general rule.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Geoffrey A. Manne, (President, ICLE; Distinguished Fellow, Northwestern University Center on Law, Business, and Economics).]

There has been much (admittedly important) discussion of the economic woes of mass quarantine to thwart the spread and “flatten the curve” of the virus and its health burdens — as well as some extremely interesting discussion of the long-term health woes of quarantine and the resulting economic downturn: see, e.g., previous work by Christopher Ruhm suggesting mortality rates may improve during economic downturns, and this thread on how that might play out differently in the current health crisis.

But there is perhaps insufficient attention being paid to the more immediate problem of medical resource scarcity to treat large, localized populations of acutely sick people — something that will remain a problem for some time in places like New York, no matter how successful we are at flattening the curve. 

Yet the fact that we may have failed to prepare adequately for the current emergency does not mean that we can’t improve our ability to respond to the current emergency and build up our ability to respond to subsequent emergencies — both in terms of future, localized outbreaks of COVID-19, as well as for other medical emergencies more broadly.

In what follows I lay out the outlines of a proposal for an OPTN (Organ Procurement and Transplantation Network) analogue for allocating emergency medical resources. In order to make the idea more concrete (and because no doubt there is a limit to the types of medical resources for which such a program would be useful or necessary), let’s call it the VPAN — Ventilator Procurement and Allocation Network.

As quickly as possible in order to address the current crisis — and definitely with enough speed to address the next crisis — we should develop a program to collect relevant data and enable deployment of medical resources where they are most needed, using such data, wherever possible, to enable deployment before shortages become the enormous problem they are today.

Data and information are important tools for mitigating emergencies

Hal’s post, especially in combination with Julian’s, offers a really useful suggestion for using modern information technology to help mitigate one of the biggest problems of the current crisis: The ability to return to economic activity (and a semblance of normalcy) as quickly as possible.

What I like most about his idea (and, again, Julian’s) is its incremental approach: We don’t have to wait until it’s safe for everyone to come outside in order for some people to do so. And, properly collected, assessed, and deployed, information is a key part of making that possible for more and more people every day.

Here I want to build on Hal’s idea to suggest another — perhaps even more immediately crucial — use of data to alleviate the COVID-19 crisis: The allocation of scarce medical resources.

In the current crisis, the “what” of this data is apparent: it is the testing data described by Julian in his post, and implemented in digital form by Hal in his. Thus, whereas Hal’s proposal contemplates using this data solely to allow proprietors (public transportation, restaurants, etc.) to decide whom to admit, my proposal contemplates something more expansive: the provision of Hal’s test-verification vendors’ data to a centralized database in order to use it to assess current medical resource needs and to predict future needs.

The apparent ventilator availability crisis

As I have learned at great length from a friend whose spouse is an ICU doctor on the front lines, the current ventilator scarcity in New York City is worrisome (from a personal email, edited slightly for clarity):

When doctors talk about overwhelming a medical system, and talk about making life/death decisions, often they are talking about ventilators. A ventilator costs somewhere between $25K to $50K. Not cheap, but not crazy expensive. Most of the time these go unused, so hospitals have not stocked up on them, even in first-rate medical systems. Certainly not in the US, where equipment has to get used or the hospital does not get reimbursed for the purchase.

With a bad case of this virus you can put somebody — the sickest of the sickest — on one of those for three days and many of them don’t die. That frames a brutal capacity issue in a local area. And that is what has happened in Italy. They did not have enough ventilators in specific cities where the cases spiked. The mortality rates were much higher solely due to lack of these machines. Doctors had to choose who got on the machine and who did not. When you read these stories about a choice of life and death, that could be one reason for it.

Now the brutal part: This is what NYC might face soon. Faster than expected, by the way. Maybe they will ship patients to hospitals in other parts of NY state, and in NJ and CT. Maybe they can send them to the V.A. hospitals. Those are the options for how they hope to avoid this particular capacity issue. Maybe they will flatten the curve just enough with all the social distancing. Hard to know just now. But right now the doctors are pretty scared, and they are planning for the worst.

A recent PBS Report describes the current ventilator situation in the US:

A 2018 analysis from the Johns Hopkins University Center for Health Security estimated we have around 160,000 ventilators in the U.S. If the “worst-case scenario” were to come to pass in the U.S., “there might not be” enough ventilators, Dr. Anthony Fauci, director of the National Institute of Allergy and Infectious Diseases, told CNN on March 15.

“If you don’t have enough ventilators, that means [obviously] that people who need it will not be able to get it,” Fauci said. He stressed that it was most important to mitigate the virus’ spread before it could overwhelm American health infrastructure.

Reports say that the American Hospital Association believes almost 1 million COVID-19 patients in the country will require a ventilator. Not every patient will require ventilation at the same time, but the numbers are still concerning. Dr. Daniel Horn, a physician at Massachusetts General Hospital in Boston, warned in a March 22 editorial in The New York Times that “There simply will not be enough of these machines, especially in major cities.”

The recent report of 9,000 COVID-19-related deaths in Italy brings the ventilator scarcity crisis into stark relief: There is little doubt that a substantial number of these deaths stem from the unavailability of key medical resources, including, most importantly, ventilators.  

Medical resource scarcity in the current crisis is a drastic problem. And without significant efforts to ameliorate it, it is likely to get worse before it gets better. 

Using data to allocate scarce resources: The basic outlines of a proposed “Ventilator Procurement and Allocation Network”

But that doesn’t mean that the scarce resources we do have can’t be better allocated. As the PBS story quoted above notes, there are some 160,000 ventilators in the US. While that may not be enough in the aggregate, it’s considerably more than are currently needed in, say, New York City — and a great number of them are surely not currently being used, nor likely immediately to need to be used. 

The basic outline of the idea for redistributing these resources is fairly simple: 

  1. First, register all of the US’s existing ventilators in a centralized database. 
  2. Second (using a system like the one Hal describes), collect and update in real time the relevant test results, contact tracing, demographic, and other epidemiological data and input it into a database.
  3. Third, analyze this data using one or more compartmental models (or more targeted, virus-specific models) — (NB: I am the furthest thing from an epidemiologist, so I make no claims about how best to do this; the link above, e.g., is merely meant to be illustrative and not a recommendation) — to predict the demand for ventilators at various geographic levels, ranging from specific hospitals to counties or states. In much the same way, allocation of organs in the OPTN is based on a set of “allocation calculators” (which in turn are intended to implement the “Final Rule” adopted by HHS to govern transplant organ allocation decisions).   
  4. Fourth, ask facilities in low-expected-demand areas to send their unused (or excess above the level required to address “normal” demand) ventilators to those in high-expected-demand areas, with the expectation that they will be consistently reallocated across all hospitals and emergency care facilities according to the agreed-upon criteria. Of course, the allocation “algorithm” would be more complicated than this (as is the HHS Final Rule for organ allocation). But in principle this would be the primary basis for allocation. 
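
To make steps 3 and 4 of this outline concrete, here is a minimal sketch in Python. It uses a toy SIR-style projection of peak ventilator demand by region and then ships surplus machines toward projected shortfalls. Every region name, inventory figure, and parameter below is hypothetical; a real system would rely on vetted epidemiological models and on the agreed-upon allocation criteria rather than this naive greedy pass.

```python
# Toy sketch only: project regional ventilator demand with a simple SIR model,
# then reallocate surplus machines to regions with projected shortfalls.
# All regions, inventories, and parameters are hypothetical.

def sir_peak_infected(population, beta, gamma, initial_infected, days=180):
    """Run a basic SIR model in daily steps and return the peak number infected."""
    s, i, r = population - initial_infected, float(initial_infected), 0.0
    peak = i
    for _ in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        peak = max(peak, i)
    return peak

# Hypothetical regions: population and current ventilator inventory.
regions = {
    "Region A": {"pop": 8_000_000, "vents": 3_000},
    "Region B": {"pop": 1_000_000, "vents": 900},
    "Region C": {"pop": 500_000, "vents": 400},
}

VENT_RATE = 0.002  # assumed share of peak concurrent infections needing a ventilator

surpluses, deficits = {}, {}
for name, region in regions.items():
    peak = sir_peak_infected(region["pop"], beta=0.25, gamma=0.1, initial_infected=100)
    gap = region["vents"] - VENT_RATE * peak
    if gap >= 0:
        surpluses[name] = gap
    else:
        deficits[name] = gap

# Naive greedy reallocation: ship surplus machines toward the largest shortfalls.
for needy in sorted(deficits, key=deficits.get):
    for donor in sorted(surpluses, key=surpluses.get, reverse=True):
        if deficits[needy] >= 0 or surpluses[donor] <= 0:
            continue
        moved = min(surpluses[donor], -deficits[needy])
        surpluses[donor] -= moved
        deficits[needy] += moved
        print(f"Ship {moved:.0f} ventilators from {donor} to {needy}")
```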

Not surprisingly, some guidelines for the allocation of ventilators in such emergencies already exist — like New York’s Ventilator Allocation Guidelines for triaging ventilators during an influenza pandemic. But such guidelines address the protocols for each facility to use in determining how to allocate its own scarce resources; they do not contemplate the ability to alleviate shortages in the first place by redistributing ventilators across facilities (or cities, states, etc.).

I believe that such a system — like the OPTN — could largely work on a voluntary basis. Of course, I’m quick to point out that the OPTN is a function of a massive involuntary and distortionary constraint: the illegality of organ sales. But I suspect that a crisis like the one we’re currently facing is enough to engender much the same sort of shortage (as if such a constraint were in place with respect to the use of ventilators), and thus that a similar system would be similarly useful. If not, of course, it’s possible that the government could, in emergency situations, actually commandeer privately-owned ventilators in order to effectuate the system. I leave for another day the consideration of the merits and defects of such a regime.

Of course, it need not rely on voluntary participation. There could be any number of feasible means of inducing hospitals that have unused ventilators to put their surpluses into the allocation network, presumably involving some sort of cash or other compensation. Or perhaps, if and when such a system were expanded to include other medical resources, it might involve moving donor hospitals up the queue for some other scarce resources they need that don’t face a current crisis. Surely there must be equipment that a New York City hospital has in relative surplus that a small town hospital covets.

But the key point is this: It doesn’t make sense to produce and purchase enough ventilators so that every hospital in the country can simultaneously address extremely rare peak demands. Doing so would be extraordinarily — and almost always needlessly — expensive. And emergency preparedness is never about ensuring that there are no shortages in the worst-case scenario; it’s about making a minimax calculation (as odious as those are) — i.e., minimizing the maximal cost/risk, not mitigating risk entirely. (For a literature review of emergency logistics in the context of large-scale disasters, see, e.g., here)

Nor does it make sense — as a policy matter — to allocate the new ventilators that will be produced in response to current demand solely on the basis of current demand. The epidemiological externalities of the current pandemic are substantial, and there is little reason to think that currently over-taxed emergency facilities — or even those preparing for their own expected demand — will make procurement decisions that reflect the optimal national (let alone global) allocation of such resources. A system like the one I outline here would effectively convert private, constrained decisions into ones that serve the broader demands of optimal allocation of scarce resources in the face of epidemiological externalities.

Indeed — and importantly — such a program allows the government to supplement existing and future public and private procurement decisions to ensure an overall optimal level of supply (and, of course, government-owned ventilators — 10,000 of which already exist in the Strategic National Stockpile — would similarly be put into the registry and deployed using the same criteria). Meanwhile, it would allow private facilities to confront emergency scenarios like the current one with far more resources than it would ever make sense for any given facility to have on hand in normal times.

Some caveats

There are, as always, caveats. First, such a program relies on the continued, effective functioning of transportation networks. If any given emergency were to disrupt these — and surely some would — the program would not necessarily function as planned. Of course, some of this can be mitigated by caching emergency equipment in key locations, and, over the course of an emergency, regularly redistributing those caches to facilitate expected deployments as the relevant data comes in. But, to be sure, at the end of the day such a program depends on the ability to transport ventilators.

In addition, there will always be the risk that emergency needs swamp even the aggregate available resources simultaneously (as may yet occur during the current crisis). But at the limit there is nothing that can be done about such an eventuality: Short of having enough ventilators on hand so that every needy person in the country can use one essentially simultaneously, there will always be the possibility that some level of demand will outpace our resources. But even in such a situation — where allocation of resources is collectively guided by epidemiological (or, in the case of other emergencies, other relevant) criteria — the system will work to mitigate the likely overburdening of resources, and ensure that overall resource allocation is guided by medically relevant criteria, rather than merely the happenstance of geography, budget constraints, storage space, or the like.     

Finally, no doubt a host of existing regulations make such a program difficult or impossible. Obviously, these should be rescinded. One set of policy concerns is worth noting: privacy. There is an inherent conflict between strong data privacy, in which decisions about the sharing of information belong to each individual, and the data needs of combating an epidemic, in which each person’s privately optimal level of data sharing may result in a socially sub-optimal level of shared data. To the extent that HIPAA or other privacy regulations would stand in the way of a program like this, it seems singularly important to relax them. Much of the relevant data cannot be efficiently collected on an opt-in basis (as is easily done, by contrast, for the OPTN). Certainly appropriate safeguards should be put in place (particularly with respect to the ability of government agencies/law enforcement to access the data). But an individual’s idiosyncratic desire to constrain the sharing of personal data in this context seems manifestly less important than the benefits of, at the very least, a default rule that the relevant data be shared for these purposes.

Appropriate standards for emergency preparedness policy generally

Importantly, such a plan would have broader applicability beyond ventilators and the current crisis. And this is a key aspect of addressing the problem: avoiding a myopic focus on the current emergency at the expense of a more clear-eyed emergency preparedness plan.

It’s important to be thinking not only about the current crisis but also about the next emergency. But it’s equally important not to let political point-scoring and a bias in favor of focusing on the seen over the unseen coopt any such efforts. A proper assessment entails the following considerations (surely among others) (and hat tip to Ron Cass for bringing to my attention most of the following insights):

  1. Arguably we are overweighting health and safety concerns with respect to COVID-19 compared to our assessments in other areas (such as ordinary flu (on which see this informative thread by Anup Malani), highway safety, heart & coronary artery diseases, etc.). That’s inevitable when one particular concern is currently so omnipresent and so disruptive. But it is important that we not let our preparations for future problems focus myopically on this cause, because the next crisis may be something entirely different. 
  2. Nor is it reasonable to expect that we would ever have been (or be in the future) fully prepared for a global pandemic. It may not be an “unknown unknown,” but it is impossible to prepare for all possible contingencies, and simply not sensible to prepare fully for such rare and difficult-to-predict events.
  3. That said, we also shouldn’t be surprised that we’re seeing more frequent global pandemics (a function of broader globalization), and there’s little reason to think that we won’t continue to do so. It makes sense to be optimally prepared for such eventualities, and if this one has shown us anything, it’s that our ability to allocate medical resources that are made suddenly scarce by a widespread emergency is insufficient. 
  4. But rather than overreact to such crises — which is difficult, given that overreaction typically aligns with the private incentives of key decision makers, the media, and many in the “chattering class” — we should take a broader, more public-focused view of our response. Moreover, political and bureaucratic incentives not only produce overreactions to visible crises, they also undermine the appropriate preparation for such crises in the future.
  5. Thus, we should create programs that identify and mobilize generically useful emergency equipment not likely to be made obsolete within a short period and likely to be needed whatever the source of the next emergency. In other words, we should continue to focus the bulk of our preparedness on things like quickly deployable ICU facilities, ventilators, and clean blood supplies — not, as we may be wrongly inclined to do given the salience of the current crisis, primarily on specially targeted drugs and test kits. Our predictive capacity for our future demand of more narrowly useful products is too poor to justify substantial investment.
  6. Given the relative likelihood of another pandemic, generic preparedness certainly includes the ability to inhibit overly fast spread of a disease that can clog critical health care facilities. This isn’t disease-specific (or, that is, while the specific rate and contours of infection are specific to each disease, relatively fast and widespread contagion is what causes any such disease to overtax our medical resources, so if we’re preparing for a future virus-related emergency, we’re necessarily preparing for a disease that spreads quickly and widely).

Because the next emergency isn’t necessarily going to be — and perhaps isn’t even likely to be — a pandemic, our preparedness should not be limited to pandemic preparedness. This means, as noted above, overcoming the political and other incentives to focus myopically on the current problem even when nominally preparing for the next one. But doing so is difficult, and requires considerable political will and leadership. It’s hard to conceive of our current federal leadership being up to the task, but it’s certainly not the case that our current problems are entirely the makings of this administration. All governments spend too much time and attention solving — and regulating — the most visible problems, whether doing so is socially optimal or not.   

Thus, in addition to (1) providing for the efficient and effective use of data to allocate emergency medical resources (e.g., as described above), and (2) ensuring that our preparedness centers primarily on generically useful emergency equipment, our overall response should also (3) recognize and correct the way current regulatory regimes also overweight visible adverse health effects and inhibit competition and adaptation by industry and those utilizing health services, and (4) make sure that the economic and health consequences of emergency and regulatory programs (such as the current quarantine) are fully justified and optimized.

A proposal like the one I outline above would, I believe, be consistent with these considerations and enable more effective medical crisis response in general.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Sam Bowman, (Director of Competition Policy, ICLE).]

No support package for workers and businesses during the coronavirus shutdown can be comprehensive. In the UK, for example, the government is offering to pay 80% of the wages of furloughed workers, but this will not apply to self-employed people or many gig economy workers, and so far it’s been hard to think of a way of giving them equivalent support. It’s likely that the bill going through Congress will have similar issues.

Whether or not solutions are found for these problems, it may be worth putting in place what you might call a ‘backstop’ policy that allows people to access money in case they cannot access it through the other policies that are being put into place. This doesn’t need to provide equivalent support to other packages, just to ensure that everyone has access to the money they need during the shutdown to pay their bills and rent, and cover other essential costs. The aim here is just to keep everyone afloat.

One mechanism for doing this might be to offer income-contingent loans to anyone currently resident in the country during the shutdown period. These are loans whose repayment is determined by the borrower’s income later on, and are how students in the UK and Australia pay for university. 

In the UK, for example, under the current student loan repayment terms, once a student has graduated, their earnings above a certain income threshold (currently £25,716/year) are taxed at 9% to repay the loan. So, if I earn £30,000/year and have a loan to repay, I pay an additional £385.56/year to repay the loan (9% of the £4,284 I’m earning above the income threshold); if I earn £40,000/year, I pay an additional £1,285.56/year. The loan incurs an annual interest rate equal to an annual measure of inflation plus 3%. Once you have paid off the loan, no more repayments are taken, and any amount still unpaid thirty years after the loan was first taken out is written off.
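
For concreteness, the repayment formula just described can be written out directly. This is a minimal sketch that ignores interest accrual and the 30-year write-off; the threshold and rate are the ones quoted above.

```python
# Minimal sketch of the UK income-contingent repayment rule described above:
# 9% of earnings above the repayment threshold, per year (interest accrual and
# the 30-year write-off are ignored here).

THRESHOLD = 25_716  # current repayment threshold, per the figures above (GBP/year)
RATE = 0.09         # repayment rate on income above the threshold

def annual_repayment(income):
    """Yearly repayment owed at a given income."""
    return RATE * max(0.0, income - THRESHOLD)

print(annual_repayment(30_000))  # ~385.56, matching the example above
print(annual_repayment(40_000))  # ~1285.56
```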

In practice, these terms mean that there is a significant subsidy to university students, most of whom never pay off the full amount. Under a less generous repayment scheme that was in place until recently, with a lower income threshold for repayment, out of every £1 borrowed by students the long-run cost to the government was 43.3p. This is regarded by many as a feature of the system rather than a bug, because of the belief that university education has positive externalities, and because this approach pools some of the risk associated with pursuing a graduate-level career (the risk of ending up with a low-paid job despite having spent a lot on your education, for example).

For loans available to the wider public, a different set of repayment criteria could apply. We could allow anyone who has filed a W-2 or 1099 tax statement in the past eighteen months (or filed a self-assessment tax return in the UK) to borrow up to something around 20% of median national annual income, to be paid back via an extra few percentage points on their federal income tax or, in the UK, National Insurance contributions over the following ten years, with the rate returning to normal after they have paid off the loan. Some other provision may have to be made for people approaching retirement.

With a low, inflation-indexed interest rate, this would allow people who need funds to access them, but make it mostly pointless for anyone who did not need to borrow. 

If, like student tuition fees, loans were written off after a certain period, low earners would probably never pay back the entirety of the ‘loan’ – as a one-off transfer (i.e., one that does not distort work or savings incentives for recipients) to low-paid people, this is probably not a bad thing. Most people, though, would pay back as and when they were able to. For self-employed people in particular, it could be a valuable source of liquidity during an unexpected period when they cannot work. Overall, it would function as a cash transfer to lower earners, and a liquidity injection for everyone else who takes advantage of the scheme.
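
Here is a minimal sketch of how such a backstop might cash out for different earners. Every parameter (the size of the draw, a three-point surcharge on income, a ten-year write-off, zero real interest) is an assumption chosen purely for illustration, not part of any actual proposal.

```python
# Illustrative sketch only: an income-contingent backstop loan repaid via an
# assumed 3-point surcharge on income for up to ten years, with any remaining
# balance written off. All parameters are assumptions, not policy.

LOAN = 6_000              # assumed draw, roughly 20% of a hypothetical median income
SURCHARGE = 0.03          # assumed extra tax rate while the loan is outstanding
YEARS_TO_WRITE_OFF = 10

def repayment_path(annual_income):
    """Return (total repaid, amount written off) for a constant annual income."""
    balance, paid = float(LOAN), 0.0
    for _ in range(YEARS_TO_WRITE_OFF):
        if balance <= 0:
            break
        payment = min(balance, SURCHARGE * annual_income)
        balance -= payment
        paid += payment
    return paid, balance

print(repayment_path(15_000))  # low earner: part of the loan is written off
print(repayment_path(45_000))  # higher earner: repays in full, surcharge then ends
```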

This would have advantages over money being given to every US or UK citizen, as some have proposed. Because most of the money given out would be repaid, the net burden on taxpayers would be lower, and the deadweight losses created by the additional tax needed to pay for it would be smaller. It would also eliminate the need for means-testing, relying on self-selection instead.

The biggest obstacle to rolling something like this out may be administrative. However, if the government committed to setting up such a scheme, banks and credit card companies might be willing to step in in the short run to issue short-term loans, in the knowledge that people would be able to repay them once the government scheme was set up. To facilitate this, the government could guarantee the loans made by banks and credit card companies now, then allow people to opt into the income-contingent loans later, so there would be no immediate need for legislation.

Speed is extremely important in helping people plug the gaps in their finances. As a complement to the government’s other plans, income-contingent loans to groups like self-employed people may be a useful way of catching people who would otherwise fall through the cracks.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Mark Jamison, (Director and Gunter Professor, Public Utility Research Center, University of Florida and Visiting Scholar with the American Enterprise Institute).]

The economic impacts of the coronavirus pandemic, and of the government responses to it, are significant and could be staggering, especially for small businesses. Goldman Sachs estimates a potential 24% drop in US GDP for the second quarter of 2020 and a 4% decline for the year. Its small business survey found that a little over half of small businesses might last for less than three months in this economic downturn. Small business employs nearly 60 million people in the US. How many will be out of work this year is anyone’s guess, but the number will be large.

What should small businesses do? First, focus on staying in business, because their customers and employees need them to be healthy when the economy begins to recover. That will certainly mean slowing down business activity, decreasing payroll to manage losses, and managing liquidity.

Second, look for opportunities in the present crisis. Consumers are slowing their spending, but they will spend for things they still need and need now. And there will be new demand for things they didn’t need much before, like more transportation of food, support for health needs, and crisis management. Which business sectors will recover first? Those whose downturns represented delayed demand, such as postponed repairs and business travel, rather than evaporated demand, such as luxury items.

Third, they can watch for and take advantage of government support programs. Many programs simply provide low-cost loans, which do not solve the small-business problem of customers not buying: Borrowing money to meet payroll for idle workers simply delays business closure and makes bankruptcy more likely. But some grants and tax breaks are under discussion (see below).

Fourth, they can renegotiate loans and contracts. One of the mistakes lenders made in the past was holding stressed borrowers’ feet to the fire, which only led to more, and more costly, loan defaults. At least some lenders have learned. So lenders and some suppliers might be willing to receive some payments rather than none.

What should government do? Unfortunately, Washington seems to think that so-called stimulus spending is the cure for any economic downturn. This isn’t true. I’ll explain why below, but let me first get to what is more productive. 

The major problem is that customers are unable to buy and businesses are unable to produce because of the responses to the coronavirus. Sometimes transactions are impossible, but there are times when buying and selling are simply made more costly by the pandemic and the government responses. So government support for the economy should address these problems directly.

For buyers, government officials should recognize that buying is hard and costly for them. So policies should include improving their abilities to buy during this time. Sales tax holidays, especially on healthcare, food, and transportation would be helpful. 

Waivers of postal fees would make e-commerce cheaper. And temporary support for fixed costs, such as mortgages, would free money for other things. Tax breaks for the gig economy would lower service costs and provide new employment opportunities. And tax credits for durables like home improvements would lower costs of social distancing.

But the better opportunities for government impact are on the business side because small business affects both the supply of services and the incomes of consumers.

For small business policy, my American Enterprise Institute colleagues Glenn Hubbard and Michael Strain have done the most thoughtful work that I have seen. They note that the problems for small businesses are that they do not have enough business activity to meet payroll and other bills. This means that “(t)he goal should be to replace a large portion of the revenue (not just the payroll expenses) those businesses would have generated in the absence of being shut down due to the coronavirus.” 

They suggest policies to replace 80 percent of the small business revenue loss. How? By providing grants in the form of government-backed commercial loans that are forgiven if the business continues and maintains payroll, subject to workers being allowed to quit if they find better opportunities. 
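
A minimal sketch of those mechanics, with invented figures (the 80 percent replacement rate is the one in the proposal; everything else is assumed):

```python
# Illustrative sketch of the forgivable-loan idea described above. The 80%
# replacement rate comes from the proposal; the revenue figures are invented.

REPLACEMENT_RATE = 0.80

def backstop_loan(pre_crisis_revenue, crisis_revenue):
    """Loan sized to replace 80% of the revenue shortfall."""
    return REPLACEMENT_RATE * max(0.0, pre_crisis_revenue - crisis_revenue)

def amount_owed(loan, stayed_open, maintained_payroll):
    """Forgiven entirely if the business continues and keeps workers on payroll."""
    return 0.0 if (stayed_open and maintained_payroll) else loan

loan = backstop_loan(pre_crisis_revenue=40_000, crisis_revenue=5_000)  # monthly figures
print(loan)                           # 28000.0
print(amount_owed(loan, True, True))  # 0.0, i.e., effectively a grant
```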

What else might work? Tax breaks that lower business costs. These can be breaks in payroll taxes, marginal income tax rates, equipment purchases, permitting, etc., including tax holidays. Allowing businesses to carry back current losses would trigger tax refunds that improve their finances.

One of the least useful ideas for small businesses is interest-free loans. These might be great for large businesses that are largely managing their financial positions. But such loans fail to address the basic small business problem of keeping the doors open when customers aren’t buying.

Finally, why doesn’t traditional stimulus work, even in other times of economic downturn? Traditional spending-based stimulus assumes that the economic problem is that people want to build things, but not buy them. That’s not a very good assumption, especially today, when the problems are the higher cost of buying, or perhaps the impossibility of buying with social distancing, and the higher costs of doing business. Keeping businesses in business is the key to supporting the economy.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Brent Skorup, (Senior Research Fellow, Mercatus Center, George Mason University).]

One of the most visible economic effects of the COVID-19 spread is the decrease in airline customers. Alec Stapp alerted me to the recent outrage over “ghost flights,” where airlines fly nearly empty planes to maintain their “slots.” 

The airline industry is unfortunately in economic freefall as governments prohibit, and travelers pull back on, air travel. When the health and industry crises pass, lawmakers will have an opportunity to evaluate the mistakes of the past when it comes to airport congestion and airspace design.

This issue of ghost flights pops up occasionally and offers a lesson in the problems with government rationing of public resources. In this case, the public resource is airport slots: designated windows of, say, 15 or 30 minutes during which a plane may take off or land at an airport. (Last week US and EU regulators temporarily waived the use-it-or-lose-it rule for slots to mitigate the embarrassing cost and environmental damage caused by forcing airlines to fly empty planes.)

The slots at major hubs at peak times of day are extremely scarce; there are only so many hours in a day. Today, slot assignments are administratively rationed in a way that favors large, incumbent airlines. As the Wall Street Journal summarized last year,

For decades, airlines have largely divided runway access between themselves at twice-yearly meetings run by the IATA (an airline trade group).

Airport slots are property. They’re valuable. They can be defined, partitioned, leased, put up as collateral, and, in the US, they can be sold and transferred within or between airports.

You just can’t call slots property. Many lawmakers, regulators, and airline representatives refuse to acknowledge the obvious. Stating that slots are valuable public property would make clear the anticompetitive waste that the 40-year slot assignment experiment generates. 

Like many government programs, slot rationing began in the US decades ago as a temporary response to congestion at New York airports. Slots are currently used to ration access at LGA, JFK, and DCA. And while it doesn’t use formal slot rationing there, the FAA also limits access at four other busy airports: ORD, Newark, LAX, and SFO.

Fortunately, cracks are starting to form. In 2008, at the tail end of the Bush administration, the FAA proposed to auction some slots in New York City’s three airports. The plan was delayed by litigation from incumbent airlines and an adverse finding from the GAO. With a change in administration, the Obama FAA rescinded the plan in 2009.

Before the Obama FAA rescission, the mask slipped a bit in the GAO’s criticism of the slot auction plan:

FAA’s argument that slots are property proves too much—it suggests that the agency has been improperly giving away potentially millions of dollars of federal property, for no compensation, since it created the slot system in 1968.

Gulp.

Though the GAO helped scuttle the plan, the damage has been done. The idea has now entered public policy discourse: giving away valuable public property is precisely what’s going on. 

The implicit was made explicit in 2011 when, despite spiking the Bush FAA plan, the Obama FAA auctioned two dozen high-value slots. (The reversal and lack of controversy is puzzling to me.) Delta and US Airways wanted to swap some 160 slots at New York and DC airports. As a condition of the mega-swap, the Obama FAA required they divest 24 slots at those popular airports, which the agency auctioned to new entrants. Seven low-fare airlines bid in the auction, and JetBlue and WestJet won the divested slots, paying about $90 million combined.
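
To illustrate the mechanism in the abstract, here is a minimal sketch of assigning divested slots to the highest bidders. The airlines, bids, and single-round format are entirely invented; the FAA’s actual auction rules were more involved.

```python
# Purely illustrative sketch: assign each divested slot to the highest bidder.
# Airlines, bids, and the single-round format are hypothetical.

def auction_slots(slots, bids_per_slot):
    """Return the winning (bidder, bid) per slot and total auction revenue."""
    winners, revenue = {}, 0.0
    for slot in slots:
        bidder, bid = max(bids_per_slot[slot].items(), key=lambda kv: kv[1])
        winners[slot] = (bidder, bid)
        revenue += bid
    return winners, revenue

slots = ["LGA 07:30", "DCA 08:00"]
bids = {
    "LGA 07:30": {"LowFare A": 4.1e6, "LowFare B": 3.8e6, "Incumbent": 2.0e6},
    "DCA 08:00": {"LowFare A": 2.9e6, "LowFare B": 3.5e6},
}
winners, revenue = auction_slots(slots, bids)
print(winners)
print(f"Total revenue: ${revenue:,.0f}")
```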

The older fictions are rapidly eroding. There is an active secondary market in slots in some nations, and when prices are released it becomes clear that the legacy rationing amounts to public property set-asides for insiders. In 2016 it leaked, for instance, that an airline paid £58 million for a pair of take-off and landing slots at Heathrow. Other slot sales are in the tens of millions of dollars.

The 2011 FAA auctions and the loosening of rules globally around slot sales signal that the competitive benefits of slot markets are too obvious to ignore. Competition from new entry drives down airfare and increases the number of flights.

For instance, a few months ago researchers used a booking app to scour 50 trillion flight itineraries to see new entrants’ effect on airline ticket prices between 2017 and 2019. As the Wall Street Journal reported, the entry of a low-fare carrier reduced ticket prices by 17% on average. The bigger effect was on output–new entry led to a 30% YoY increase in flights.

It’s becoming harder to justify the legacy view, which allows incumbent airlines to dominate slot allocations via international conferences and national regulations that protect "grandfathered" slot usage. In a separate article last year, the Wall Street Journal reported that airlines are reluctantly ceding more power to airports in the assignment of slots. This is another signal in the long-running tug-of-war between airports and airlines. Airports generally want to open slots for new competitors–incumbent airlines do not.

The reason for the change of heart? The Journal says,

Airlines and airports reached the deal in part because of concerns governments should start to sell slots.

Gulp. Ghost flights are a government failure but a rational response to governments withholding the benefits of property from airlines. The slot rationing system encourages flying uneconomical flights, smaller planes, and excess carbon emissions. The COVID-19 crisis allowed the public a glimpse at the dysfunctional system. It won’t be easy, but aviation regulators worldwide need to assess slots policy and airspace access before the administrative rationing system spreads to the emerging urban air mobility and drone delivery markets.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Luke Froeb, (William C. Oehmig Chair in Free Enterprise and Entrepreneurship, Owen Graduate School of Management, Vanderbilt University; former Chief Economist at the US DOJ Antitrust Division and US FTC).]

Summary: Trying to quarantine everyone until a vaccine is available doesn’t seem feasible. In addition, restrictions mainly delay when the epidemic explodes, e.g., see previous post on Flattening the Curve. In this paper, we propose subsidies to both individuals and businesses, to better align private incentives with social goals, while leaving it up to individuals and businesses to decide for themselves which risks to take.

For example, testing would give individuals the information necessary to make the best decision about whether to shelter in place or, if they have recovered and are now immune, to come out.  But, the negative consequences of a positive test, e.g., quarantine, can deter people from getting tested. Rewards for those who present for a test and submit to isolation when they have active disease could offset such externalities.

Another problem is that many people aren’t free on their own to implement protective measures related to work. One could imagine incentives for employers to allow working from home, to partially shut down production, or to provide extra protection for workers. Businesses that offer worker health care might be incentivized by sharing in the extra virus-related health care costs incurred by workers, in exchange for a health care subsidy.

Essay: In the midst of an epidemic it is evident that social policy must adjust in furtherance of the public good. Institutions of all sorts, not least government, will have to take extraordinary actions. People should expect their relationships with these institutions to change, at least for some time. These adjustments will need to be informed by applicable epidemiological data and models, subject to the usual uncertainties. But the problems to be faced are not only epidemiological but economic. There will be tradeoffs to be made between safer, restrictive rules and riskier, unconstrained behaviors. Costs to be faced are both social and individual. As such, we should not expect a uniform public policy to make suitable choices for all individuals, nor assume that individuals making good decisions for themselves will combine for a good social outcome. Imagine instead an alternative, where social costs are evaluated and appropriate individual incentives are devised, allowing individuals to make informed decisions with respect to their own circumstances and the social externalities reflected in those incentives.

We are currently in the US at the beginning of the coronavirus epidemic.  This is not the flu. It is maybe ten times as lethal as the flu, perhaps a little more lethal proportionally in the most susceptible populations. It is new, so there is little or no natural immunity, and no vaccine available for maybe 18 months. Like the flu, there is no really effective treatment yet for those that become sickest, particularly because the virus is most deadly through the complications it causes with existing conditions, so treatment options should not perhaps be expected to help with epidemic spread or to reduce lethality. It is spread relatively easily from person to person, though not as easily as the measles, perhaps significantly before the infected person shows symptoms. And it may be that people can get the virus, become contagious and spread the disease, while never showing symptoms themselves. We now have a test for active coronavirus, though it is still somewhat hard to get in the US, and we can expect at some point in the near future to have an antibody test that will show when people either have or have had and recovered from the virus.

There are some obvious social and individual costs to people catching this virus. First there are the deaths from the disease. Then there are the costs of treating those ill. Finally, there are costs from the lost productivity of those fallen ill. If there is a sudden and extreme increase in the numbers of sick people, all of these costs can be expected to rise, perhaps significantly. When hospitals have patients in excess of existing capacity, expanding capacity will be difficult and expensive, and death rates can be expected to rise.

An ideal public health strategy in the face of an epidemic is to keep people from falling sick. At the beginning of the epidemic, the few people with the disease need to be found and quarantined, and those with whom they have had contact need to be traced and isolated so that any carrying the disease can be stopped from passing it on. If there is no natural reservoir of disease that reintroduces the disease, it may be possible to eradicate the disease. When there were few cases, this might have been practical, but that effort has clearly failed, and there are far too many carriers of the disease now to track. 

Now the emphasis must be on measures to reduce transmission of the disease. This entails modifying behaviors that facilitate the disease passing from person to person. If the rate of infection can be reduced enough, to the point where the number of people each infected person can be expected to infect is less than one on average, then the disease will naturally die out. Once most people have had the disease, or have been vaccinated, most of the people an infected person would have infected are immune so the rate of new infections will naturally fall to less than one and the disease will die out. Because so many people have immunity to many varieties of the flu, its spread can be controlled in particular through vaccination, the only difficulty being that new strains are appearing all of the time. The difficulty with coronavirus is that simple measures for reducing the spread of the disease do not seem to be effective enough and extreme measures will be much more expensive. Moreover, because the coronavirus is a pandemic, even if one region succeeds in reducing transmission and has the disease fade, reintroduction from other regions can be expected to relight the fire of epidemic. Measures for reducing transmission will need to be maintained for some time, likely until a vaccine is available or natural herd immunity is established through the majority of the population having had the disease.
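
The “less than one on average” logic can be made concrete with the standard effective-reproduction-number arithmetic. This is a minimal sketch with assumed numbers, not a model of this particular virus.

```python
# Minimal sketch of the reproduction-number logic described above (numbers are
# assumed for illustration): the epidemic shrinks once each case infects fewer
# than one other person on average, whether through reduced contact or immunity.

R0 = 2.5  # assumed basic reproduction number

def effective_r(r0, contact_reduction, share_immune):
    """Expected secondary infections per case, given distancing and immunity."""
    return r0 * (1 - contact_reduction) * (1 - share_immune)

herd_immunity_threshold = 1 - 1 / R0  # share immune needed with no distancing at all
print(round(herd_immunity_threshold, 2))                          # 0.6
print(effective_r(R0, contact_reduction=0.0, share_immune=0.0))   # 2.5: epidemic grows
print(effective_r(R0, contact_reduction=0.5, share_immune=0.2))   # 1.0: right at the threshold
print(effective_r(R0, contact_reduction=0.6, share_immune=0.0))   # 1.0: right at the threshold
```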

The flu strikes every year and we seem to tolerate it without extreme measures of social distancing. Perhaps there’s nothing that needs to be done now, nothing worth doing now, to slow the coronavirus epidemic. But what would the cost of such an attitude be? The virus would spread like wildfire, infecting in a matter of months perhaps the majority of the population. Even with an estimate of 70 to 150 million Americans infected, a 1% death rate means 0.7 to 1.5 million would die. But that many cases all at once would overwhelm the medical system, and the intensive care required to keep the death rate even this low. A surge in cases might mean an increase in the death rate.

At the other extreme, we seem to be heading into a period where everyone is urged to shelter-in-place, or required to be locked down, so as to reduce social contacts to near zero and thereby interrupt the spread of the virus. This may be effective, perhaps even necessary to prevent an immediate surge of demand on hospitals. But it is also expensive in the disruptions it entails. The number of active infections can be drastically reduced over a time scale corresponding to an individual’s course of the disease. Removing the restrictions would mean then that the epidemic resumes from the new lower level with somewhat more of the population already immune. It seems unlikely the disease can be eradicated by such measures because of the danger of reintroduction from other regions where the virus is active. The strategy of holding everyone in this isolation until a vaccine becomes available isn’t likely to be palatable. Releasing restrictions slowly so as to keep the level of the disease at an acceptable level would likely mean that most of the population would get the disease before the vaccine became available. Even if the most at risk population remained isolated, the estimated death rate over the majority of the population implies a nontrivial number of deaths. How do we decide how many and who to risk in order to get the economy functioning?

Consider then a system of incentives to individuals to help communicate the social externalities and guide their decisions. If there is a high prevalence of active disease in the general population, then hospitals will see excessive demand and it will be unsafe for high risk individuals to expose themselves to even minimal social interactions. A low prevalence of active disease can be more easily tolerated by hospitals, with a lower resulting death rate, and higher risk individuals may be more able to interact and provide for themselves. To promote a lower level of disease, individuals should be incentivized to delay getting sick, practicing social distancing and reducing contacts in a trade-off with ordinary necessary activity and respecting their personal risk category and risk tolerance. This lower level of disease is the “flattening of the curve”, but it also imagines the most at risk segment of the population might choose to isolate for a longer term, hoping to hold out for a vaccine.

If later disease or no disease is preferable, how do we incentivize it? Can we at the same time incentivize more usual infection control measures? Eventually everyone will either need to take an antibody test, to determine that they have had the disease and developed immunity and so are safe to resume all normal activities, or else need the vaccination. People may also be tested for active disease. We can’t penalize people for showing up with active disease, as this would mean they would skip the test and likely continue infecting other people. We should reward those who present for a test and submit to isolation when they have active disease. We can reward also those who submit to the antibody test and test positive (for the first time) who can then resume normal activities. On the other hand, we want people to delay when they get sick through prudent measures. Thus it would be a good idea to increase over time the reward for first showing up with the disease. To avoid incentivizing delay in testing, the reward for a positive test should increase as a function of the last antibody test that was negative, i.e., the reward is more if you can prove you had avoided the disease as of your last antibody test. The size of the rewards should be significant enough to cause a change of behavior but commensurate with the social cost savings induced. If we are planning on giving Americans multiple $1000 checks to get the economy going anyway, then such monies could be spent on incentives alternatively. This imagines antibody testing will be available, relatively easy and inexpensive in maybe three months, and antibody tests might be repeated maybe every three months. And of course this assumes the trajectory of the epidemic can be controlled well enough in the short term and predicted well enough in the long term to make such a scheme possible.
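
As a purely illustrative sketch of the reward schedule just described, consider something like the following; every figure is an assumption rather than a calibrated proposal.

```python
# Illustrative sketch only: a reward for presenting with active disease and
# submitting to isolation that (1) grows the longer a person has delayed getting
# sick and (2) grows further if a recent negative antibody test proves the delay.
# All dollar figures and rates are assumptions.

BASE_REWARD = 500          # assumed reward at the start of the program
WEEKLY_GROWTH = 25         # assumed increase per week of delay before falling ill
NEGATIVE_PROOF_BONUS = 10  # assumed extra per week of documented avoidance

def positive_test_reward(weeks_since_start, weeks_since_last_negative_antibody=None):
    """Reward paid on a positive test for active disease, per the scheme above."""
    reward = BASE_REWARD + WEEKLY_GROWTH * weeks_since_start
    if weeks_since_last_negative_antibody is not None:
        proven_weeks = weeks_since_start - weeks_since_last_negative_antibody
        reward += NEGATIVE_PROOF_BONUS * max(0, proven_weeks)
    return reward

print(positive_test_reward(weeks_since_start=4))    # 600: got sick early
print(positive_test_reward(weeks_since_start=20))   # 1000: delayed infection
print(positive_test_reward(weeks_since_start=20,
                           weeks_since_last_negative_antibody=2))  # 1180: delay plus proof
```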

HT:  Colleague Steven Tschantz

This post originally appeared on the Managerial Econ Blog

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Ben Sperry, (Associate Director, Legal Research, International Center for Law & Economics).]

The visceral reaction to the New York Times’ recent story on Matt Colvin, the man who had 17,700 bottles of hand sanitizer with nowhere to sell them, shows there is a fundamental misunderstanding of the importance of prices and the informational function they serve in the economy. Calls to enforce laws against “price gouging” may actually prove more harmful to consumers and society than allowing prices to rise (or fall, of course) in response to market conditions. 

Nobel Prize-winning economist Friedrich Hayek explained how price signals serve as information that allows for coordination in a market society:

We must look at the price system as such a mechanism for communicating information if we want to understand its real function… The most significant fact about this system is the economy of knowledge with which it operates, or how little the individual participants need to know in order to be able to take the right action. In abbreviated form, by a kind of symbol, only the most essential information is passed on and passed on only to those concerned. It is more than a metaphor to describe the price system as a kind of machinery for registering change, or a system of telecommunications which enables individual producers to watch merely the movement of a few pointers, as an engineer might watch the hands of a few dials, in order to adjust their activities to changes of which they may never know more than is reflected in the price movement.

Economic actors don’t need a PhD in economics or even to pay attention to the news about the coronavirus to change their behavior. Higher prices for goods or services alone give important information to individuals — whether consumers, producers, distributors, or entrepreneurs — to conserve scarce resources, produce more, and look for (or invest in creating!) alternatives.

Prices are fundamental to rationing scarce resources, especially during an emergency. Allowing prices to rapidly rise has three salutary effects (as explained by Professor Michael Munger in his terrific twitter thread):

  1. Consumers ration how much they really need;
  2. Producers respond to the rising prices by ramping up supply and distributors make more available; and
  3. Entrepreneurs find new substitutes in order to innovate around bottlenecks in the supply chain. 
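
A stylized supply-and-demand sketch, with invented linear curves, makes the first two effects (and the cost of suppressing them) concrete: when the price is capped below the market-clearing level, quantity demanded exceeds quantity supplied and a shortage appears.

```python
# Stylized linear supply-and-demand sketch (all parameters invented) showing how
# a binding price cap converts a price increase into a shortage.

def quantity_demanded(price):
    return max(0.0, 1_000 - 40 * price)  # demand falls as price rises

def quantity_supplied(price):
    return max(0.0, 10 * price)          # supply rises as price rises

def market_outcome(price_cap=None):
    """Return (price, shortage) with or without a price cap."""
    # Find the approximate market-clearing price by scanning a fine price grid.
    clearing = min((p / 100 for p in range(0, 10_001)),
                   key=lambda p: abs(quantity_demanded(p) - quantity_supplied(p)))
    price = clearing if price_cap is None else min(clearing, price_cap)
    shortage = max(0.0, quantity_demanded(price) - quantity_supplied(price))
    return price, shortage

print(market_outcome())              # uncapped: price rises to clear, no shortage
print(market_outcome(price_cap=10))  # capped below the clearing price: a shortage appears
```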

Despite the distaste with which the public often treats “price gouging,” officials should take care to ensure that they don’t prevent these three necessary responses from occurring. 

Rationing by consumers

During a crisis, if prices for goods that are in high demand but short supply are forced to stay at pre-crisis levels, the informational signal of a shortage isn’t given — at least by the market directly. This encourages consumers to buy more than is rationally justified under the circumstances. This stockpiling leads to shortages. 

Companies respond by rationing in various ways, like instituting shorter hours or placing limits on how much of certain high-demand goods can be bought by any one consumer. Lines (and unavailability), instead of price, become the primary cost borne by consumers trying to obtain the scarce but underpriced goods. 

If, instead, prices rise in light of the short supply and high demand, price-elastic consumers will buy less, freeing up supply for others. And, critically, price-inelastic consumers (i.e. those who most need the good) will be provided a better shot at purchase.

According to the New York Times story on Mr. Colvin, he focused on buying out the hand sanitizer in rural areas of Tennessee and Kentucky, since the major metro areas were already cleaned out. His goal was to then sell these hand sanitizers (and other high-demand goods) online at market prices. He was essentially acting as a speculator and bringing information to the market (much like an insider trader). If successful, he would be coordinating supply and demand between geographical areas by successfully arbitraging. This often occurs when emergencies are localized, like post-Katrina New Orleans or post-Irma Florida. In those cases, higher prices induced suppliers to shift goods and services from around the country to the affected areas. Similarly, here Mr. Colvin was arguably providing a beneficial service, by shifting the supply of high-demand goods from low-demand rural areas to consumers facing localized shortages. 

For those who object to Mr. Colvin’s bulk purchasing-for-resale scheme, the answer is similar to those who object to ticket resellers: the retailer should raise the price. If the Walmarts, Targets, and Dollar Trees raised prices or rationed supply like the supermarket in Denmark, Mr. Colvin would not have been able to afford nearly as much hand sanitizer. (Of course, it’s also possible — had those outlets raised prices — that Mr. Colvin would not have been able to profitably re-route the excess local supply to those in other parts of the country most in need.)

The role of “price gouging” laws and social norms

A common retort, of course, is that Colvin was able to profit from the pandemic precisely because he was able to purchase a large amount of stock at normal retail prices, even after the pandemic began. Thus, he was not a producer who happened to have a restricted amount of supply in the face of new demand, but a mere reseller who exacerbated the supply shortage problems.

But such an observation truncates the analysis and misses the crucial role that social norms against “price gouging” and state “price gouging” laws play in facilitating shortages during a crisis.

Under these laws, typically retailers may raise prices by at most 10% during a declared state of emergency. But even without such laws, brick-and-mortar businesses are tied to a location in which they are repeat players, and they may not want to take a reputational hit by raising prices during an emergency and violating the “price gouging” norm. By contrast, individual sellers, especially pseudonymous third-party sellers using online platforms, do not rely on repeat interactions to the same degree, and may be harder to track down for prosecution. 

Thus, the social norms and laws exacerbate the conditions that create the need for emergency pricing, and lead to outsized arbitrage opportunities for those willing to violate norms and the law. But, critically, this violation is only a symptom of the larger problem that social norms and laws stand in the way, in the first instance, of retailers using emergency pricing to ration scarce supplies.

Normally, third-party sales sites have much more dynamic pricing than brick-and-mortar outlets, which just tend to run out of underpriced goods for a period of time rather than raise prices. This explains why Mr. Colvin was able to sell hand sanitizer for prices much higher than retail on Amazon before the site suspended his ability to do so. On the other hand, in response to public criticism, Amazon, Walmart, eBay, and other platforms continue to crack down on third-party "price-gouging" on their sites.

But even PR-centric anti-gouging campaigns are not ultimately immune to the laws of supply and demand. Even Amazon.com, as a first-party seller, ends up needing to raise prices, ostensibly as the pricing feedback mechanisms respond to cost increases up and down the supply chain.

But without a willingness to allow retailers and producers to use the informational signal of higher prices, there will continue to be more extreme shortages as consumers rush to stockpile underpriced resources.

The desire to help the poor who cannot afford higher priced essentials is what drives the policy responses, but in reality no one benefits from shortages. Those who stockpile the in-demand goods are unlikely to be poor because doing so entails a significant upfront cost. And if they are poor, then the potential for resale at a higher price would be a benefit.

Increased production and distribution

During a crisis, it is imperative that spiking demand is met by increased production. Prices are feedback mechanisms that provide realistic estimates of demand to producers. Even if good-hearted producers forswearing the profit motive want to increase production as an act of charity, they still need to understand consumer demand in order to produce the correct amount. 

Of course, prices are not the only source of information. Producers reading the news that there is a shortage undoubtedly can ramp up their production. But even still, in order to optimize production (i.e., not just blindly increase output and hope they get it right), they need a feedback mechanism. Prices are the most efficient mechanism available for quickly conveying the amount of social need (demand) for a given product, helping to guarantee that producers do not undersupply the product (leaving people who need the good without it) or oversupply it (consuming more resources than necessary in a time of crisis). Prices, when allowed to adjust to actual demand, thus allow society to avoid exacerbating shortages and misallocating resources.
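
As a minimal sketch of that feedback loop (all numbers invented): a producer with rising marginal cost expands output only while the market price covers the cost of the next unit, so a higher crisis price translates directly into more production without any central direction.

```python
# Illustrative sketch only: price as a production signal. A producer with rising
# marginal cost expands output while the price exceeds the cost of the next unit.
# The cost curve and prices are invented.

def marginal_cost(q):
    return 1.0 + 0.02 * q  # assumed rising marginal cost of the q-th unit

def profit_maximizing_output(price, max_q=10_000):
    q = 0
    while q < max_q and marginal_cost(q + 1) <= price:
        q += 1
    return q

print(profit_maximizing_output(price=2.0))  # normal-times price: 50 units
print(profit_maximizing_output(price=5.0))  # crisis price signal: 200 units
```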

The opportunity to earn more profit incentivizes distributors all along the supply chain. Amazon is hiring 100,000 workers to help ship all the products that are being ordered right now. Grocers and retailers are doing their best to line the shelves with more in-demand food and supplies.

Distributors rely on more than just price signals alone, obviously, such as information about how quickly goods are selling out. But even as retail prices stay low for consumers for many goods, distributors often are paying more to producers in order to keep the shelves full, as in the case of eggs. These are the relevant price signals for producers to increase production to meet demand.

For instance, hand sanitizer companies like GOJO and EO Products are ramping up production in response to known demand (so much that the price of isopropyl alcohol is jumping sharply). Farmers are trying to produce as much as is necessary to meet the increased orders (and prices) they are receiving. Even previously low-demand goods like beans are facing a boom time. These instances are likely caused by a mix of anticipatory response based on general news, as well as the slightly laggier price signals flowing through the supply chain. But, even with an “early warning” from the media, the manufacturers still need to ultimately shape their behavior with more precise information. This comes in the form of orders from retailers at increased frequencies and prices, which are both rising because of insufficient supply. In search of the most important price signal, profits, manufacturers and farmers are increasing production.

These responses to higher prices have the salutary effect of making available more of the products consumers need the most during a crisis. 

Entrepreneurs innovate around bottlenecks 

But the most interesting thing that occurs when prices rise is that entrepreneurs create new substitutes for in-demand products. For instance, distillers have started creating their own hand sanitizers.

Unfortunately, however, government regulations on sales of distilled products and concerns about licensing have led distillers to give away those products rather than charge for them. Thus, beneficial as this may be, without the ability to efficiently price such products, not nearly as much will be produced as would otherwise be. The non-emergency price of zero effectively guarantees continued shortages because the demand for these free alternatives will far outstrip supply.

Another example is that car companies in the US are now producing ventilators. The FDA waived regulations on the production of new ventilators after General Motors, Ford, and Tesla announced they would be willing to use idle production capacity to make them.

As consumers demand more toilet paper, bottled water, and staple foods than can be produced quickly, entrepreneurs respond by refocusing current capabilities on these goods. Examples abound.

Without price signals, entrepreneurs would have far less incentive to shift production and distribution to the highest valued use. 

Conclusion

While stories like that of Mr. Colvin buying all of the hand sanitizer in Tennessee understandably bother people, government efforts to prevent prices from adjusting only impede the information sharing processes inherent in markets. 

If the concern is to help the poor, it would be better to pursue less distortionary public policy than arbitrarily capping prices. The US government, for instance, is currently considering a progressively tiered one-time payment to lower income individuals. 

Moves to create new and enforce existing “price-gouging” laws are likely to become more prevalent the longer shortages persist. Platforms will likely continue to receive pressure to remove “price-gougers,” as well. These policies should be resisted. Not only will these moves not prevent shortages, they will exacerbate them and push the sale of high-demand goods into grey markets where prices will likely be even higher. 

Prices are an important source of information not only for consumers, but also for producers, distributors, and entrepreneurs. Short-circuiting this signal will only be to the detriment of society.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Corbin Barthold, (Senior Litigation Counsel, Washington Legal Foundation).]

The pandemic is serious. COVID-19 will overwhelm our hospitals. It might break our entire healthcare system. To keep the number of deaths in the low hundreds of thousands, a study from Imperial College London finds, we will have to shutter much of our economy for months. Small wonder the markets have lost a third of their value in a relentless three-week plunge. Grievous and cruel will be the struggle to come.

“All men of sense will agree,” Hamilton wrote in Federalist No. 70, “in the necessity of an energetic Executive.” In an emergency, certainly, that is largely true. In the midst of this crisis even a staunch libertarian can applaud the government’s efforts to maintain liquidity, and can understand its urge to start dispersing helicopter money. By at least acting like it knows what it’s doing, the state can lessen many citizens’ sense of panic. Some of the emergency measures might even work.

Of course, many of them won’t. Even a trillion-dollar stimulus package might be too small, and too slowly dispersed, to do much good. What’s worse, that pernicious line, “Don’t let a crisis go to waste,” is in the air. Much as price gougers are trying to arbitrage Purell, political gougers, such as Senator Elizabeth Warren, are trying to cram woke diktats into disaster-relief bills. Even now, especially now, it is well to remember that government is not very good at what it does.

But dreams of dirigisme die hard, especially at the New York Times. “During the Great Depression,” Farhad Manjoo writes, “Franklin D. Roosevelt assembled a mighty apparatus to rebuild a broken economy.” Government was great at what it does, in Manjoo’s view, until neoliberalism arrived in the 1980s and ruined everything. “The incompetence we see now is by design. Over the last 40 years, America has been deliberately stripped of governmental expertise.” Manjoo implores us to restore the expansive state of yesteryear—“the sort of government that promised unprecedented achievement, and delivered.”

This is nonsense. Our government is not incompetent because Grover Norquist tried (and mostly failed) to strangle it. Our government is incompetent because, generally speaking, government is incompetent. The keystone of the New Deal, the National Industrial Recovery Act of 1933, was an incoherent mess. Its stated goals were at once to “reduce and relieve unemployment,” “improve standards of labor,” “avoid undue restriction of production,” “induce and maintain united action of labor and management,” “organiz[e] . . . co-operative action among trade groups,” and “otherwise rehabilitate industry.” The law empowered trade groups to create their own “codes of unfair competition,” a privilege they quite predictably used to form anticompetitive cartels.

At no point in American history has the state, with all its “governmental expertise,” been adept at spending money, stimulus or otherwise. A law supplying funds for the Transcontinental Railroad offered to pay builders more for track laid in the mountains, but failed to specify where those mountains begin. Leland Stanford commissioned a study finding that, lo and behold, the Sierra Nevada begins deep in the Sacramento Valley. When “the federal Interior Department initially challenged [his] innovative geology,” reports the historian H.W. Brands, Stanford sent an agent directly to President Lincoln, a politician who “didn’t know much geology” but “preferred to keep his allies happy.” “My pertinacity and Abraham’s faith moved mountains,” the triumphant lobbyist quipped after the meeting.

The supposed golden age of expert government, the time between the rise of FDR and the fall of LBJ, was no better. At the height of the Apollo program, it occurred to a physics professor at Princeton that if there were a small glass reflector on the Moon, scientists could use lasers to calculate the distance between it and Earth with great accuracy. The professor built the reflector for $5,000 and approached the government. NASA loved the idea, but insisted on building the reflector itself. This it proceeded to do, through its standard contracting process, for $3 million.

When the pandemic at last subsides, the government will still be incapable of setting prices, predicting industry trends, or adjusting to changed circumstances. What F.A. Hayek called the knowledge problem—the fact that useful information is dispersed throughout society—will be as entrenched and insurmountable as ever. Innovation will still have to come, if it is to come at all, overwhelmingly from extensive, vigorous, undirected trial and error in the private sector.

When New York Times columnists are not pining for the great government of the past, they are surmising that widespread trauma will bring about the great government of the future. “The outbreak,” Jamelle Bouie proposes in an article entitled “The Era of Small Government is Over,” has “made our mutual interdependence clear. This, in turn, has made it a powerful, real-life argument for the broadest forms of social insurance.” The pandemic is “an opportunity,” Bouie declares, to “embrace direct state action as a powerful tool.”

It’s a bit rich for someone to write about the coming sense of “mutual interdependence” in the pages of a publication so devoted to sowing grievance and discord. The New York Times is a totem of our divisions. When one of its progressive columnists uses the word “unity,” what he means is “submission to my goals.”

In any event, disunity in America is not a new, or even necessarily a bad, thing. We are a fractious, almost ungovernable people. The colonists rebelled against the British government because they didn’t want to pay it back for defending them from the French during the Seven Years’ War. When Hamilton, champion of the “energetic Executive,” pushed through a duty on liquor, the frontier settlers of western Pennsylvania tarred and feathered the tax collectors. In the Astor Place Riot of 1849, dozens of New Yorkers died in a brawl over which of two men was the better Shakespearean actor. Americans are not housetrained.

True enough, if the virus takes us to the kind of depths not seen in these parts since the Great Depression, all bets are off. Short of that, however, no one should lightly assume that Americans will long tolerate a statist revolution imposed on their fears. And thank goodness for that. Our unruliness, our unwillingness to do what we’re told, is part of what makes our society so dynamic and prosperous.

COVID-19 will shake the world. When it has gone, a new scene will open. We can say very little now about what is going to change. But we can hope that Americans will remain a creative, opinionated, fiercely independent lot. And we can be confident that, come what may, planned administration will remain a source of problems, while unplanned free enterprise will remain the surest source of solutions.


[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Justin “Gus” Hurwitz, (Associate Professor of Law & Co-director, Space, Cyber, and Telecom Law Program, University of Nebraska; Director of Law & Economics Programs, ICLE).]

I’m a big fan of APM Marketplace, including Molly Wood’s tech coverage. But they tend to slip into advocacy mode—I think without realizing it—when it comes to telecom issues. This was on full display earlier this week in a story on widespread decisions by ISPs to lift data caps during the ongoing COVID-19 crisis (available here, the segment runs from 4:30-7:30). 

As background, all major ISPs have lifted data caps on their Internet service offerings. This is in recognition of the fact that most Americans are spending more time at home right now. During this time, many of us are teleworking, so we are making more intensive use of our Internet connections during the day; many have children at home during the day who are using the Internet for both education and entertainment; and we are going out less in the evening, so we are making more use of services like streaming video for evening entertainment. All of these activities require bandwidth—and, like many businesses around the country, ISPs are taking steps (such as eliminating data caps) that will prevent undue consumer harm as we work to cope with COVID-19.

The Marketplace take on data caps

After introducing the segment, Wood and Marketplace host Kai Ryssdal turn to a misinformation and insinuation-laden discussion of telecommunications policy. Wood asserts that one of the ISPs’ “big arguments against net neutrality regulation” was that they “need [data] caps to prevent congestion on networks.” Ryssdal responds by asking, coyly, “so were they just fibbing? I mean … ya know …”

Wood responds that “there have been times when these arguments were very legitimate,” citing the early days of 4G networks. She then asserts that the United States has “some of the most expensive Internet speeds in the developed world” before jumping to the assertion that advocates will now have the “data to say that [data] caps are unnecessary.” She then goes on to argue—and here she loses any pretense of reporter neutrality—that “we are seeing that the Internet really is a utility” and that “frankly, there’s no, uhm, ongoing economic argument for [data caps].” She even notes that we can “hear [her] trying to be professional” in the discussion.

Unpacking that mess

It’s hard to know where to start with the Wood and Ryssdal discussion; it is such a muddled mess. Needless to say, it is unfortunate to see tech reporters doing what tech reporters seem to do best: confusing poor and thinly veiled policy arguments for news.

Let’s start with Wood’s first claim, that ISPs (and, for that matter, others) have long argued that data caps are required to manage congestion and that this has been one of their chief arguments against net neutrality regulations. This is simply not true. 

Consider the 2015 Open Internet Order (OIO)—the net neutrality regulations adopted by the FCC under President Obama. The OIO discusses data caps (“usage allowances”) in paragraphs 151-153. It explains:

The record also reflects differing views over some broadband providers’ practices with respect to usage allowances (also called “data caps”). … Usage allowances may benefit consumers by offering them more choices over a greater range of service options, and, for mobile broadband networks, such plans are the industry norm today, in part reflecting the different capacity issues on mobile networks. Conversely, some commenters have expressed concern that such practices can potentially be used by broadband providers to disadvantage competing over-the-top providers. Given the unresolved debate concerning the benefits and drawbacks of data allowances and usage-based pricing plans,[FN373] we decline to make blanket findings about these practices and will address concerns under the no-unreasonable interference/disadvantage on a case-by-case basis. 

[FN373] Regarding usage-based pricing plans, there is similar disagreement over whether these practices are beneficial or harmful for promoting an open Internet. Compare Bright House Comments at 20 (“Variable pricing can serve as a useful technique for reducing prices for low usage (as Time Warner Cable has done) as well as for fairly apportioning greater costs to the highest users.”) with Public Knowledge Comments at 58 (“Pricing connectivity according to data consumption is like a return to the use of time. Once again, it requires consumers keep meticulous track of what they are doing online. With every new web page, new video, or new app a consumer must consider how close they are to their monthly cap. . . . Inevitably, this type of meter-watching freezes innovation.”), and ICLE & TechFreedom Policy Comments at 32 (“The fact of the matter is that, depending on background conditions, either usage-based pricing or flat-rate pricing could be discriminatory.”). 

The 2017 Restoring Internet Freedom Order (RIFO), which rescinded much of the OIO, offers little discussion of data caps—its approach follows that of the OIO, leaving ISPs free to adopt data cap policies but requiring that they disclose them. It does, however, note that small ISPs expressed concern, and provided evidence, that fear of lawsuits had forced small ISPs to abandon policies like data caps, “which would have benefited its customers by lowering its cost of Internet transport.” (See paragraphs 104 and 249.) The 2010 OIO makes no reference to data caps or usage allowances.

What does this tell us about Wood’s characterization of policy debates about data caps? The only discussion of congestion as a basis for data caps comes in the context of mobile networks. Wood gets this right: data caps have been, and continue to be, important for managing data use on mobile networks. But most people would be hard pressed to argue that these concerns are not still valid: the only people who have not experienced congestion on their mobile devices are those who do not use mobile networks.

But the discussion of data caps on broadband networks has nothing to do with congestion management. The argument against data caps is that they can be used anticompetitively. Cable companies, for instance, could use data caps to harm unaffiliated streaming video providers (that is, Netflix) in order to protect their own video services from competition; or they could exclude preferred services from data caps in order to protect them from competitors.

The argument for data caps, on the other hand, is about the cost of Internet service. Data caps are a way of offering lower priced service to lower-need users. Or, conversely, they are a way of apportioning the cost of those networks in proportion to the intensity of a given user’s usage. Higher-intensity users are more likely to be Internet enthusiasts; lower-intensity users are more likely to use it for basic tasks, perhaps no more than e-mail or light web browsing. What’s more, if all users faced the same prices regardless of their usage, there would be no marginal cost to incremental usage: users (and content providers) would have no incentive not to use more bandwidth. This does not mean that users would face congestion without data caps—ISPs may, instead, be forced to invest in higher capacity interconnection agreements. (Importantly, interconnection agreements are often priced in terms of aggregate data transferred, not the speeds of those data transfers—that is, they are written in terms of data caps!—so it is entirely possible that an ISP would need to pay for greater interconnection capacity despite not experiencing any congestion on its network!)

In other words, the economic argument for data caps, recognized by the FCC under both the Obama and Trump administrations, is that they allow more people to connect to the Internet by allowing a lower-priced access tier, and that they keep average prices lower by creating incentives not to consume bandwidth merely because you can. In more technical economic terms, they allow potentially beneficial price discrimination and eliminate a potential moral hazard. Contrary to Wood’s snarky, unprofessional, response to Ryssdal’s question, there is emphatically not “no ongoing economic argument” for data caps.
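
A toy example may help make the price discrimination point concrete. The numbers below are entirely hypothetical (the per-GB transport cost, usage levels, and willingness to pay are my own assumptions, not data about any actual ISP), but they illustrate how a capped, lower-priced tier can keep a light user connected who would be priced out of a one-size-fits-all unlimited plan, while charging heavy users in rough proportion to the costs their usage creates.

```python
# Hypothetical illustration of tiered (capped) pricing vs. a single unlimited plan.
# All numbers are made up for illustration; they are not real ISP cost data.
COST_PER_GB = 0.10   # assumed per-GB cost of carrying traffic

users = {
    # name: (monthly usage in GB, willingness to pay per month in dollars)
    "light": (20, 25.0),
    "heavy": (500, 80.0),
}

# Single unlimited plan: one price set to cover the average per-user traffic
# cost plus a fixed margin for the rest of the network.
avg_traffic_cost = sum(gb for gb, _ in users.values()) / len(users) * COST_PER_GB
unlimited_price = avg_traffic_cost + 15.0

# Two tiers: a capped plan sized for light users, a pricier plan for heavy users.
capped_price = users["light"][0] * COST_PER_GB + 15.0
heavy_price = users["heavy"][0] * COST_PER_GB + 15.0

for name, (gb, wtp) in users.items():
    tier_price = capped_price if name == "light" else heavy_price
    print(f"{name}: unlimited ${unlimited_price:.0f} "
          f"({'buys' if wtp >= unlimited_price else 'priced out'}), "
          f"tiered ${tier_price:.0f} ({'buys' if wtp >= tier_price else 'priced out'})")
```

With these assumed numbers, the light user is priced out of the $41 unlimited plan but happily buys the $17 capped tier, while the heavy user pays $65 under the tiered menu, bearing more of the cost that heavy usage creates.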

Why lifting data caps during this crisis ain’t no thing

Even if the purpose of data caps were to manage congestion, Wood’s discussion again misses the mark. She argues that the ability to lift caps during the current crisis demonstrates that they are not needed during non-crisis periods. But the usage patterns that we are concerned about facilitating during this period are not normal, and cannot meaningfully be used to make policy decisions relevant to normal periods. 

The reason for this is captured in the below image from a recent Cloudflare discussion of how Internet usage patterns are changing during the crisis:

This image shows US Internet usage as measured by Cloudflare. The red line is the usage on March 13 (the peak is President Trump’s announcement of a state of emergency). The grey lines are the preceding several days of traffic. (The x-axis is UTC time; ET is UTC-4.) Although this image was designed to show the measurable spike in traffic corresponding to the President’s speech, it also shows typical weekday usage patterns. The large “hump” on the left side shows evening hours in the United States. The right side of the graph shows usage throughout the day. (This chart shows nation-wide usage trends, which span multiple time zones. If it were to focus on a single time zone, there would be a clear dip between daytime “business” and evening “home” hours, as can be seen here.)

More important, what this chart demonstrates is that the “peak” in usage occurs in the evening, when everyone is at home watching their Netflix. It does not occur during the daytime hours—the hours during which telecommuters are likely to be video conferencing or VPN’ing in to their work networks, or during which students are likely to be doing homework or conferencing into their meetings. And, to the extent that there will be an increase in daytime usage, it will be somewhat offset by (likely significantly) decreased usage due to coming economic lethargy. (For Kai Ryssdal, lethargy is synonymous with recession; for Aaron Sorkin fans, it is synonymous with bagel). 

This illustrates one of the fundamental challenges with pricing access to networks. Networks are designed to carry their peak load. When they are operating below capacity, the marginal cost of additional usage is extremely low; once they exceed that capacity, the marginal cost of additional usage is extremely high. If you price network access based upon average usage, you are going to get excessive usage during peak hours; if you price access based upon the peak-hour marginal cost, you are going to get significant deadweight loss (under-use) during non-peak hours.
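
To make that intuition concrete, here is a deliberately stylized numerical sketch. The linear demand curve, the capacity cost, and the prices below are illustrative assumptions of my own, not real network data; the point is simply that no single uniform price can be right in both periods.

```python
# A stylized illustration of the peak-load pricing problem described above.
# All numbers and the linear demand curve are illustrative assumptions.

def quantity_demanded(price, intercept=100.0, slope=1.0):
    """Linear demand: how much usage consumers want at a given per-unit price."""
    return max(intercept - slope * price, 0.0)

CAPACITY_COST = 40.0  # cost per unit of peak capacity (incurred only at peak)
OFFPEAK_MC = 0.0      # marginal cost of off-peak usage is roughly zero

# Efficient peak-load prices: peak usage pays for the capacity it requires;
# off-peak usage pays only its (near-zero) marginal cost.
peak_efficient = quantity_demanded(CAPACITY_COST)
offpeak_efficient = quantity_demanded(OFFPEAK_MC)

# A single uniform price somewhere in between (an "average cost" price)
# is wrong in both periods: too low at peak, too high off-peak.
uniform_price = CAPACITY_COST / 2
peak_uniform = quantity_demanded(uniform_price)
offpeak_uniform = quantity_demanded(uniform_price)

print(f"peak usage:     efficient={peak_efficient:5.1f}  uniform price={peak_uniform:5.1f}")
print(f"off-peak usage: efficient={offpeak_efficient:5.1f}  uniform price={offpeak_uniform:5.1f}")
# Under the uniform price, peak usage (80) exceeds what the network was built
# for (60), while off-peak usage (80) falls short of the efficient level (100):
# congestion at peak, under-use (deadweight loss) off-peak.
```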

Data caps are one way to deal with this issue. Since most users making the most intensive use of the network are all doing so at the same time (at peak hour), this incremental cost either discourages this use or provides the revenue necessary to expand capacity to accommodate their use. But data caps do not make sense during non-peak hours, when marginal cost is nearly zero. Indeed, imposing increased costs on users during non-peak hours is regressive. It creates deadweight losses during those hours (and, in principle, also during peak hours: ideally, we would price non-peak-hour usage less than peak-hour usage in order to “shave the peak” (a synonym, I kid you not, for “flatten the curve”)). 

What this all means

During the current crisis, we are seeing a significant increase in usage during non-peak hours. This imposes nearly zero incremental cost on ISPs. Indeed, it is arguably to their benefit to encourage use during this time, to “flatten the curve” of usage in the evening, when networks are, in fact, likely to experience congestion.

But there is a flipside, which we have seen develop over the past few days: how do we manage peak-hour traffic? On Thursday, the EU asked Netflix to reduce the quality of its streaming video in order to avoid congestion. Netflix is the single greatest driver of consumer-focused Internet traffic. And while being able to watch the Great British Bake Off in ultra-high definition 3D HDR 4K may be totally awesome, its value pales in comparison to keeping the American economy functioning.

Wood suggests that ISPs’ decision to lift data caps is of relevance to the network neutrality debate. It isn’t. But the impact of Netflix traffic on competing applications may be. The net neutrality debate created unmitigated hysteria about prioritizing traffic on the Internet. Many ISPs have said outright that they won’t even consider investing in prioritization technologies because of the uncertainty around the regulatory treatment of such technologies. But such technologies clearly have uses today. Video conferencing and Voice over IP protocols should be prioritized over streaming video. Packets to and from government, healthcare, university, and other educational institutions should be prioritized over Netflix traffic. It is hard to take anyone who would disagree with this proposition seriously. Yet the net neutrality debate almost entirely foreclosed development of these technologies. While they may exist, they are not in widespread deployment, and are not familiar to consumers or consumer-facing network engineers.

To the very limited extent that data caps are relevant to net neutrality policy, it is about ensuring that millions of people binge watching Bojack Horseman (seriously, don’t do it!) don’t interfere with children Skyping with their grandparents, a professor giving a lecture to her class, or a sales manager coordinating with his team to try to keep the supply chain moving.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Luke Froeb, (William C. Oehmig Chair in Free Enterprise and Entrepreneurship, Owen Graduate School of Management, Vanderbilt University; former Chief Economist at the US DOJ Antitrust Division and US FTC).]

Policy makers are using the term “flattening the curve” to describe the intended effect of social distancing and travel restrictions.  In this post, we use a cellular automata model of infection to show how those policies might do this.

DISCLAIMER:  THIS IS AN UNREALISTIC MODEL, FOR TEACHING PURPOSES ONLY.

The images below are from a cellular automata model of the spread of a disease on a 100×100 grid.  White dots represent uninfected; red dots, infected; green dots, survivors; black dots, deaths.  The key parameters are:

  • death rate = 1%, given that a person has been infected.
  • r0 = 2 is the basic reproduction number, the number of people infected by each infected person (e.g., here are estimates for the coronavirus). We model social distancing as reducing this number.
  • mean distance of infection = 5.0 cells away from an infected cell, modeled as a standard normal distribution over unit distance. We model travel restrictions as reducing this number. (A rough sketch of this kind of model appears below.)
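
Because the post does not include the original code (it was written by Steven Tschantz; see the note at the end), here is a minimal Python sketch of this kind of grid-based model. The mechanics below (drawing infection targets from a normal distribution around each infected cell, resolving every infection after one period, clipping targets to the edge of the grid) are my own assumptions for illustration, not a reconstruction of the actual model.

```python
# A minimal sketch of a grid-based ("cellular automata") infection model.
# Parameter names and mechanics are illustrative assumptions, not the
# original code used to produce the figures in this post.
import numpy as np

SUSCEPTIBLE, INFECTED, IMMUNE, DEAD = 0, 1, 2, 3

def simulate(size=100, r0=2.0, mean_dist=5.0, death_rate=0.01,
             periods=40, seed=0):
    """Return per-period counts of susceptible, infected, immune, and dead cells."""
    rng = np.random.default_rng(seed)
    grid = np.full((size, size), SUSCEPTIBLE, dtype=np.int8)
    grid[size // 2, size // 2] = INFECTED  # the outbreak begins at the center

    history = []
    for _ in range(periods):
        currently_infected = np.argwhere(grid == INFECTED)
        # Each infected cell infects roughly r0 others, at normally
        # distributed distances from itself; only susceptible cells can
        # become infected (the immune and the dead cannot).
        for i, j in currently_infected:
            for _ in range(rng.poisson(r0)):
                di, dj = rng.normal(0.0, mean_dist, size=2)
                ti = int(np.clip(round(i + di), 0, size - 1))
                tj = int(np.clip(round(j + dj), 0, size - 1))
                if grid[ti, tj] == SUSCEPTIBLE:
                    grid[ti, tj] = INFECTED
        # After one period (the life span of an infection), each cell that
        # started the period infected either dies or recovers with immunity.
        for i, j in currently_infected:
            grid[i, j] = DEAD if rng.random() < death_rate else IMMUNE
        history.append([(grid == s).sum()
                        for s in (SUSCEPTIBLE, INFECTED, IMMUNE, DEAD)])
    return np.array(history)

if __name__ == "__main__":
    counts = simulate(r0=2.0, mean_dist=5.0)
    peak_period = counts[:, 1].argmax()
    print(f"peak of {counts[peak_period, 1]} newly infected cells in period {peak_period}")
```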

In the video above, the infected cells (red) spread slowly out from the center, where the outbreak began.  Most infections are on the “border” of the infected area because that is where the infected cells are more likely to infect uninfected ones.

Infections eventually die out because many of the people who come in contact with the infection have already developed an immunity (green) or are dead (black).  This is what Boris Johnson referred to as “herd immunity.”

We graph the spread of the infection above.  The vertical axis represents people on the grid (10,000 = 100×100) and the horizontal axis represents time, denoted in periods (the life span of an infection).  The blue line represents the uninfected population, the green line the infected population, and the orange line, the infection rate.

In the simulation and graph below, we increase r0 (the basic reproduction number) from 2 to 3, and the mean travel distance from 5 to 25.  We see that more people get infected (higher green line), and much more quickly (peak infections occur at period 11 instead of period 15).

What policy makers mean by “flattening the curve” is flattening the orange infection curve (compare the high orange peak in the bottom graph to the smaller, flatter peak in the one above) with social distancing and travel restrictions so that our hospital system does not get overwhelmed by infected patients.
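
For readers who want to reproduce that comparison with the hypothetical sketch above (assuming it is saved as grid_sir.py), the two scenarios could be run as follows; again, this is illustrative code, not the model that generated the figures.

```python
# Hypothetical usage of the sketch above, assumed to be saved as grid_sir.py.
from grid_sir import simulate

slow = simulate(r0=2.0, mean_dist=5.0)    # with social distancing / travel limits
fast = simulate(r0=3.0, mean_dist=25.0)   # more contacts, longer travel

print("peak infection period (slow spread):", slow[:, 1].argmax())
print("peak infection period (fast spread):", fast[:, 1].argmax())
print("never infected (slow spread):", slow[-1, 0])
print("never infected (fast spread):", fast[-1, 0])
```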

HT:  Colleague Steven Tschantz designed and wrote the code.

This post originally appeared on the Managerial Econ Blog