
[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Luke Froeb, (William C. Oehmig Chair in Free Enterprise and Entrepreneurship, Owen Graduate School of Management, Vanderbilt University; former Chief Economist at the US DOJ Antitrust Division and US FTC).]

Summary: Trying to quarantine everyone until a vaccine is available doesn’t seem feasible. In addition, restrictions mainly delay when the epidemic explodes, e.g., see the previous post on Flattening the Curve. In this essay, we propose subsidies to both individuals and businesses to better align private incentives with social goals, while leaving it up to individuals and businesses to decide for themselves which risks to take.

For example, testing would give individuals the information necessary to make the best decision about whether to shelter in place or, if they have recovered and are now immune, to come out. But the negative consequences of a positive test, e.g., quarantine, can deter people from getting tested. Rewards for those who present for a test and submit to isolation when they have active disease could offset these disincentives.

Another problem is that many people aren’t free on their own to implement protective measures related to work. Employers could be offered incentives to allow work from home, to scale back production, or to provide extra protection for workers. Businesses that offer worker health care might be given a health care subsidy in exchange for sharing in the extra virus-related health care costs incurred by their workers.

Essay: In the midst of an epidemic, it is evident that social policy must adjust in furtherance of the public good. Institutions of all sorts, not least government, will have to take extraordinary actions. People should expect their relationships with these institutions to change, at least for some time. These adjustments will need to be informed by applicable epidemiological data and models, subject to the usual uncertainties. But the problems to be faced are not only epidemiological but economic. There will be tradeoffs to be made between safer, restrictive rules and riskier, unconstrained behaviors. The costs to be faced are both social and individual. As such, we should not expect a uniform public policy to make suitable choices for all individuals, nor assume that individuals making good decisions for themselves will combine for a good social outcome. Imagine instead an alternative, where social costs are evaluated and appropriate individual incentives are devised, allowing individuals to make informed decisions with respect to their own circumstances and the social externalities reflected in those incentives.

We are currently in the US at the beginning of the coronavirus epidemic. This is not the flu. It is maybe ten times as lethal as the flu, perhaps a little more lethal proportionally in the most susceptible populations. It is new, so there is little or no natural immunity, and no vaccine will be available for maybe 18 months. Like the flu, there is no really effective treatment yet for those who become sickest, particularly because the virus is most deadly through the complications it causes with existing conditions, so treatment options should perhaps not be expected to help with epidemic spread or to reduce lethality. It is spread relatively easily from person to person, though not as easily as measles, perhaps significantly before the infected person shows symptoms. And it may be that people can get the virus, become contagious and spread the disease, while never showing symptoms themselves. We now have a test for active coronavirus, though it is still somewhat hard to get in the US, and we can expect at some point in the near future to have an antibody test that will show when people either have or have had and recovered from the virus.

There are some obvious social and individual costs to people catching this virus. First there are the deaths from the disease. Then there are the costs of treating those ill. Finally, there are costs from the lost productivity of those fallen ill. If there is a sudden and extreme increase in the numbers of sick people, all of these costs can be expected to rise, perhaps significantly. When hospitals have patients in excess of existing capacity, expanding capacity will be difficult and expensive, and death rates can be expected to rise.

An ideal public health strategy in the face of an epidemic is to keep people from falling sick. At the beginning of the epidemic, the few people with the disease need to be found and quarantined, and those with whom they have had contact need to be traced and isolated so that any carrying the disease can be stopped from passing it on. If there is no natural reservoir that reintroduces the disease, it may be possible to eradicate it. When there were few cases, this might have been practical, but that effort has clearly failed, and there are far too many carriers of the disease now to track.

Now the emphasis must be on measures to reduce transmission of the disease. This entails modifying behaviors that facilitate the disease passing from person to person. If the rate of infection can be reduced enough, to the point where the number of people each infected person can be expected to infect is less than one on average, then the disease will naturally die out. Once most people have had the disease, or have been vaccinated, most of the people an infected person would have infected are immune, so the rate of new infections will naturally fall to less than one and the disease will die out. Because so many people have immunity to many varieties of the flu, its spread can be controlled in particular through vaccination, the only difficulty being that new strains are appearing all of the time. The difficulty with coronavirus is that simple measures for reducing the spread of the disease do not seem to be effective enough, and extreme measures will be much more expensive. Moreover, because the coronavirus is a pandemic, even if one region succeeds in reducing transmission and has the disease fade, reintroduction from other regions can be expected to relight the fire of epidemic. Measures for reducing transmission will need to be maintained for some time, likely until a vaccine is available or natural herd immunity is established through the majority of the population having had the disease.
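To make the threshold in this reasoning concrete, here is the standard textbook relation (not taken from the post) between the basic reproduction number and the point at which an epidemic recedes:

```latex
% With a fraction s of the population still susceptible, the effective
% reproduction number is R_eff = R0 * s. The epidemic recedes once R_eff < 1,
% i.e., once the immune share (1 - s) exceeds the herd-immunity threshold:
\[
  R_{\mathrm{eff}} = R_0 \, s < 1
  \quad\Longleftrightarrow\quad
  1 - s > 1 - \frac{1}{R_0}.
\]
% For example, R0 = 2 implies a threshold of 50%; R0 = 3 implies roughly 67%.
```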

The flu strikes every year and we seem to tolerate it without extreme measures of social distancing. Perhaps there’s nothing that needs to be done now, nothing worth doing now, to slow the coronavirus epidemic. But what would the cost of such an attitude be? The virus would spread like wildfire, infecting in a matter of months perhaps the majority of the population. Even with an estimate of 70 to 150 million Americans infected, a 1% death rate means 0.7 to 1.5 million would die. But that many cases all at once would overwhelm the medical system and the intensive care required to keep the death rate even this low. A surge in cases might therefore mean an increase in the death rate.

At the other extreme, we seem to be heading into a period where everyone is urged to shelter-in-place, or required to be locked down, so as to reduce social contacts to near zero and thereby interrupt the spread of the virus. This may be effective, perhaps even necessary to prevent an immediate surge of demand on hospitals. But it is also expensive in the disruptions it entails. The number of active infections can be drastically reduced over a time scale corresponding to an individual’s course of the disease. Removing the restrictions would mean then that the epidemic resumes from the new lower level with somewhat more of the population already immune. It seems unlikely the disease can be eradicated by such measures because of the danger of reintroduction from other regions where the virus is active. The strategy of holding everyone in this isolation until a vaccine becomes available isn’t likely to be palatable. Releasing restrictions slowly so as to keep the level of the disease at an acceptable level would likely mean that most of the population would get the disease before the vaccine became available. Even if the most at risk population remained isolated, the estimated death rate over the majority of the population implies a nontrivial number of deaths. How do we decide how many and who to risk in order to get the economy functioning?

Consider then a system of incentives to individuals to help communicate the social externalities and guide their decisions. If there is a high prevalence of active disease in the general population, then hospitals will see excessive demand and it will be unsafe for high-risk individuals to expose themselves to even minimal social interactions. A low prevalence of active disease can be more easily tolerated by hospitals, with a lower resulting death rate, and higher-risk individuals may be more able to interact and provide for themselves. To promote a lower level of disease, individuals should be incentivized to delay getting sick by practicing social distancing and reducing contacts, trading this off against ordinary necessary activity and respecting their personal risk category and risk tolerance. This lower level of disease is the “flattening of the curve”, but it also imagines that the most at-risk segment of the population might choose to isolate for a longer term, hoping to hold out for a vaccine.

If later disease or no disease is preferable, how do we incentivize it? Can we at the same time incentivize more usual infection control measures? Eventually everyone will either need to take an antibody test, to determine that they have had the disease and developed immunity and so are safe to resume all normal activities, or else need the vaccination. People may also be tested for active disease. We can’t penalize people for showing up with active disease, as this would mean they would skip the test and likely continue infecting other people. We should reward those who present for a test and submit to isolation when they have active disease. We can also reward those who submit to the antibody test and test positive (for the first time), who can then resume normal activities. On the other hand, we want people to delay when they get sick through prudent measures. Thus it would be a good idea to increase over time the reward for first showing up with the disease. To avoid incentivizing delay in testing, the reward for a positive test should increase as a function of the last antibody test that was negative, i.e., the reward is larger if you can prove you had avoided the disease as of your last antibody test. The size of the rewards should be significant enough to cause a change of behavior but commensurate with the social cost savings induced. If we are planning on giving Americans multiple $1000 checks to get the economy going anyway, then such monies could alternatively be spent on these incentives. This imagines that antibody testing will be available, relatively easy, and inexpensive within maybe three months, and that antibody tests might be repeated maybe every three months. And of course this assumes the trajectory of the epidemic can be controlled well enough in the short term and predicted well enough in the long term to make such a scheme possible.
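As a rough illustration of the reward schedule described above, here is a minimal sketch; the functional form and every dollar amount are assumptions for illustration, not figures from the post.

```python
# Hypothetical sketch of the proposed reward for presenting with active disease and
# submitting to isolation: the reward grows the later in the epidemic you first get
# sick, and grows further if a recent negative antibody test proves you had avoided
# the disease until then. All parameter values are illustrative assumptions.
def isolation_reward(weeks_since_outbreak, weeks_since_last_negative_antibody_test=None,
                     base=200.0, growth_per_week=25.0, proof_bonus_per_week=10.0,
                     cap=2000.0):
    reward = base + growth_per_week * weeks_since_outbreak
    if weeks_since_last_negative_antibody_test is not None:
        # Weeks of demonstrably avoiding the disease, as of the last negative test.
        weeks_avoided = weeks_since_outbreak - weeks_since_last_negative_antibody_test
        reward += proof_bonus_per_week * max(0, weeks_avoided)
    return min(reward, cap)

# Getting sick in week 12 with a negative antibody test two weeks earlier pays more
# than getting sick in week 12 with no prior test, which pays more than week 4.
print(isolation_reward(12, weeks_since_last_negative_antibody_test=2))  # 600.0
print(isolation_reward(12))                                             # 500.0
print(isolation_reward(4))                                              # 300.0
```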

HT:  Colleague Steven Tschantz

This post originally appeared on the Managerial Econ Blog

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Luke Froeb, (William C. Oehmig Chair in Free Enterprise and Entrepreneurship, Owen Graduate School of Management, Vanderbilt University; former Chief Economist at the US DOJ Antitrust Division and US FTC).]

Policy makers are using the term “flattening the curve” to describe the effects of social distancing and travel restrictions. In this post, we use a cellular automata model of infection to show how they might do this.

DISCLAIMER:  THIS IS AN UNREALISTIC MODEL, FOR TEACHING PURPOSES ONLY.

The images below are from a cellular automata model of the spread of a disease on a 100×100 grid. White dots represent uninfected; red dots, infected; green dots, survivors; black dots, deaths. The key parameters are listed below (a rough sketch of such a model in code follows the list):

  • death rate = 1%, given that a person has been infected.
  • r0 = 2 is the basic reproduction number, the number of people infected by each infected person, e.g., here are estimates for coronavirus. We model social distancing as reducing this number.
  • mean distance of infection = 5.0 cells away from an infected cell, modeled as a standard normal distribution over unit distance. We model travel restrictions as reducing this number.
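A minimal sketch, in Python, of the kind of cellular automata described above (this is an assumed reimplementation for illustration, not the authors’ code; the post states the distance model loosely, so the normal draw below is an approximation):

```python
import numpy as np

GRID = 100                                    # 100x100 grid of people
UNINFECTED, INFECTED, IMMUNE, DEAD = 0, 1, 2, 3

def step(state, rng, r0=2.0, mean_dist=5.0, death_rate=0.01):
    """One period: each infected cell makes roughly r0 infection attempts at
    normally distributed distances, then either dies or becomes immune."""
    new_state = state.copy()
    for x, y in zip(*np.where(state == INFECTED)):
        for _ in range(rng.poisson(r0)):                       # contacts this period
            dx, dy = rng.normal(0.0, mean_dist, size=2).round().astype(int)
            tx, ty = (x + dx) % GRID, (y + dy) % GRID          # wrap at the edges
            if new_state[tx, ty] == UNINFECTED:                # only susceptibles catch it
                new_state[tx, ty] = INFECTED
        new_state[x, y] = DEAD if rng.random() < death_rate else IMMUNE
    return new_state

# Seed a single infection at the center and run until the infection dies out.
rng = np.random.default_rng(0)
state = np.zeros((GRID, GRID), dtype=int)
state[GRID // 2, GRID // 2] = INFECTED
infected_per_period = []
while (state == INFECTED).any():
    infected_per_period.append(int((state == INFECTED).sum()))
    state = step(state, rng)
```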

In the video above, the infected cells (red) spread slowly out from the center, where the outbreak began. Most infections are on the “border” of the infected area because that is where the infected cells are more likely to infect uninfected ones.

Infections eventually die out because many of the people who come in contact with the infection have already developed an immunity (green) or are dead (black). This is what Boris Johnson referred to as “Herd Immunity.”

We graph the spread of the infection above. The vertical axis represents people on the grid (10,000 = 100×100) and the horizontal axis represents time, denoted in periods (the life span of an infection). The blue line represents the uninfected population, the green line the infected population, and the orange line the infection rate.

In the simulation and graph below, we increase r0 (the basic reproduction number) from 2 to 3, and the mean travel distance from 5 to 25. We see that more people get infected (higher green line), and much more quickly (peak infections occur at period 11 instead of period 15).
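For instance, reusing the step() function from the sketch above, the two parameterizations compared here could be rerun and plotted side by side (illustrative code, not the authors’):

```python
import numpy as np
import matplotlib.pyplot as plt

def run(r0, mean_dist, seed=0):
    """Run the sketch model to extinction and return active infections per period."""
    rng = np.random.default_rng(seed)
    state = np.zeros((GRID, GRID), dtype=int)
    state[GRID // 2, GRID // 2] = INFECTED
    curve = []
    while (state == INFECTED).any():
        curve.append(int((state == INFECTED).sum()))
        state = step(state, rng, r0=r0, mean_dist=mean_dist)
    return curve

plt.plot(run(2.0, 5.0), label="r0=2, mean distance=5")    # flatter, later peak
plt.plot(run(3.0, 25.0), label="r0=3, mean distance=25")  # higher, earlier peak
plt.xlabel("period (life span of an infection)")
plt.ylabel("active infections")
plt.legend()
plt.show()
```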

What policy makers mean by “flattening the curve” is flattening the orange infection curve (compare the high orange peak in the bottom graph to the smaller, flatter peak in the one above) with social distancing and travel restrictions so that our hospital system does not get overwhelmed by infected patients.

HT:  Colleague Steven Tschantz designed and wrote the code.

This post originally appeared on the Managerial Econ Blog

[TOTM: The following is the second in a series of posts by TOTM guests and authors on the FTC v. Qualcomm case, currently awaiting decision by Judge Lucy Koh in the Northern District of California. The entire series of posts is available here.

This post is authored by Luke Froeb (William C. Oehmig Chair in Free Enterprise and Entrepreneurship at the Owen Graduate School of Management at Vanderbilt University; former chief economist at the Antitrust Division of the US Department of Justice and the Federal Trade Commission), Michael Doane (Competition Economics, LLC) & Mikhael Shor (Associate Professor of Economics, University of Connecticut).]

[Froeb, Doane & Shor: This post does not attempt to answer the question of what the court should decide in FTC v. Qualcomm because we do not have access to the information that would allow us to make such a determination. Rather, we focus on economic issues confronting the court by drawing heavily from our writings in this area: Gregory Werden & Luke Froeb, Why Patent Hold-Up Does Not Violate Antitrust Law; Luke Froeb & Mikhael Shor, Innovators, Implementors and Two-sided Hold-up; Bernard Ganglmair, Luke Froeb & Gregory Werden, Patent Hold Up and Antitrust: How a Well-Intentioned Rule Could Retard Innovation.]

Not everything is “hold-up”

It is not uncommon—in fact it is expected—that parties to a negotiation would have different opinions about the reasonableness of any deal. Every buyer asks for a price as low as possible, and sellers naturally request prices at which buyers (feign to) balk. A recent movement among some lawyers and economists has been to label such disagreements in the context of standard-essential patents not as a natural part of bargaining, but as dispositive proof of “hold-up,” or the innovator’s purported abuse of newly gained market power to extort implementers. We have four primary issues with this hold-up fad.

First, such claims of “hold-up” are trotted out whenever an innovator’s royalty request offends the commentator’s sensibilities, and usually with reference to a theoretical hold-up possibility rather than any matter-specific evidence that hold-up is actually present. Second, as we have argued elsewhere, such arguments usually ignore the fact that implementers of innovations often possess significant countervailing power to “hold out” as well. This is especially true as implementers have successfully pushed to curtail injunctive relief in standard-essential patent cases. Third, as Greg Werden and Froeb have recently argued, it is not clear why patent hold-up, even where it might exist, need implicate antitrust law rather than be adequately handled as a contractual dispute. Lastly, it is certainly not the case that every disagreement over the value of an innovation is an exercise in hold-up, as even economists and lawyers have not reached anything resembling a consensus on the correct interpretation of a “fair” royalty.

At the heart of this case (and many recent cases) is (1) an indictment of Qualcomm’s desire to charge royalties to the maker of consumer devices based on the value of its technology and (2) a lack (to the best of our knowledge from public documents) of well vetted theoretical models that can provide the underpinning for the theory of the case. We discuss these in turn.

The smallest component “principle”

In arguing that “Qualcomm’s royalties are disproportionately high relative to the value contributed by its patented inventions,” (Complaint, ¶ 77) a key issue is whether Qualcomm can calculate royalties as a percentage of the price of a device, rather than a small percentage of the price of a chip. (Complaint, ¶¶ 61-76).

So what is wrong with basing a royalty on the price of the final product? A fixed portion of the price is not a perfect proxy for the value of embedded intellectual property, but it is a reasonable first approximation, much like retailers use fixed markups for products rather than optimizing the price of each SKU when the cost of individual determinations negates any benefit of doing so. The FTC’s main issue appears to be that the price of a smartphone reflects “many features in addition to the cellular connectivity and associated voice and text capabilities provided by early feature phones.” (Complaint, ¶ 26). This completely misses the point. What would the value of an iPhone be if it contained all of those “many features” but without the phone’s communication abilities? We have some idea, as Apple has for years marketed its iPod Touch for a quarter of the price of its iPhone line. Yet, “[f]or most users, the choice between an iPhone 5s and an iPod touch will be a no-brainer: Being always connected is one of the key reasons anyone owns a smartphone.”

What the FTC and proponents of the smallest component principle miss is that some of the value of all components of a smartphone is derived directly from the phone’s communication ability. Smartphones didn’t initially replace small portable cameras because they were better at photography (in fact, smartphone cameras were and often continue to be much worse than dedicated cameras). The value of a smartphone camera is that it combines picture taking with immediate sharing over text or through social media. Thus, contrary to the FTC’s claim that most of the value of a smartphone comes from features other than communication, many features on a smartphone derive much of their value from the communication powers of the phone.

In the alternative, what the FTC wants is for the royalty not to reflect the value of the intellectual property but instead to be a small portion of the cost of some chipset—akin to an author of a paperback negotiating royalties based on the cost of plain white paper. As a matter of economics, a single chipset royalty cannot allow an innovator to capture the value of its innovation. This, in turn, implies that innovators underinvest in future technologies. As we have previously written:

For example, imagine that the same component (incorporating the same essential patent) is used to help stabilize flight of both commercial airplanes and toy airplanes. Clearly, these industries are likely to have different values for the patent. By negotiating over a single royalty rate based on the component price, the innovator would either fail to realize the added value of its patent to commercial airlines, or (in the case that the component is targeted primarily at commercial airlines) would not realize the incremental market potential from the patent’s use in toy airplanes. In either case, the innovator will not be negotiating over the entirety of the value it creates, leading to too little innovation.
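To make the point in the quoted passage concrete, here is a small numerical sketch; the component price, patent values, and market sizes are all invented for illustration and are not from the post or the case.

```python
# Same patented component (price $100) used in commercial airplanes and toy airplanes,
# but the value the patent adds per unit differs enormously across the two uses.
# A royalty tied to the component price cannot track that difference.
component_price = 100.0
markets = {
    # market: (units sold, hypothetical value added by the patent per unit)
    "commercial airplanes": (1_000, 50_000.0),
    "toy airplanes": (1_000_000, 2.0),
}

def revenue_component_royalty(rate):
    """Innovator revenue when the royalty is a fixed share of the component price;
    a market drops out if the per-unit royalty exceeds the value added there."""
    royalty = rate * component_price
    return sum(units * royalty for units, value in markets.values() if royalty <= value)

value_created = sum(units * value for units, value in markets.values())

print(revenue_component_royalty(0.02))  # $2/unit keeps both markets: ~$2.0M
print(revenue_component_royalty(0.50))  # $50/unit loses the toy market: ~$0.05M
print(value_created)                    # ~$52M of value the patent actually creates
```

Under either component-based rate, the innovator captures only a small fraction of the value its patent creates, which is the underinvestment concern described above.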

The role of economics

Modern antitrust practice is to use economic models to explain how one gets from the evidence presented in a case to an anticompetitive conclusion. As Froeb, et al. have discussed, by laying out a mapping from the evidence to the effects, the legal argument is made clear, and gains credibility because it becomes falsifiable. The FTC complaint hypothesizes that “Qualcomm has excluded competitors and harmed competition through a set of interrelated policies and practices.” (Complaint, ¶ 3). Although Qualcomm explains how each of these policies and practices, by themselves, have clear business justifications, the FTC claims that combining them leads to an anticompetitive outcome.

Without providing a formal mapping from the evidence to an effect, it becomes much more difficult for a court to determine whether the theory of harm is correct or how to weigh the evidence that feeds the conclusion. Without a model telling it “what matters, why it matters, and how much it matters,” it is much more difficult for a tribunal to evaluate the “interrelated policies and practices.” In previous work, we have modeled the bilateral bargaining between patentees and licensees and have shown that when bilateral patent contracts are subject to review by an antitrust court, bargaining in the shadow of such a court can reduce the incentive to invest and thereby reduce welfare.

Concluding policy thoughts

What the FTC makes sound nefarious seems like a simple policy: requiring companies to seek licenses to Qualcomm’s intellectual property independent of any hardware that those companies purchase, and basing the royalty of that intellectual property on (an admittedly crude measure of) the value the IP contributes to that product. High prices alone do not constitute harm to competition. The FTC must clearly explain why their complaint is not simply about the “fairness” of the outcome or its desire that Qualcomm employ different bargaining paradigms, but rather how Qualcomm’s behavior harms the process of competition.

In the late 1950s, Nobel Laureate Robert Solow attributed about seven-eighths of the growth in U.S. GDP to technical progress. As Solow later commented: “Adding a couple of tenths of a percentage point to the growth rate is an achievement that eventually dwarfs in welfare significance any of the standard goals of economic policy.” While he did not have antitrust in mind, the import of his comment is clear: whatever static gains antitrust litigation may achieve, they are likely dwarfed by the dynamic gains represented by innovation.

Patent law is designed to maintain a careful balance between the costs of short-term static losses and the benefits of long-term gains that result from new technology. The FTC should present a sound theoretical or empirical basis for believing that the proposed relief sufficiently rewards inventors and allows them to capture a reasonable share of the whole value their innovations bring to consumers, lest such antitrust intervention deter investments in innovation.

The Horizontal Merger Guidelines have brought discipline to the unruly world of merger analysis, but they have also accommodated advances in our understanding of the myriad ways in which firms compete and how mergers affect such competition. However, in cases where there is better information about the effects of the merger than there is about the relevant market, I would change the Guidelines to allow analysis that bypasses market delineation.

My attorney colleagues would immediately point me to section 7 of the Clayton Act, which seems to demand market definition because of its references to a “line of commerce” and a “section of the country.” Indeed, Judge Brown in Whole Foods said that the FTC’s proposal to dispense with market definition was “in contravention of the statute itself.”

However, I would naively point them to section 1 of the Sherman Act, which dispenses with market definition in establishing market power or monopoly power, and in establishing anticompetitive effects under the rule of reason. Why should it be different for mergers?

For consummated mergers, like the FTC’s Evanston case, effects were proven directly, and in many unilateral effects cases, “more direct” proof of effects is possible. In the Oracle case, for example, the court encouraged the use of merger simulation instead of reliance on unreliable market share data. If we view market delineation as a means to the end of predicting merger effects, and we have better information about the end, why bother with the means?