For a potential entrepreneur, just how much time it will take to compete, and the barrier to entry that time represents, will vary greatly depending on the market he or she wishes to enter. A would-be competitor to the likes of Subway, for example, might not find the time needed to open a sandwich shop to be a substantial hurdle. Even where it does take a long time to bring a product to market, it may be possible to accelerate the timeline if the potential profits are sufficiently high.
As Steven Salop notes in a recent paper, however, there may be cases where long periods of production time are intrinsic to a product:
If entry takes a long time, then the fear of entry may not provide a substantial constraint on conduct. The firm can enjoy higher prices and profits until the entry occurs. Even if a strong entrant into the 12-year-old scotch market begins the entry process immediately upon announcement of the merger of its rivals, it will not be able to constrain prices for a long time. [emphasis added]
Salop’s point relates to the supply-side substitutability of Scotch whisky (sic — Scotch whisky is spelt without an “e”). That is, to borrow from the European Commission’s definition, whether “suppliers are able to switch production to the relevant products and market them in the short term.” Scotch is aged in wooden barrels for a number of years (at least three, but often longer) before being bottled and sold, and the value of Scotch usually increases with age.
Due to this protracted manufacturing process, Salop argues, an entrant cannot compete with an incumbent dominant firm for however many years it would take to age the Scotch; it cannot produce the relevant product in the short term, no matter how high the profits collected by a monopolist, and hence no matter how strong the incentive to enter the market. If I wanted to sell 12-year-old Scotch, to use Salop’s example, it would take me 12 years to enter the market. In the meantime, a dominant firm could extract monopoly rents, leading to higher prices for consumers.
But can a whisky producer “enjoy higher prices and profits until … entry occurs”? A dominant firm in the 12-year-old Scotch market will not necessarily be immune to competition for the entire 12-year period it would take to produce a Scotch of the same vintage. There are various ways, both on the demand and supply side, that pressure could be brought to bear on a monopolist in the Scotch market.
One way could be to bring whiskies that are being matured for longer-maturity bottles (like 16- or 18-year-old Scotches) into service at the 12-year maturity point, shifting this supply to a market in which profits are now relatively higher.
Experts explained that, for example, nine and 11-year-old whiskies—not yet ready for release under the ten and 12-year brands—could now be blended together to produce the “entry-level” Gold whisky immediately.
There are also whiskies matured outside of Scotland, in countries such as Taiwan and India, that can achieve flavor profiles akin to those of older whiskies more quickly, thanks to warmer climates and the faster chemical reactions they produce inside the barrels. Maturation can be accelerated further by using smaller barrels with a higher surface-area-to-volume ratio. Whiskies matured in hotter climates and smaller barrels can be brought to market even more quickly than no-age-statement (NAS) Scotch matured in the cooler Scottish climate, and may well represent a more authentic replication of an older barrel.
“Whiskies” that can be manufactured even more quickly may also be on the horizon. Some startups in the United States are experimenting with rapid-aging technology that would allow them to produce a whisky-like spirit in a very short amount of time. As detailed in a recent article in The Economist, Endless West in California is using technology that ages spirits within 24 hours, with the resulting bottles selling for $40 – a bit less than many 12-year-old Scotches. Although attempts to shortcut the conventional maturation process are nothing new, recent attempts have won awards in blind taste-test competitions.
None of this is to dismiss Salop’s underlying point. But it may suggest that, even for a product where time appears to be an insurmountable barrier to entry, there may be more ways to compete than we initially assume.
[TOTM: The following is part of a digital symposium by TOTM guests and authors on the law, economics, and policy of the antitrust lawsuits against Google. The entire series of posts is available here.]
U.S. antitrust regulators have a history of narrowly defining relevant markets—often to the point of absurdity—in order to create market power out of thin air. The Federal Trade Commission (FTC) famously declared that Whole Foods and Wild Oats operated in the “premium natural and organic supermarkets market”—a narrowly defined market designed to exclude other supermarkets carrying premium natural and organic foods, such as Walmart and Kroger. Similarly, for the Staples-Office Depot merger, the FTC
narrowly defined the relevant market as “office superstore” chains, which excluded general merchandisers such as Walmart, K-Mart and Target, who at the time accounted for 80% of office supply sales.
Texas Attorney General Ken Paxton’s complaint against Google’s advertising business, joined by the attorneys general of nine other states, continues this tradition of narrowing market definition to shoehorn market dominance where it may not exist.
For example, one recent paper critical of Google’s advertising business narrows the relevant market first from media advertising to digital advertising, then to the “open” supply of display ads and, finally, even further to the intermediation of the open supply of display ads. Once the market has been sufficiently narrowed, the authors conclude Google’s market share is “perhaps sufficient to confer market power.”
While whittling down market definitions may achieve the authors’ purpose of providing a roadmap to prosecute Google, one byproduct is a mishmash of market definitions that generates as many as 16 relevant markets for digital display and video advertising, in many of which Google doesn’t have anything approaching market power (and in some of which, in fact, Facebook, and not Google, is the most dominant player).
The Texas complaint engages in similar relevant-market gerrymandering. It claims that, within digital advertising, there exist several relevant markets and that Google monopolizes four of them:
Publisher ad servers, which manage the inventory of a publisher’s ad space (e.g., on a newspaper’s website or a blog);
Display ad exchanges, the “marketplace” in which auctions directly match publishers’ selling of ad space with advertisers’ buying of ad space;
Display ad networks, which are similar to exchanges, except a network acts as an intermediary that collects ad inventory from publishers and sells it to advertisers; and
Display ad-buying tools, which include demand-side platforms that collect bids for ad placement with publishers.
The complaint alleges, “For online publishers and advertisers alike, the different online advertising formats are not interchangeable.” But this glosses over a bigger challenge for the attorneys general: Is online advertising a separate relevant market from offline advertising?
Digital advertising, of which display advertising is a small part, is only one of many channels through which companies market their products. About half of today’s advertising spending in the United States goes to digital channels, up from about 10% a decade ago. Approximately 30% of ad spending goes to television, with the remainder going to radio, newspapers, magazines, billboards and other “offline” forms of media.
Physical newspapers now account for less than 10% of total advertising spending. Traditionally, newspapers obtained substantial advertising revenues from classified ads. As internet usage increased, newspaper classifieds have been replaced by less costly and more effective internet classifieds—such as those offered by Craigslist—or targeted ads on Google Maps or Facebook.
The price of advertising has fallen steadily over the past decade, while output has risen. Spending on digital advertising in the United States grew from $26 billion in 2010 to nearly $130 billion in 2019, an average increase of 20% a year. Over the same period, the producer price index (PPI) for internet advertising sales declined by nearly 40%. Rising spending in the face of falling prices indicates the number of ads bought and sold increased by approximately 27% a year.
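The arithmetic behind these figures can be sanity-checked with a quick back-of-the-envelope calculation. This sketch takes the dollar figures and the roughly 40% PPI decline from the text as given and treats them as compounding over the nine years from 2010 to 2019:

```python
# Back-of-the-envelope check of the implied annual growth rates, using the
# figures cited in the text: digital ad spending of $26B (2010) and $130B
# (2019), and a ~40% cumulative decline in the internet-advertising PPI.

spend_2010, spend_2019 = 26e9, 130e9
years = 2019 - 2010  # nine compounding periods

# Compound annual growth rate of spending (revenue = price x quantity)
revenue_cagr = (spend_2019 / spend_2010) ** (1 / years) - 1

# A 40% cumulative price decline implies this annual price change
price_cagr = 0.60 ** (1 / years) - 1

# Quantity growth is revenue growth net of the price change
quantity_cagr = (1 + revenue_cagr) / (1 + price_cagr) - 1

print(f"revenue:  {revenue_cagr:6.1%} per year")   # ~20%, as stated
print(f"price:    {price_cagr:6.1%} per year")     # ~-5.5%
print(f"quantity: {quantity_cagr:6.1%} per year")  # ~27%, as stated
```

The quantity figure of roughly 27% a year falls out directly once the ~20% annual revenue growth is deflated by the ~5.5% annual price decline.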
Since 2000, advertising spending has been falling as a share of gross domestic product, with online advertising growing as a share of that spending. The combination of increasing quantity, decreasing cost and increasing total revenues is consistent with a growing and increasingly competitive market, rather than one of rising concentration and reduced competition.
There is little or no empirical data evaluating the extent to which online and offline advertising constitute distinct markets or the extent to which digital display is a distinct submarket of online advertising. As a result, analysis of adtech competition has relied on identifying several technical and technological factors—as well as the say-so of participants in the business—that the analysts assert distinguish online from offline and establish digital display (versus digital search) as a distinct submarket. This approach has been used and accepted, especially in cases in which pricing data has not been available.
But the pricing information that is available raises questions about the extent to which online advertising is a distinct market from offline advertising. For example, Avi Goldfarb and Catherine Tucker find that, when local regulations prohibit offline direct advertising, search advertising is more expensive, indicating that search and offline advertising are substitutes. In other research, they report that online display advertising circumvents, in part, local bans on offline billboard advertising for alcoholic beverages. In both studies, Goldfarb and Tucker conclude their results suggest online and offline advertising are substitutes. They also conclude this substitution suggests that online and offline markets should be considered together in the context of antitrust.
While this information is not sufficient to define a broader relevant market, it raises questions about relying solely on technical or technological distinctions and the say-so of market participants.
In the United States, plaintiffs do not get to define the relevant market. That is up to the judge or the jury. Plaintiffs have the burden to convince the court that a proposed narrow market definition is the correct one. With strong evidence that online and offline ads are substitutes, the court should not blindly accept the gerrymandered market definitions posited by the attorneys general.
In his book, Nicolas Petit approaches antitrust issues by analyzing their economic foundations, and he aspires to bridge gaps between those foundations and the common points of view. In light of the divisiveness of today’s debates, I appreciate Petit’s calm and deliberate view of antitrust, and I respect his clear and engaging prose.
I spent a lot of time with this topic when writing a book (How the Internet Became Commercial, Princeton University Press, 2015). If I have something unique to add to a review of Petit’s book, it comes from the role Microsoft played in the events covered in my book.
Many commentators have speculated on what precise charges could be brought against Facebook, Google/Alphabet, Apple, and Amazon. For the sake of simplicity, let’s call these the “big four.” While I have no special insight to bring to such speculation, for this post I can do something different, and look forward by looking back. For the time being, Microsoft has been spared scrutiny by contemporary political actors. (It seems safe to presume Microsoft’s managers prefer to be left out.) While it is tempting to focus on why this has happened, let’s focus on a related issue: What shadow did Microsoft’s trials cast on the antitrust issues facing the big four?
Two types of lessons emerged from Microsoft’s trials, and both tend to be less appreciated by economists. One set of lessons emerged from the media flood of the flotsam and jetsam of sensationalistic factoids and sound bites, drawn from Congressional and courtroom testimony. That yielded lessons about managing sound and fury – i.e., mostly about reducing the cringe-worthy quotes from CEOs and trial witnesses.
Another set of lessons pertained to the role and limits of economic reasoning. Many decision makers reasoned by analogy and metaphor. That is especially so for lawyers and executives. These metaphors do not make economic reasoning wrong, but they do tend to shape how an antitrust question takes center stage with a judge, as well as in the court of public opinion. These metaphors also influence the stories a CEO tells to employees.
If you asked me to forecast how things will go for the big four, based on what I learned from studying Microsoft’s trials, my answer would be that the outcome depends on which metaphor and analogy gets the upper hand.
In that sense, I want to argue that Microsoft’s experience depended on “the fox and shepherd problem.” When is a platform leader better thought of as a shepherd, helping partners achieve a healthy outcome, and when as a fox in charge of a henhouse, ready to sacrifice a partner for self-serving purposes? I forecast the same metaphors will shape the experience of the big four.
Gaps and analysis
The fox-shepherd problem never shows up when a platform leader is young and its platform is small. As the platform reaches bigger scale, however, the problem becomes more salient. Conflicts of interests emerge and focus attention on platform leadership.
Petit frames these issues within a Schumpeterian vision. In this view, firms compete for dominant positions over time, potentially with one dominant firm replacing another. Potential competition has a salutary effect if established firms perceive a threat from the future shadow of such competitors, motivating innovation. In this view, antitrust’s role might be characterized as “keeping markets open so there is pressure on the dominant firm from potential competition.”
In the Microsoft trial, economists framed the Schumpeterian tradeoff in the vocabulary of economics. Firms that supply complements at one point could become suppliers of substitutes at a later point, if they are allowed to. In other words, platform leaders today support complements that enhance the value of the platform, while also having the motive and ability to discourage those same business partners from developing services that substitute for the platform’s services, which could reduce the platform’s value. Seen through this lens, platform leaders inherently face a conflict of interest, and antitrust law should intervene if platform leaders place excessive limitations on existing business partners.
This economic framing is not wrong. Rather, it is necessary, but not sufficient. If I take a sober view of events in the Microsoft trial, I am not convinced the economics alone persuaded the judge in Microsoft’s case, or, for that matter, the public.
As judges sort through the endless detail of contracting provisions, they need a broad perspective, one that sharpens their focus on a key question. One central question in particular inhabits a lot of a judge’s mindshare: how did the platform leader use its discretion, and for what purposes? In case it is not obvious, shepherds deserve a lot of discretion, while only a fool gives a fox much license.
Before the trial, when it initially faced this question from reporters and Congress, Microsoft tried to dismiss the discussion altogether. Its representatives argued that high technology differs from every other market in its speed and productivity and, therefore, ought to be thought of as incomparable to other antitrust examples. This reflected the high-tech elite’s view of their own exceptionalism.
Reporters dutifully restated this argument and, long story short, it did not get far with the public once the sensationalism started making headlines, and it especially did not get far with the trial judge. To be fair, if you watched the recent congressional testimony, it appears the lawyers for the big four instructed their CEOs not to try this approach this time around.
Even before lawyers and advocates exaggerate claims, the perspectives of both sides usually have some merit, and usually the twain do not meet. Most executives remember every detail behind growth, know the risks confronted and overcome, and are usually reluctant to give up something that works for their interests, and sometimes those interests can be narrowly defined. In contrast, many partners will know examples of a rule that hindered them, point to complaints that executives ignored, and aspire to have rules changed, and, again, their interests tend to be narrow.
Consider the quality-control process for iPhone apps today as an example. The merits and absurdity of some of Apple’s conduct get a lot of attention in online forums, especially the 30% cut Apple takes. Apple can reasonably claim that the present set of rules works well overall, emerged only after considerable experimentation, and today protects all who benefit from the entire system, like a shepherd. It is no surprise, however, that some partners accuse Apple of tweaking the rules to its own benefit, and of using the process to further Apple’s ambitions at the expense of the partners’, like a fox in a henhouse. So it goes.
More generally, based on publicly available information, all of the big four already face this debate. Self-serving behavior shows up in different guises in different parts of the big four’s businesses, but it is always there. As noted, Apple’s apps compete with the apps of others, so it has incentives to shape the distribution of other apps. Amazon’s products compete with some products coming from its third-party sellers, and it too faces mixed incentives. Google’s services compete with online services that also advertise on its search engine, and it too faces issues over its charges for listing on the Play store. Facebook faces an additional issue, because it has bought firms that were trying to grow their own platforms to compete with Facebook.
Look, those four each contain rather different businesses in their details, which merits some caution in making a sweeping characterization. My only point: the question about self-serving behavior arises in each instance. That frames a fox-shepherd problem for prosecutors in each case.
Lessons from prior experience
Circling back to lessons of the past for antitrust today, the fox-shepherd problem was one of the deeper sources of miscommunication leading up to the Microsoft trial. In the late 1990s, Microsoft could reasonably claim to be a shepherd for all of its platform’s partners, and it could reasonably claim to have improved the platform in ways that benefited those partners. Moreover, for years some of the industry gossip about its behavior stressed misinformed nonsense. Accordingly, Microsoft’s executives had learned to trust their own judgment and to mistrust the complaints of outsiders. Right in line with that mistrust, many employees and executives took umbrage at being characterized as a fox in a henhouse, dismissing the accusations out of hand.
Those habits of mind poorly positioned the firm for a court case. As any observer of the trial knows, when prosecutors came looking, they found lots of examples that looked like fox-like behavior. Onerous contract restrictions and cumbersome processes for business partners produced plenty of bad optics in court, and fueled the prosecution’s case that the platform had become too self-serving at the expense of competitive processes. Prosecutors had plenty to work with when it came time to prove motive, intent, and the ability to misuse discretion.
What is the lesson for the big four? Ask an executive in technology today, and sometimes you will hear the following: As long as a platform’s actions can be construed as friendly to customers, the platform leader will be off the hook. That is not a wrong lesson, but it is an incomplete one. Looking with hindsight and foresight, that perspective seems too sanguine about the prospects for the big four. Microsoft had done plenty for its customers, but so what? There was plenty of evidence of its acting like a fox in a henhouse. The bigger lesson is this: all it took were a few bad examples to paint a picture of a pattern, and every firm has such examples.
Do not get me wrong. I am not saying the fox-in-a-henhouse analogy is fair or unfair to platform leaders. Rather, I am saying that economists like to think the economic trade-off between the interests of platform leaders, platform partners, and platform customers emerges from some grand policy compromise. That is not how prosecutors think, nor how judges decide. In the Microsoft case there was no such grand consideration. The economic framing of the case only went so far. As it was, the decision was vulnerable to metaphor, shrewdly applied and convincingly argued. Done persuasively, with enough examples of selfish behavior, excuses about “helping customers” came across as empty.
Some advocates argue, somewhat philosophically, that platforms deserve discretion and that governments are bound to err once they intervene. I have sympathy with that point of view, but only up to a point. Below are two examples from outside antitrust where governments routinely do not give the big four a blank check.
First, when it started selling ads, Google banned ads for cigarettes, porn and alcohol, and it downgraded the quality scores of websites that used deceptive means to attract users. That helped the service foster trust with new users, enabling it to grow. After it became bigger, should Google have continued to enjoy unqualified discretion to shepherd the entire ad system? Nobody thinks so. A while ago, the Federal Trade Commission decided to investigate deceptive online advertising, just as it investigates deceptive advertising in other media. It is not a big philosophical step to next ask whether Google should have unfettered discretion to structure the ad business, search process, and related e-commerce to its own benefit.
This gets us to the other legacy of the Microsoft case: As we think about future policy dilemmas, are there a general set of criteria for the antitrust issues facing all four firms? Veterans of court cases will point out that every court case is its own circus. Just because Microsoft failed to be persuasive in its day does not imply any of the big four will be unpersuasive.
Looking back on the Microsoft trial, it did not articulate a general set of principles about acceptable or excusable self-serving behavior from a platform leader. It did not settle which criteria best determine when a court should consider a platform leader’s behavior closer to that of a shepherd or a fox. The appropriate general criteria remain unclear.
One of the great scholars of law & economics turns 90 years old today. In his long and distinguished career, Thomas Sowell has written over 40 books and countless opinion columns. He has been a professor of economics and a long-time Senior Fellow at the Hoover Institution. He received a National Humanities Medal in 2002 for a lifetime of scholarship, which has only continued since then. His ability to look at issues with an international perspective, using the analytical tools of economics to better understand institutions, is an inspiration to us at the International Center for Law & Economics.
Here, almost as a blog post festschrift as a long-time reader of his works, I want to briefly write about how Sowell’s voluminous writings on visions, law, race, and economics could be the basis for a positive agenda to achieve a greater measure of racial justice in the United States.
The Importance of Visions
One of the most important aspects of Sowell’s work is his ability to distill wide-ranging issues into debates involving different mental models, or a “Conflict of Visions.” He calls one vision the “tragic” or “constrained” vision, which sees all humans as inherently limited in knowledge, wisdom, and virtue, and fundamentally self-interested even at their best. The other is the “utopian” or “unconstrained” vision, which sees human limitations as artifacts of social arrangements and cultures, and holds that some people, by virtue of superior knowledge and morality, are capable of redesigning society to create a better world.
An implication of the constrained vision is that the difference in knowledge and virtue between the best and the worst in society is actually quite small. As a result, no one person or group of people can be trusted with redesigning institutions which have spontaneously evolved. The best we can hope for is institutions that reasonably deter bad conduct and allow people the freedom to solve their own problems.
An important implication of the unconstrained vision, on the other hand, is that some people, thanks to superior enlightenment (what Sowell calls the “Vision of the Anointed”), can redesign institutions to fundamentally change human nature, which is seen as malleable. Institutions are far more often seen as the result of deliberate human design and choice, and failures to change them to be more just or equal are seen as the result of immorality or lack of will.
The importance of visions to how we view things like justice and institutions makes all the difference. In the constrained view, institutions like language, culture, and even much of the law result from the “spontaneous ordering” that is the result of human action but not of human design. Limited government, markets, and tradition are all important in helping individuals coordinate action. Markets work because self-interested individuals benefit when they serve others. There are no solutions to difficult societal problems, including racism, only trade-offs.
But in the unconstrained view, limits on government power are seen as impediments to public-spirited experts creating a better society. Markets, traditions, and cultures are to be redesigned from the top down by those who are forward-looking, relying on their articulated reason. There is a belief that solutions could be imposed if only there is sufficient political will and the right people in charge. When it comes to an issue like racism, those who are sufficiently “woke” should be in charge of redesigning institutions to provide for a solution to things like systemic racism.
For Sowell, what he calls “traditional justice” is achieved by processes that hold people accountable for harms to others. Its focus is on flesh-and-blood human beings, not abstractions like all men, or blacks versus whites. By this point of view, differences in outcomes are neither just nor unjust; what is important is that the processes themselves are just. Those processes should focus on the institutional incentives of participants. Reforms should be careful not to upset important incentive structures that have evolved over time as the best way for limited human beings to coordinate behavior.
The “Quest for Cosmic Justice,” on the other hand, flows from the unconstrained vision. Cosmic justice sees disparities between abstract groups, like whites and blacks, as unjust and in need of correction. If impartial processes like markets or law produce disparities, those with an unconstrained vision often see the processes themselves as racist. The conclusion is that the law should intervene to create better outcomes. This presumes considerable knowledge and morality on the part of those in charge of the interventions.
For Sowell, a large part of his research project has been showing that those with the unconstrained vision often harm those they are proclaiming the intention to help in their quest for cosmic justice.
A Constrained Vision of Racial Justice
Sowell has written quite a lot on race, culture, intellectuals, economics, and public policy. One of the main thrusts of his argument about race is that attempts at cosmic justice often harm living flesh-and-blood individuals in the name of intertemporal abstractions like “social justice” for black Americans. Sowell nowhere denies that racism is an important component of understanding the history of black Americans. But his constant challenge is that racism can’t be the only variable that explains disparities. Sowell points to the importance of culture and education in building the human capital needed to be successful in market economies. Without taking those other variables into account, there is no way to determine the extent to which racism is the cause of disparities.
This has important implications for achieving racial justice today. When it comes to policies pursued in the name of racial justice, Sowell has argued that many programs often harm not only members of disfavored groups, but the members of the favored groups.
For instance, Sowell has argued that affirmative action actually harms not only flesh-and-blood white and Asian-Americans who are passed over, but also harms those African-Americans who are “mismatched” in their educational endeavors and end up failing or dropping out of schools when they could have been much better served by attending schools where they would have been very successful. Another example Sowell often points to is minimum wage legislation, which is often justified in the name of helping the downtrodden, but has the effect of harming low-skilled workers by increasing unemployment, most especially young African-American males.
Any attempts at achieving racial justice, in terms of correcting historical injustices, must take into account how changes in processes could actually end up hurting flesh-and-blood human beings, especially when those harmed are black Americans.
A Positive Agenda for Policy Reform
In Sowell’s constrained vision, a large part of the equation for African-American improvement is going to be cultural change. However, white Americans should not think that this means they have no responsibility in working towards racial justice. A positive agenda must take into consideration real harms experienced by African-Americans due to government action (and inaction). Thus, traditional justice demands institutional reforms, and in some cases, recompense.
The policy part of this equation outlined below is motivated by traditional justice concerns that hold people accountable under the rule of law for violations of constitutional rights and promotes institutional reforms to more properly align incentives.
What follows below are policy proposals aimed at achieving a greater degree of racial justice for black Americans, but fundamentally informed by the constrained vision and traditional justice concerns outlined by Sowell. Most of these proposals are not on issues Sowell has written a lot on. In fact, some proposals may actually not be something he would support, but are—in my opinion—consistent with the constrained vision and traditional justice.
Reparations for Historical Rights Violations
Sowell once wrote this in regards to reparations for black Americans:
Nevertheless, it remains painfully clear that those people who were torn from their homes in Africa in centuries past and forcibly brought across the Atlantic in chains suffered not only horribly, but unjustly. Were they and their captors still alive, the reparations and retribution owed would be staggering. Time and death, however, cheat us of such opportunities for justice, however galling that may be. We can, of course, create new injustices among our flesh-and-blood contemporaries for the sake of symbolic expiation, so that the son or daughter of a black doctor or executive can get into an elite college ahead of the son or daughter of a white factory worker or farmer, but only believers in the vision of cosmic justice are likely to take moral solace from that. We can only make our choices among alternatives actually available, and rectifying the past is not one of those options.
In other words, if the victims and perpetrators of injustice are no longer alive, it is not just to hold all members of the respective races accountable for crimes they did not commit. However, this presumably leaves open the possibility of applying traditional justice concepts in those cases where death has not cheated us.
For instance, there are still black Americans alive who suffered under Jim Crow, as well as children and family members of those lynched. While it is too little, too late, it seems consistent with traditional justice to seek out and criminally prosecute perpetrators who committed heinous acts only a few generations ago against still-living victims. This is not unprecedented: elderly Nazis are still being prosecuted for crimes against Jews. A similar approach could be taken in the United States.
Similarly, civil rights lawsuits for the damages caused by Jim Crow could be another way to recompense those who were harmed. Alternatively, it could be done by legislation. The Civil Liberties Act of 1988 was passed under President Reagan and gave living Japanese Americans who were interned during World War II some limited reparations. A similar system could be set up for living victims of Jim Crow.
Statutes of limitations may need to be changed to facilitate these criminal prosecutions and civil rights lawsuits, but doing so is quite clearly consistent with the idea of holding flesh-and-blood persons accountable for their unlawful actions.
Holding flesh-and-blood perpetrators accountable for rights violations should not be confused with the cosmic justice idea—that Sowell consistently decries—that says intertemporal abstractions can be held accountable for crimes. In other words, this is not holding “whites” accountable for all historical injustices to “blacks.” This is specifically giving redress to victims and deterring future bad conduct.
End Qualified Immunity
Another way to promote racial justice consistent with the constrained vision is to end one of the Warren Court’s egregious examples of judicial activism: qualified immunity. Qualified immunity is nowhere mentioned in the civil rights statute, 42 USC § 1983. As Sowell argues in his writings, judges in the constrained vision are supposed to declare what the law is, not what they believe it should be, unlike those in the unconstrained vision who, according to Sowell, believe they have the right to amend the laws through judicial edict. The activist Warren Court’s introduction of qualified immunity into the law should be overturned.
In a civil rights lawsuit, the goal is to make the victim of a rights violation (or the victim’s family) whole through monetary damages. From a legal perspective, this is necessary to give the victim justice. From an economic perspective, it is necessary to deter future bad conduct and properly align ex ante incentives going forward. Under a well-functioning system, juries would, after hearing all the evidence, decide whether constitutional rights were violated and the extent of damages. A functioning system of settlements would emerge as a common law develops determining what counts as reasonable or unreasonable uses of force. This doesn’t mean plaintiffs always win: officers may be found to have acted reasonably under the circumstances once all the evidence is presented to a jury.
However, one of the greatest obstacles to holding police officers accountable in misconduct cases is the doctrine of qualified immunity… courts have widely expanded its scope to the point that qualified immunity is now protecting officers even when their conduct violates the law, as long as the officers weren’t on clear notice from specific judicial precedent that what they did was illegal when they did it… This standard has predictably led to a situation where officer misconduct which judges and juries would likely find egregious never makes it to court. The Cato Institute’s website Unlawful Shield details many cases where federal courts found an officer’s conduct was illegal yet nonetheless protected by qualified immunity.
Immunity of this nature has profound consequences on the incentive structure facing police officers. Police officers, as well as the departments that employ them, are insufficiently accountable when gross misconduct does not get past a motion to dismiss for qualified immunity… The result is to encourage police officers to take insufficient care when making the choice about the level of force to use.
Those with a constrained vision focus on processes and incentives. In this case, it is police officers who have insufficient incentives to take reasonable care when they receive qualified immunity for their conduct.
End the Drug War
While not something he has written a lot on, Sowell has argued for the decriminalization of drugs, comparing the War on Drugs to the earlier attempts at Prohibition of alcohol. This is consistent with the constrained vision, which cares about the institutional incentives created by law.
Interestingly, Michelle Alexander’s work in the second chapter of The New Jim Crow is largely consistent with Sowell’s point of view. There, she argues that the institutional incentives of police departments were systematically changed when the drug war was ramped up.
Alexander asks a question which is right in line with the constrained vision:
[I]t is fair to wonder why the police would choose to arrest such an astonishing percentage of the American public for minor drug crimes. The fact that police are legally allowed to engage in a wholesale roundup of nonviolent drug offenders does not answer the question why they would choose to do so, particularly when most police departments have far more serious crimes to prevent and solve. Why would police prioritize drug-law enforcement? Drug use and abuse is nothing new; in fact, it was on the decline, not on the rise, when the War on Drugs began.
Alexander locates the impetus for ramping up the drug war in federal subsidies:
In 1988, at the behest of the Reagan administration, Congress revised the program that provides federal aid to law enforcement, renaming it the Edward Byrne Memorial State and Local Law Enforcement Assistance Program after a New York City police officer who was shot to death while guarding the home of a drug-case witness. The Byrne program was designed to encourage every federal grant recipient to help fight the War on Drugs. Millions of dollars in federal aid have been offered to state and local law enforcement agencies willing to wage the war. By the late 1990s, the overwhelming majority of state and local police forces in the country had availed themselves of the newly available resources and added a significant military component to buttress their drug-war operations.
On top of that, police departments were benefited by civil asset forfeiture:
As if the free military equipment, training, and cash grants were not enough, the Reagan administration provided law enforcement with yet another financial incentive to devote extraordinary resources to drug law enforcement, rather than more serious crimes: state and local law enforcement agencies were granted the authority to keep, for their own use, the vast majority of cash and assets they seize when waging the drug war. This dramatic change in policy gave state and local police an enormous stake in the War on Drugs—not in its success, but in its perpetual existence. Suddenly, police departments were capable of increasing the size of their budgets, quite substantially, simply by taking the cash, cars, and homes of people suspected of drug use or sales. Because those who were targeted were typically poor or of moderate means, they often lacked the resources to hire an attorney or pay the considerable court costs. As a result, most people who had their cash or property seized did not challenge the government’s action, especially because the government could retaliate by filing criminal charges—baseless or not.
As Alexander notes, black Americans (and other minorities) were the primary targets of this ramped-up War on Drugs: its effect has been to disproportionately imprison black Americans even though drug usage and sales are relatively similar across races. Police officers have incredible discretion in determining whom to investigate and bring charges against. When it comes to the drug war, this discretion is magnified because the activity is largely consensual, meaning officers can’t rely on victims to come to them to start an investigation. Alexander attributes the criminal justice system’s targeting of black Americans to implicit bias among police officers, prosecutors, and judges, which mirrors the bias shown in media coverage and in larger white American society.
Anyone inspired by Sowell would need to determine whether this is because of racism or some other variable. It is important to note that Sowell never denies that racism exists or that it is a real problem in American society, but he does challenge us to determine whether racism alone is the cause of disparities. Here, Alexander makes a strong case that implicit racism causes the disparities in enforcement of the War on Drugs. A race-neutral explanation is also available, though it too counsels ending the War on Drugs: enforcement costs are lower against those unable to afford to challenge the system, and black Americans are disproportionately represented among the poor in this country. As will be discussed below in the section on reforming indigent criminal defense, most prosecutions are initiated against defendants who can’t afford a lawyer. The result could be racially disparate even without a racist motivation.
Regardless of whether racism is the variable that explains the disparate impact of the War on Drugs, it should be ended. This may be an area where traditional and cosmic justice concerns can be united in an effort to reform the criminal justice system.
Reform Indigent Criminal Defense
A related way the criminal justice system has created a real barrier for far too many black Americans is the often poor quality of indigent criminal defense. Indigent defense is a large part of criminal defense in this country: roughly 80% of criminal prosecutions are initiated against defendants too poor to afford a lawyer. Since black Americans are disproportionately represented among the indigent and those in the criminal justice system, it should be no surprise that they are disproportionately represented by public defenders.
According to the constrained vision, it is important to look at the institutional incentives of public defenders. Considering the extremely high societal costs of false convictions, it is important to get these incentives right.
David Friedman and Stephen Schulhofer’s seminal article exploring the law & economics of indigent criminal defense highlighted the conflict of interest inherent in the government choosing who represents criminal defendants when the government is also in charge of prosecuting them. They analyzed each of the models used in the United States for indigent defense from an economic point of view and found each wanting. On top of that, there is a calculation problem inherent in government-run public defender offices, whereby defendants may be systematically deprived of viable defense strategies because of a lack of price signals.
An interesting alternative proposed by Friedman and Schulhofer is a voucher system, similar to the one Sowell has often touted for education. Indigent criminal defendants would choose any lawyer participating in the voucher program. The government would subsidize the provision of indigent defense under this model, but would not pick the lawyer or run the public defender organization. Incentives would be more closely aligned between defendant and counsel.
While much more could be said consistent with the constrained vision that could help flesh-and-blood black Americans, including abolishing occupational licensing, ending wage controls, promoting school choice, and ending counterproductive welfare policies, this is enough for now. Racial justice demands holding rights violators accountable and making victims whole. Racial justice also means reforming institutions to make sure incentives are right to deter conduct which harms black Americans. However, the growing desire to do something to promote racial justice in this country should not fall into the trap of cosmic justice thinking, which often ends up hurting flesh-and-blood people of all races in the present in the name of intertemporal abstractions.
Happy 90th birthday to one of the greatest law & economics scholars ever, Dr. Thomas Sowell.
[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.
This post is authored by Miranda Perry Fleischer (Professor of Law and Co-Director of Tax Programs at the University of San Diego School of Law) and Matt Zwolinski (Professor of Philosophy, University of San Diego; founder and director, USD Center for Ethics, Economics, and Public Policy; founder and contributor, Bleeding Heart Libertarians Blog).]
This week, Americans began receiving cold, hard cash from the government. Meant to cushion the economic fallout of Covid-19, the CARES Act provides households with relief payments of up to $1200 per adult and $500 per child. As we have written elsewhere, direct cash transfers are the simplest, least paternalistic, and most efficient way to protect Americans’ economic health – pandemic or not. The idea of simply giving people money has deep historical and wide ideological roots, culminating in Andrew Yang’s popularization of a universal basic income (“UBI”) during his now-suspended presidential campaign. The CARES Act relief provisions embody some of the potential benefits of a UBI, but nevertheless fail in key ways to deliver its true promise.
Provide Cash, No-Strings-Attached
Most promisingly, the relief payments are no-strings-attached. Recipients can use them as they – not the government – think best, be it for rent, food, or a laptop for a child to learn remotely. This freedom is a welcome departure from most current aid programs, which are often in-kind or restricted transfers. Kansas prohibits welfare recipients from using benefits at movie theaters and swimming pools. SNAP recipients cannot purchase “hot food” such as a ready-to-eat roasted chicken; California has a 17-page pamphlet identifying which foods users of Women, Infants and Children (“WIC”) benefits can buy (for example, white eggs but not brown).
These restrictions arise from a distrust of beneficiaries. Yet numerous studies show that recipients of cash transfers do not waste benefits on alcohol, drugs or gambling. Instead, beneficiaries in developing countries purchase livestock, metal roofs, or healthier food. In wealthier countries, cash transfers are associated with improvements in infant health, better nutrition, higher test scores, more schooling, and lower rates of arrest for young adults – all of which suggest beneficiaries do not waste cash.
Avoid Asset Tests
A second positive of the relief payments is that they eschew asset tests, unlike many welfare programs. For example, a family can lose hundreds of dollars of SNAP benefits if their countable assets exceed $2,250. Such limits act as an implicit wealth tax and discourage lower-income individuals from saving. Indeed, some recipients report engaging in transactions like buying furniture on lay-away (which does not count) to avoid the asset limits. Lower-income individuals, for whom a car repair bill or traffic ticket can lead to financial ruin, should be encouraged to – not penalized for – saving for a rainy day.
Don’t Worry So Much about the Labor Market
A third pro is that the direct relief payments are not tied to a showing of desert. They do not require one to work, be looking for work, or show that one is either unable to work or engaged in a substitute such as child care or school. Again, this contrasts with most current welfare programs. SNAP requires able-bodied childless adults to work or participate in training or education 80 hours a month. Supplemental Security Income requires non-elderly recipients to prove that they are blind or disabled. Nor do the relief payments require recipients to pass a drug test, or prove they have no criminal record.
As with spending restrictions, these requirements display distrust of beneficiaries. The fear is that “money for nothing” will encourage low-income individuals to leave their jobs en masse. But this fear, too, is largely overblown. Although past experiments with unconditional transfers show that total work hours drop, the bulk of this drop is from teenagers staying in school longer, new mothers delaying entrance into the workforce, and primary earners reducing their hours from say, 60 to 50 hours a week. We could also imagine UBI recipients spending time volunteering, engaging in the arts, or taking care of friends and relatives. None of these are necessarily bad things.
Don’t Limit Aid to the “Deserving”
On these three counts, the CARES Act embraces the promise of a UBI. But the CARES Act departs from key aspects of a well-designed, true UBI. Most importantly, the size of the relief payments – one-time transfers of $1200 per adult – pales in comparison to the Act’s enhanced unemployment benefits of $600/week. This mismatch underscores how deeply ingrained our country’s obsession with helping only the “deserving” poor is, and how narrowly “desert” is defined. The Act’s most generous aid is limited to individuals with pre-existing connections to the formal labor market who leave under very specific conditions. Someone who cannot work because they are caring for a family member sick with COVID-19 qualifies, but not an adult child who left a job months ago to care for an aging parent with Alzheimer’s. A parent who cannot work because her child’s school was cancelled due to the pandemic qualifies, but not a parent who hasn’t worked the past couple years due to the lack of affordable child care. And because unemployment benefits not only turn on prior employment but also rise with past wages, this mismatch means our safety net helps the slightly poor much more than the very poorest among us.
Don’t Impose Bureaucratic Hurdles
The botched roll-out of the enhanced unemployment benefits illustrates another downside to targeting aid only to the “deserving”: It is far more complicated than giving aid to all who need it. Guidance for self-employed workers (newly eligible for such benefits) is still forthcoming. Individuals with more than one employer before the crisis struggle to input multiple jobs in the system, even though their benefits increase as their past wages do. Even college graduates have trouble completing the clunky forms; a friend who teaches yoga had to choose between “aqua fitness instructor” and “physical education” when listing her job.
These frustrations are just another example of the government’s ineptitude at determining who is and is not capable of work – even in good times. Often, the very people who can navigate the system well enough to convince the government they are unable to work are actually the most work-capable. Those least capable of work, unable to navigate the system, receive nothing. And as millions of Americans spend countless hours on the phone and navigating crashing websites, they are learning what has been painfully obvious to many lower-income individuals for years – the government often puts insurmountable barriers in the way of even the “deserving poor.” These barriers – numerous office visits, lengthy forms, drug tests – are sometimes so time consuming that beneficiaries must choose between obtaining benefits to which they are legally entitled and applying for jobs or working extra hours. Lesson one from the CARES Act is that universal payments, paid to all, avoid these pitfalls.
Don’t Means Test Up Front
The CARES Act contains three other flaws that a well-designed UBI would also fix. First, the structure of the cash transfers highlights the drawbacks of upfront means testing. In an attempt to limit aid to Americans in financial distress, the $1200 relief payments begin to phase out at five cents on the dollar once income exceeds a certain threshold: $75,000 for childless, single individuals and $150,000 for married couples. The catch is that for most Americans, their 2019 or 2018 incomes will determine whether their relief payments phase out – and therefore how much aid they receive now, in 2020. In a world where 22 million Americans have filed for unemployment in the past month, looking to one- or two-year-old data to determine need is meaningless. Many Americans whose pre-pandemic incomes exceeded the threshold are now struggling to make mortgage payments and put food on the table, but will receive little or no direct cash aid under the CARES Act until April 2021, when they file their 2020 taxes.
This absurdity magnifies a problem inherent in ex ante means tests. Often, one’s past financial status does not tell us much about an individual’s current needs. This is particularly true when incomes fluctuate from period to period, as is the case with many lower-income workers. Imagine a fast food worker and SNAP beneficiary whose schedule changes month to month, if not week to week. If she is lucky enough to work a lot in November, she may see her December SNAP benefits reduced. But what if her boss gives her fewer shifts in December? Both her paycheck and her SNAP benefits will be lower in December, leaving her struggling.
The solution is to send cash to all Americans, and recapture the transfer through the income tax system. Mathematically, an ex post tax is exactly the same as an ex ante phase out. Consider the CARES Act. A childless single individual with an income of $85,000 is $10,000 over the threshold, reducing her benefit by $500 and netting her $700. Giving her a check for $1200 and taxing her an additional 5% on income above $75,000 also nets her $700. As a practical matter, however, an ex post tax is more accurate because hindsight is 20-20. Lesson two from the CARES Act is that universal payments offset by taxes are superior to ex ante means-testing.
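The equivalence claimed above can be checked in a few lines. This is a hedged sketch using only the figures already given (a $1200 payment, a 5% phase-out rate, and a $75,000 threshold for a childless single filer); the function names are illustrative, not anything from the Act:

```python
# Parameters from the CARES Act example for a childless single filer.
PAYMENT = 1200
THRESHOLD = 75_000
RATE = 0.05  # five cents on the dollar

def ex_ante(income):
    """Payment reduced up front by 5 cents per dollar of income over the threshold."""
    return max(0.0, PAYMENT - RATE * max(0, income - THRESHOLD))

def ex_post(income):
    """Full $1200 paid now, then recaptured by a 5% surtax on income over the threshold."""
    recapture = min(PAYMENT, RATE * max(0, income - THRESHOLD))
    return PAYMENT - recapture

# The worked example from the text: $85,000 is $10,000 over the threshold.
print(ex_ante(85_000))  # 700.0
print(ex_post(85_000))  # 700.0
```

Both routes net the same $700 at every income level; the only practical difference, as the text notes, is that the ex post tax can use actual 2020 income rather than stale 2018–19 data.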
Provide Regular Payments
Third, the CARES Act provides one lump sum payment, with struggling Americans wondering whether Congress will act again. This is a missed opportunity: Studies show that families receiving SNAP benefits face challenges planning for even a month at a time. Lesson three is that guaranteed monthly or bi-weekly payments – as a true UBI would provide — would help households plan and provide some peace of mind amidst this uncertainty.
Provide Equal Payments to Children and Adults
Finally, the CARES Act provides a smaller benefit to children than to adults. This is nonsensical. A single parent with two children faces greater hardship than a married couple with one child, as she has the same number of mouths to feed with fewer earners. Further, social science evidence suggests that augmenting family income has positive long-run consequences for children. Lesson four from the CARES Act is that the empirical case for a UBI is strongest for families with children.
It’s Better to Be Overly, Not Insufficiently, Generous
The Act’s direct cash payments are a step in the right direction. But they demonstrate that not all cash assistance plans are created equal. Uniform and periodic payments to all – regardless of age and one’s relationship to the workforce – is the best way to protect Americans’ economic health, pandemic or not. This is not the time to be stingy or moralistic in our assistance. Better to err on the side of being overly generous now, especially when we can correct that error later through the tax system. Errors that result in withholding aid from those who need it, alas, might not be so easy to correct.
At a time when nations are engaged in bidding wars in the worldwide market to alleviate shortages of critical medical necessities during the Covid-19 crisis, it raises the question: have free trade and competition policies, and the efficient globally integrated market networks they produced, gone too far? Did economists and policymakers advocating for efficient competitive markets not foresee a failure of the supply chain to meet a surge in demand during an inevitable global crisis such as this one?
The failures in securing medical supplies have escalated a global health crisis into geopolitical spats fuelled by strong nationalistic public sentiments. In the process of competing to acquire highly treasured medical equipment, governments are confiscating, outbidding, and diverting shipments at the risk of not adhering to the terms of established free trade agreements and international trading rules, all at the cost of the humanitarian needs of other nations.
Since the start of the Covid-19 crisis, all levels of government in Canada have been working to diversify the supply chain for critical equipment both domestically and internationally. Most importantly, these governments are bolstering domestic production and an integrated domestic supply network, recognizing the increasing likelihood that tightening borders will impede the movement of critical products.
For the past three weeks in his daily briefings, Canada’s Prime Minister, Justin Trudeau, has repeatedly confirmed the government’s support of domestic enterprises that are switching their manufacturing lines to produce critical medical supplies and other “made in Canada” products.
As conditions worsen in the US and the White House hardens its position against collaboration and sharing for the greater global humanitarian good – even in the presence of a recent bilateral agreement to keep the movement of essential goods fluid – Canada’s response has become more retaliatory, shifting to a message emphasizing that the need for “made in Canada” products is one of extreme urgency.
On April 3rd, President Trump ordered Minnesota-based 3M to stop exporting medical-grade masks to Canada and Latin America, a decision enabled by the triggering of the 1950 Defense Production Act. In response, Ontario Premier Doug Ford stated in his public address:
Never again in the history of Canada should we ever be beholden to companies around the world for the safety and wellbeing of the people of Canada. There is nothing we can’t build right here in Ontario. As we get these companies round up and we get through this, we can’t be going over to other sources because we’re going to save a nickel.
Premier Ford’s words ring true for many Canadians as they watch this crisis unfold and wonder where it will stop if the crisis worsens. Will our neighbour to the south block shipments of a Covid-19 vaccine when one is developed? Will it extend to other essential goods such as food or medicine?
There are reports that the decline in the number of foreign workers in farming caused by travel restrictions and quarantine rules in both Canada and the US will cause food production shortages, which makes the actions of the White House very unsettling for Canadians. Canada’s exports to the US constitute 75% of total Canadian exports, while imports from the US constitute 46%. Canada’s imports of food and beverages from the US were valued at US $24 billion in 2018 including: prepared foods, fresh vegetables, fresh fruits, other snack foods, and non-alcoholic beverages.
The length and depth of the crisis will determine to what extent the US and Canadian markets will experience shortages in products. For Canada, the severity of the pandemic in the US could result in further restrictions on the border. And it is becoming progressively more likely that it will also result in a significant reduction in the volume of necessities crossing the border between the two nations.
Increasingly, the depth and pain of shortages in necessities will shape public sentiment towards free trade and strengthen mainstream demands for more nationalistic and protectionist policies. This will put more pressure on political and government establishments to take action.
The reliance on free trade and competition policies favouring highly integrated supply chain networks is showing cracks in meeting national interests in this time of crisis. This goes well beyond the usual economic factors of contention between countries of domestic employment, job loss and resource allocation. The need for correction, however, risks moving the pendulum too far to the side of protectionism.
Free trade setbacks and disruptions to global integration would become the new economic reality as nations ensure that domestic self-sufficiency comes first. A new trade trend has been set in motion, and there is no going back from some level of disintegration of globalised supply-chain production.
How would domestic self-sufficiency be achieved?
Would international conglomerates build local plants and forgo their profit maximizing strategies of producing in growing economies that offer cheap wages and resources in order to avoid increased protectionism?
Will the Canada-United States-Mexico Agreement (CUSMA), known as the “New NAFTA,” which has yet to enter into force, be renegotiated to allow for measures securing domestic necessities in the form of higher tariffs, trade quotas, and state subsidies?
Are advanced capitalist economies willing to create state-owned industries to produce domestically what they deem necessities?
Many other trade policy variations and options focused on protectionism are possible which could lead to the creation of domestic monopolies. Furthermore, any return to protected national production networks will reduce consumer welfare and eventually impede technological advancements that result from competition.
Divergence Between Free Trade Agreements and Competition Policy in a New Era of Protectionism
For the past 30 years, national competition laws and policies have increasingly become an integrated part of free trade agreements, albeit in the form of soft competition law language, making references to the parties’ respective competition laws, and the need for transparency, procedural fairness in enforcement, and cooperation.
Similarly, free trade objectives and frameworks have become part of the design and implementation of competition legislation and, subsequently, case law. Both are intended to encourage competitive market systems and efficiency, an implied by-product of open markets.
In that regard, the competition legal framework in Canada, the Competition Act, seeks to maintain and strengthen competitive market forces by encouraging maximum efficiency in the use of economic resources. Provisions for determining the level of competitiveness in a market consider barriers to entry, among them tariff and non-tariff barriers to international trade. These provisions further direct adjudicators to examine free trade agreements currently in force and their role in facilitating the current or future entry of an international competitor into the market to preserve or increase competition, and to assess the extent of any increase in the real value of exports or of substitution of domestic products for imported products.
It is evident in the design of free trade agreements and competition legislation that efficiency, price competition, and product diversification are to be achieved through access to imported goods and by encouraging the creation of globally competitive suppliers.
Therefore, the re-emergence of protectionist nationalistic measures in international trade will result in a divergence between competition laws and free trade agreements. Such setbacks would leave competition enforcers, administrators, and adjudicators grappling with the conflict between the economic principles set out in competition law and the policy objectives that could be stipulated in future trade agreements.
The challenge ahead facing governments and industries is how to correct for the cracks in the current globalized competitive supply networks that have been revealed during this crisis without falling into a trap of nationalism and protectionism.
The COVID-19 pandemic and the shutdown of many public-facing businesses have resulted in sudden shifts in demand for common goods. The demand for hand sanitizer has drastically increased for hospitals, businesses, and individuals. At the same time, demand for distilled spirits has fallen substantially, as the closure of bars, restaurants, and tasting rooms has cut craft distillers off from their primary buyers. Since ethanol is a key ingredient in both spirits and sanitizer, this situation presents an obvious opportunity for distillers to shift their production from the former to the latter. Hundreds of distilleries have made this transition, but it has not been without obstacles. Some of these reflect a real scarcity of needed supplies, but other constraints have been externally imposed by government regulations and the tax code.
The World Health Organization provides guidelines and recipes for locally producing hand sanitizer. The relevant formulation for distilleries calls for only four ingredients: high-proof ethanol (96%), hydrogen peroxide (3%), glycerol (98%), and sterile distilled or boiled water. Distilleries are well-positioned to produce or obtain ethanol and water. Glycerol is used in only small amounts and does not currently appear to be a substantial constraint on production. Hydrogen peroxide is harder to come by, but distilleries are adapting and cooperating to ensure supply. Skip Tognetti, owner of Letterpress Distilling in Seattle, Washington, reports that one local distiller obtained a drum of 34% hydrogen peroxide, which stretches a long way when diluted to a concentration of 3%. Local distillers have been sharing this drum so that they can all produce sanitizer.
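The arithmetic behind sharing that drum follows the standard dilution identity C1·V1 = C2·V2. A minimal sketch (the 55-gallon drum size is a hypothetical assumption; the 34% stock and 3% target concentrations are from the anecdote above):

```python
# Dilution identity: stock_conc * stock_volume = target_conc * diluted_volume,
# so the diluted volume obtainable is stock_volume * (stock_conc / target_conc).
def diluted_volume(stock_volume, stock_conc, target_conc):
    """Volume of solution at target_conc obtainable from a given stock."""
    return stock_volume * (stock_conc / target_conc)

# A hypothetical 55-gallon drum of 34% hydrogen peroxide, diluted to the
# WHO-recommended 3%, yields over 600 gallons of usable ingredient:
print(round(diluted_volume(55, 0.34, 0.03), 1))  # → 623.3
```

Since hydrogen peroxide is a minor ingredient in the final handrub, a single drum really can supply many distilleries.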
Another constraint is finding containers in which to put the finished product. Not all containers are suitable for holding high-proof alcoholic solutions, and supplies of those that are recommended for sanitizer are scarce. The fact that many of these bottles are produced in China has reportedly also limited the supply. Distillers are therefore having to get creative: Tognetti reports looking into shampoo bottles, and in Chicago distillers have repurposed glass beer growlers. Through informal channels, some distillers have allowed consumers to bring their own containers to fill with sanitizer for personal use. Food and Drug Administration labeling requirements have also prevented the use of travel-size bottles, since the bottles are too small to display the necessary information.
The raw materials for producing ethanol are also coming from some unexpected sources. Breweries are typically unable to produce alcohol at high enough proof for sanitizer, but multiple breweries in Chicago are donating beer that distilleries can bring up to the required purity. Beer giant Anheuser-Busch is also producing sanitizer with the ethanol removed from its alcohol-free beers.
In many cases, the sanitizer is donated or sold at low cost to hospitals and other essential services, or to local consumers. Online donations have helped to fund some of these efforts, and at least one food and beverage testing lab has stepped up to offer free testing to breweries and distilleries producing sanitizer to ensure compliance with WHO guidelines. Distillers report that the regulatory landscape has been somewhat confusing in recent weeks, and posts in a Facebook group have provided advice for how to get through the FDA’s registration process. In general, distillers going through the process report that agencies have been responsive. Tom Burkleaux of New Deal Distilling in Portland, Oregon says he “had to do some mighty paperwork,” but that the FDA and the Oregon Board of Pharmacy were both quick to process applications, with responses coming in just a few hours or less.
In general, the redirection of craft distilleries to producing hand sanitizer is an example of private businesses responding to market signals and the evident challenges of the health crisis to produce much-needed goods; in some cases, sanitizer represents one of their only sources of revenue during the shutdown, providing a lifeline for small businesses. The Distilled Spirits Council currently lists nearly 600 distilleries making sanitizer in the United States.
There is one significant obstacle that has hindered the production of sanitizer, however: an FDA requirement that distilleries obtain extra ingredients to denature their alcohol.
According to the WHO, the four ingredients mentioned above are all that are needed to make sanitizer. In fact, the WHO specifically notes that in most circumstances it is inadvisable to add anything else: “it is not recommended to add any bittering agents to reduce the risk of ingestion of the handrubs,” except in cases where there is a high probability of accidental ingestion. Further, “[…] there is no published information on the compatibility and deterrent potential of such chemicals when used in alcohol-based handrubs to discourage their abuse. It is important to note that such additives may make the products toxic and add to production costs.”
Denaturing agents are used to render alcohol either too bitter or too toxic to consume, deterring abuse by adults or accidental ingestion by children. In ordinary circumstances, there are valid reasons to denature sanitizer. In the current pandemic, however, the denaturing requirement is a significant bottleneck in production.
The federal Tax and Trade Bureau is the primary agency regulating alcohol production in the United States. The TTB took action early to encourage distilleries to produce sanitizer, officially releasing guidance on March 18 instructing them that they are free to commence production without prior authorization or formula approval, so long as they are making sanitizer in accordance with WHO guidelines. On March 23, the FDA issued its own emergency authorization of hand sanitizer production; unlike the WHO, FDA guidance does require the use of denaturants. As a result, on March 26 the TTB issued new guidance to be consistent with the FDA.
Under current rules, only sanitizer made with denatured alcohol is exempt from the federal excise tax on beverage alcohol. Federal excise taxes begin at $2.70 per gallon for low-volume distilleries and reach up to $13.50 per gallon, significantly increasing the cost of producing hand sanitizer; state excise taxes can raise these costs even higher.
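To see what those per-gallon rates mean at the scale of a single bottle, here is a quick back-of-the-envelope sketch (the 8-ounce bottle size is a hypothetical assumption; the $2.70 and $13.50 rates are the figures cited above):

```python
OUNCES_PER_GALLON = 128  # US fluid ounces in one gallon

def excise_tax_per_bottle(tax_per_gallon, bottle_ounces):
    """Federal excise tax attributable to a single bottle of product."""
    return tax_per_gallon * bottle_ounces / OUNCES_PER_GALLON

# Hypothetical 8 oz bottle at the low-volume rate and at the full rate:
print(round(excise_tax_per_bottle(2.70, 8), 2))   # → 0.17
print(round(excise_tax_per_bottle(13.50, 8), 2))  # → 0.84
```

Tens of cents per small bottle is a substantial burden for a product often donated or sold near cost.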
As Tognetti puts it: “To be clear, if I didn’t have to track down denaturing agents (there are several, but isopropyl alcohol is the most common), I could turn out 200 gallons of finished hand sanitizer TODAY.”
(As an additional concern, the Distilled Spirits Council notes that the extremely bitter or toxic nature of denaturing agents may impose additional costs on distillers given the need to thoroughly cleanse them from their equipment.)
Congress attempted to address these concerns in the CARES Act, the coronavirus relief package. Section 2308 explicitly waives the federal excise tax on distilled spirits used for the production of sanitizer; however, it leaves the formula specification in the hands of the FDA. Unless the agency revises its guidance, production in the US will be constrained by the requirement to add denaturing agents to the plentiful supply of ethanol, or distilleries will risk being targeted with enforcement actions if they produce perfectly usable sanitizer without denaturing their alcohol.
Local distilleries provide agile production capacity
In recent days, larger spirits producers including Pernod-Ricard, Diageo, and Bacardi have announced plans to produce sanitizer. Given their resources and economies of scale, they may end up taking over a significant part of the market. Yet small, local distilleries have displayed the agility necessary to rapidly shift production. It’s worth noting that many of these distilleries did not exist until fairly recently. According to the American Craft Spirits Association, there were fewer than 100 craft distilleries operating in the United States in 2005. By 2018, there were more than 1,800. This growth is the result of changing consumer interests, but also the liberalization of state and local laws to permit distilleries and tasting rooms. That many of these distilleries have the capacity to produce sanitizer in a time of emergency is a welcome, if unintended, consequence of this liberalization.
[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.
This post is authored by Ben Sperry (Associate Director, Legal Research, International Center for Law & Economics).]
The visceral reaction to the New York Times’ recent story on Matt Colvin, the man who had 17,700 bottles of hand sanitizer with nowhere to sell them, shows there is a fundamental misunderstanding of the importance of prices and the informational function they serve in the economy. Calls to enforce laws against “price gouging” may actually prove more harmful to consumers and society than allowing prices to rise (or fall, of course) in response to market conditions.
Nobel-prize winning economist Friedrich Hayek explained how price signals serve as information that allows for coordination in a market society:
We must look at the price system as such a mechanism for communicating information if we want to understand its real function… The most significant fact about this system is the economy of knowledge with which it operates, or how little the individual participants need to know in order to be able to take the right action. In abbreviated form, by a kind of symbol, only the most essential information is passed on and passed on only to those concerned. It is more than a metaphor to describe the price system as a kind of machinery for registering change, or a system of telecommunications which enables individual producers to watch merely the movement of a few pointers, as an engineer might watch the hands of a few dials, in order to adjust their activities to changes of which they may never know more than is reflected in the price movement.
Economic actors don’t need a PhD in economics or even to pay attention to the news about the coronavirus to change their behavior. Higher prices for goods or services alone give important information to individuals — whether consumers, producers, distributors, or entrepreneurs — to conserve scarce resources, produce more, and look for (or invest in creating!) alternatives.
Prices are fundamental to rationing scarce resources, especially during an emergency. Allowing prices to rise rapidly has three salutary effects (as explained by Professor Michael Munger in his terrific Twitter thread):
Consumers ration how much they really need;
Producers respond to the rising prices by ramping up supply and distributors make more available; and
Entrepreneurs find new substitutes in order to innovate around bottlenecks in the supply chain.
Despite the distaste with which the public often treats “price gouging,” officials should take care to ensure that they don’t prevent these three necessary responses from occurring.
Rationing by consumers
During a crisis, if prices for goods that are in high demand but short supply are forced to stay at pre-crisis levels, the informational signal of a shortage isn’t given — at least by the market directly. This encourages consumers to buy more than is rationally justified under the circumstances. This stockpiling leads to shortages.
Companies respond by rationing in various ways, like instituting shorter hours or placing limits on how much of certain high-demand goods can be bought by any one consumer. Lines (and unavailability), instead of price, become the primary cost borne by consumers trying to obtain the scarce but underpriced goods.
If, instead, prices rise in light of the short supply and high demand, price-elastic consumers will buy less, freeing up supply for others. And, critically, price-inelastic consumers (i.e. those who most need the good) will be provided a better shot at purchase.
According to the New York Times story on Mr. Colvin, he focused on buying out the hand sanitizer in rural areas of Tennessee and Kentucky, since the major metro areas were already cleaned out. His goal was to then sell these hand sanitizers (and other high-demand goods) online at market prices. He was essentially acting as a speculator and bringing information to the market (much like an insider trader). If successful, he would be coordinating supply and demand between geographical areas by successfully arbitraging. This often occurs when emergencies are localized, like post-Katrina New Orleans or post-Irma Florida. In those cases, higher prices induced suppliers to shift goods and services from around the country to the affected areas. Similarly, here Mr. Colvin was arguably providing a beneficial service, by shifting the supply of high-demand goods from low-demand rural areas to consumers facing localized shortages.
For those who object to Mr. Colvin’s bulk purchasing-for-resale scheme, the answer is similar to those who object to ticket resellers: the retailer should raise the price. If the Walmarts, Targets, and Dollar Trees raised prices or rationed supply like the supermarket in Denmark, Mr. Colvin would not have been able to afford nearly as much hand sanitizer. (Of course, it’s also possible — had those outlets raised prices — that Mr. Colvin would not have been able to profitably re-route the excess local supply to those in other parts of the country most in need.)
The role of “price gouging” laws and social norms
A common retort, of course, is that Colvin was able to profit from the pandemic precisely because he was able to purchase a large amount of stock at normal retail prices, even after the pandemic began. Thus, he was not a producer who happened to have a restricted amount of supply in the face of new demand, but a mere reseller who exacerbated the supply shortage problems.
But such an observation truncates the analysis and misses the crucial role that social norms against “price gouging” and state “price gouging” laws play in facilitating shortages during a crisis.
Under these laws, typically retailers may raise prices by at most 10% during a declared state of emergency. But even without such laws, brick-and-mortar businesses are tied to a location in which they are repeat players, and they may not want to take a reputational hit by raising prices during an emergency and violating the “price gouging” norm. By contrast, individual sellers, especially pseudonymous third-party sellers using online platforms, do not rely on repeat interactions to the same degree, and may be harder to track down for prosecution.
Thus, the social norms and laws exacerbate the conditions that create the need for emergency pricing, and lead to outsized arbitrage opportunities for those willing to violate norms and the law. But, critically, this violation is only a symptom of the larger problem that social norms and laws stand in the way, in the first instance, of retailers using emergency pricing to ration scarce supplies.
Normally, third-party sales sites have much more dynamic pricing than brick and mortar outlets, which just tend to run out of underpriced goods for a period of time rather than raise prices. This explains why Mr. Colvin was able to sell hand sanitizer for prices much higher than retail on Amazon before the site suspended his ability to do so. On the other hand, in response to public criticism, Amazon, Walmart, eBay, and other platforms continue to crack down on third party “price-gouging” on their sites.
But even PR-centric anti-gouging campaigns are not ultimately immune to the laws of supply and demand. Even Amazon.com, as a first party seller, ends up needing to raise prices, ostensibly as the pricing feedback mechanisms respond to cost increases up and down the supply chain.
The desire to help the poor who cannot afford higher priced essentials is what drives the policy responses, but in reality no one benefits from shortages. Those who stockpile the in-demand goods are unlikely to be poor because doing so entails a significant upfront cost. And if they are poor, then the potential for resale at a higher price would be a benefit.
Increased production and distribution
During a crisis, it is imperative that spiking demand is met by increased production. Prices are feedback mechanisms that provide realistic estimates of demand to producers. Even if good-hearted producers forswearing the profit motive want to increase production as an act of charity, they still need to understand consumer demand in order to produce the correct amount.
Of course, prices are not the only source of information. Producers reading the news that there is a shortage undoubtedly can ramp up their production. But even still, in order to optimize production (i.e., not just blindly increase output and hope they get it right), they need a feedback mechanism. Prices are the most efficient mechanism available for quickly translating the amount of social need (demand) for a given product to guarantee that producers do not undersupply the product (leaving more people without than need the good), or oversupply the product (consuming more resources than necessary in a time of crisis). Prices, when allowed to adjust to actual demand, thus allow society to avoid exacerbating shortages and misallocating resources.
The opportunity to earn more profit incentivizes distributors all along the supply chain. Amazon is hiring 100,000 workers to help ship all the products which are being ordered right now. Grocers and retailers are doing their best to line the shelves with more in-demand food and supplies.
Distributors rely on more than just price signals alone, obviously, such as information about how quickly goods are selling out. But even as retail prices stay low for consumers for many goods, distributors often are paying more to producers in order to keep the shelves full, as in the case of eggs. These are the relevant price signals for producers to increase production to meet demand.
For instance, hand sanitizer companies like GOJO and EO Products are ramping up production in response to known demand (so much that the price of isopropyl alcohol is jumping sharply). Farmers are trying to produce as much as is necessary to meet the increased orders (and prices) they are receiving. Even previously low-demand goods like beans are facing a boom time. These instances are likely caused by a mix of anticipatory response based on general news, as well as the slightly laggier price signals flowing through the supply chain. But, even with an “early warning” from the media, the manufacturers still need to ultimately shape their behavior with more precise information. This comes in the form of orders from retailers at increased frequencies and prices, which are both rising because of insufficient supply. In search of the most important price signal, profits, manufacturers and farmers are increasing production.
These responses to higher prices have the salutary effect of making available more of the products consumers need the most during a crisis.
Unfortunately, however, government regulations on sales of distilled products and concerns about licensing have led distillers to give away those products rather than charge for them. Thus, beneficial as this may be, without the ability to efficiently price such products, not nearly as much will be produced as would otherwise be. The non-emergency price of zero effectively guarantees continued shortages because the demand for these free alternatives will far outstrip supply.
Amazon is now prioritizing the shipment of high-demand goods like household staples and medical supplies in its fulfillment services.
Without price signals, entrepreneurs would have far less incentive to shift production and distribution to the highest valued use.
While stories like that of Mr. Colvin buying all of the hand sanitizer in Tennessee understandably bother people, government efforts to prevent prices from adjusting only impede the information sharing processes inherent in markets.
If the concern is to help the poor, it would be better to pursue less distortionary public policy than arbitrarily capping prices. The US government, for instance, is currently considering a progressively tiered one-time payment to lower income individuals.
Moves to create new and enforce existing “price-gouging” laws are likely to become more prevalent the longer shortages persist. Platforms will likely continue to receive pressure to remove “price-gougers,” as well. These policies should be resisted. Not only will these moves not prevent shortages, they will exacerbate them and push the sale of high-demand goods into grey markets where prices will likely be even higher.
Prices are an important source of information not only for consumers, but for producers, distributors, and entrepreneurs. Short circuiting this signal will only be to the detriment of society.
Carrie Wade, Ph.D., MPH is the Director of Harm Reduction Policy and Senior Fellow at the R Street Institute.
Abstinence approaches work exceedingly well on an individual level but continue to fail when applied to populations. We can see this in several areas: teen pregnancy; continued drug use regardless of severe criminal penalties; and high smoking rates in vulnerable populations, despite targeted efforts to prevent youth and adult uptake.
The good news is that abstinence-oriented prevention strategies do seem to have a positive effect on smoking. Overall, teen use has steadily declined since 1996. This may be attributed to an increase in educational efforts to prevent uptake, stiff penalties for retailers who fail to verify legal age of purchase, the increased cost of cigarettes, and a myriad of other interventions.
Unfortunately, many are left behind. Populations with lower levels of educational attainment, African Americans, and, ironically, those with less disposable income have smoking rates two to three times that of the general population. In light of this, how can we help people for whom the abstinence-only message has failed? Harm reduction strategies can have a positive effect on the quality of life of smokers who cannot or do not wish to quit.
Why harm reduction?
Harm reduction approaches recognize that reduction in risky behavior is one possible means to address public health goals. They take a pragmatic approach to the consequences of risk behaviors – focusing on short-term attainable goals rather than long-term ideals—and provide options beyond abstinence to decrease harm relative to the riskier behavior.
In economic terms, traditional public health approaches to drug use target supply and demand, which is to say they attempt to decrease the supply of a drug while also reducing the demand for it. But this often leads to riskier behaviors and adverse outcomes. For example, when prescription opioids were restricted, those who were not deterred by such an inconvenience switched to heroin; when heroin became tricky to smuggle, traffickers switched to fentanyl. We might predict the same effects when it comes to cigarettes.
Given this, since we know that the riskiest of behaviors, such as tobacco, alcohol and other drug use will continue—and possibly flourish in many populations—we should instead focus on ways to decrease the supply of the most dangerous methods of use and increase the supply of and demand for safer, innovative tools. This is the crux of harm reduction.
Opioid Harm Reduction
Like most innovation, harm reduction strategies for opioid and/or injection drug users were born out of a need. In the 1980s, sterile syringes were certainly not an innovative technology. However, the idea that clean needle distribution could put a quick end to the transmission of the Hepatitis B virus in Amsterdam was, and the success of this intervention was noticed worldwide.
Although clean needle distribution was illegal at the time, activists who saw a need for this humanitarian intervention risked jail time and high fines to reduce the risk of infectious disease transmission among injection drug users in New Haven and Boston. Making such programs accessible was not an easy thing to do. Amid fears that dangerous drug use might increase, and the perception that harm reduction programs would tacitly endorse illegal activity, governments and institutions resisted adopting harm reduction strategies as a public health intervention.
However, following a noticeable decrease in the incidence of HIV in this population, syringe exchange access expanded across the United States and Europe. At first, clean syringe access programs (SAPs) operated with the consent of the communities they served, but as the idea spread, these programs received financial and logistical support from several health departments. As of 2014, there were over 200 SAPs operating in 33 states and the District of Columbia.
Time has shown that these approaches are wildly successful in their primary objective and enormously cost effective. In 2008, Washington D.C. allocated $650,000 to increase harm reduction services including syringe access. As of 2011, it was estimated that this investment had averted 120 cases of HIV, saving $44 million.
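The figures above imply a striking cost-effectiveness ratio. A quick sketch using only the numbers in that paragraph (the per-case quotient is simple division, not a formal health-economics estimate):

```python
investment = 650_000            # D.C. harm reduction funding, 2008
cases_averted = 120             # estimated HIV cases averted by 2011
treatment_savings = 44_000_000  # estimated treatment costs avoided

cost_per_case_averted = investment / cases_averted
savings_per_dollar = treatment_savings / investment

print(round(cost_per_case_averted))  # → 5417 (dollars per HIV case averted)
print(round(savings_per_dollar, 1))  # → 67.7 (dollars saved per dollar spent)
```

Roughly $68 saved for every dollar invested is the kind of return few public health interventions can match.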
Seven studies conducted by leading scientific and governmental agencies from 1991 through 2001 have also concluded that syringe access programs result in a decrease in HIV transmission without residual effects of increased injection drug use. In addition, SAPs are correlated with increased entry into treatment and detox programs and do not result in increases in crime in neighborhoods that support these programs.
Tobacco harm reduction
We know that some populations have a higher risk of smoking and of developing and dying from smoking-related diseases. With successful one-year quit rates hovering around 10 percent, harm reduction strategies can offer ways to transition smokers off of the most dangerous nicotine delivery device: the combustible cigarette.
In 2008, the World Health Organization developed the MPOWER policy package, aimed at reducing the burden of cigarette smoking worldwide. In its vision statement, the WHO explicitly states a goal where “no child or adult is exposed to tobacco smoke.”
Using an abstinence-only framework, MPOWER strategies are:
To monitor tobacco use and obtain data on use in youth and adults;
To protect society from second-hand smoke and decrease the availability of places that people are allowed to smoke by enacting and enforcing indoor smoking bans;
To offer assistance in smoking cessation through strengthening health systems and legalization of nicotine replacement therapies (NRTs) and other pharmaceutical interventions where necessary;
To warn the public of the dangers of smoking through public health campaigns, package warnings and counter advertising;
To enact and enforce advertising bans; and
To raise tobacco excise taxes.
These strategies have been shown to reduce the prevalence of tobacco use. People who quit smoking have a greater chance of remaining abstinent if they use NRTs. People exposed to pictorial health warnings are more likely to say they want to quit as a result. Countries with comprehensive advertising bans have a larger decrease in smoking rates compared to those without. Raising taxes has proven consistently to reduce consumption of tobacco products.
As a practical matter, the abstinence approach is also limited by individual governments’ laws. Article 13 of the Framework Convention on Tobacco Control recognizes that constitutional principles or laws may limit the capabilities of governments to implement these policy measures. In the United States, cigarettes are all but protected by the complexity of both the 1998 Master Settlement Agreement and the Family Smoking Prevention and Tobacco Control Act of 2009. This guarantees availability to consumers – ironically increasing the need for more reduced-risk nicotine products, such as e-cigarettes, heat-not-burn devices, or oral snus, all of which offer an alternative to combustible use for people who either cannot or do not wish to quit smoking.
Several regulatory agencies, including the FDA in the United States and Public Health England in the United Kingdom, recognize that tobacco products exist on a continuum of risk, with combustible products (the most widely used) being the most dangerous and non-combustible products existing on the opposite end of the spectrum. In fact, Public Health England estimates that e-cigarettes are at least 95% safer than combustible products and many toxicological and epidemiological studies support this assertion.
Of course, for tobacco harm reduction to work, people must have an incentive to move away from combustible cigarettes. There are two equally important strategies to convince people to do so. First, public health officials need to acknowledge that e-cigarettes are less risky. Continued mixed messages from government officials and tobacco use prevention organizations confuse people regarding the actual risks from e-cigarettes. Over half of adults in the United States believe that nicotine is the culprit in smoking-related illnesses – and who can blame them, when our current tobacco control strategies are focused on lowering nicotine concentrations and ridding our world of e-cigarettes?
The second is price. People who cannot or do not wish to quit smoking will never switch to safer alternatives if those alternatives are as expensive as, or more expensive than, cigarettes. Keeping the total cost of reduced-risk products low will encourage people who might not otherwise consider switching to do so. The best available estimates show that e-cigarette demand is much more sensitive to price increases than demand for combustible cigarettes – meaning that smokers are unlikely to respond to cigarette price increases meant to dissuade them from smoking, but are less likely to vape as a means to quit or as a safer alternative when e-cigarette prices rise.
Of course strategies to prevent smoking or encourage cessation should be a priority for all populations that smoke, but harm-reduction approaches—in particular with respect to smoking—play a vital role in decreasing death and disease in people who engage in such risky behavior. For this reason, they should always be promoted alongside abstinence approaches.
Historically, taxes had the key purpose of raising revenue. The “best” taxes would be on goods with few substitutes (i.e., inelastic demand) and on goods deemed to be luxuries. In The Wealth of Nations, Adam Smith notes:
Sugar, rum, and tobacco are commodities which are nowhere necessaries of life, which are become objects of almost universal consumption, and which are therefore extremely proper subjects of taxation.
The Economist notes that in 1764, a fiscal crisis driven by wars in North America led Britain’s parliament to begin enforcing tariffs on sugar and molasses imported from outside the empire. In the U.S., from 1868 until 1913, 90 percent of all federal revenue came from taxes on liquor, beer, wine, and tobacco.
Over time, the rationale for these taxes has shifted toward “sin taxes” designed to nudge consumers away from harmful or distasteful consumption. The Temperance movement in the U.S. argued for higher taxes to discourage alcohol consumption. Since the Surgeon General’s warning on the dangers of smoking, tobacco tax increases have been justified as a way to get smokers to quit. More recently, a perceived obesity epidemic has led several American cities, as well as Thailand, Britain, Ireland, and South Africa, to impose taxes on sugar-sweetened beverages to reduce sugar consumption.
Because demand curves slope down, “sin taxes” do change behavior by reducing the quantity demanded. However, for many products subject to such taxes, demand is not especially responsive. For example, as shown in the figure below, a one percent increase in the price of tobacco is associated with a one-half of one percent decrease in sales.
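That relationship is just the price elasticity of demand: the percentage change in quantity demanded is the elasticity times the percentage change in price. A minimal sketch using the −0.5 elasticity implied above (the 10% price-increase scenario is a hypothetical illustration):

```python
def pct_quantity_change(elasticity, pct_price_change):
    """Approximate % change in quantity demanded for a given % price change."""
    return elasticity * pct_price_change

TOBACCO_ELASTICITY = -0.5  # implied by the 1% price / 0.5% sales figure above

print(pct_quantity_change(TOBACCO_ELASTICITY, 1))   # → -0.5
print(pct_quantity_change(TOBACCO_ELASTICITY, 10))  # → -5.0
```

An elasticity this far below one (in absolute value) is what economists mean by inelastic demand: even a large tax-driven price increase reduces consumption only modestly.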
Substitutability is another consideration for tax policy. An increase in the tax on spirits will result in an increase in beer and wine purchases. A high toll on a road will divert traffic to untolled streets that may not be designed for increased traffic volumes. A spike in tobacco taxes in one state will result in a spike in sales in bordering states as well as increase illegal interstate sales or smuggling. The Economist reports:
After Berkeley introduced its tax, sales of sugary drinks rose by 6.9% in neighbouring cities. Denmark, which instituted a tax on fat-laden foods in 2011, ran into similar problems. The government got rid of the tax a year later when it discovered that many shoppers were buying butter in neighbouring Germany and Sweden.
Advocates of “sin” taxes on tobacco, alcohol, and sugar argue that consumption of these products imposes negative externalities on the public, since governments have to spend more to take care of sick people. With approximately one-third of the U.S. population covered by some form of government-funded health insurance, such as Medicare or Medicaid, what were once private costs of healthcare have been transformed into a public cost.
According to the U.S. Centers for Disease Control and Prevention (CDC), smoking-related illness in the U.S. costs more than $300 billion each year, including (1) nearly $170 billion in direct medical care for adults and (2) more than $156 billion in lost productivity, including $5.6 billion in lost productivity due to secondhand smoke exposure.
On the other hand, The Economist points out:
Smoking, in contrast, probably saves taxpayers money. Lifelong smoking will bring forward a person’s death by about ten years, which means that smokers tend to die just as they would start drawing from state pensions. In a study published in 2002 Kip Viscusi, an economist at Vanderbilt University who has served as an expert witness on behalf of tobacco companies, estimated that even if tobacco were untaxed, Americans could still expect to save the government an average of 32 cents for every pack of cigarettes they smoke.
The CDC’s cost estimates raise important questions regarding who bears the burden of smoking-related illness. For example, much of the direct cost is borne by private insurers, which charge steeper premiums for customers who smoke. In addition, the CDC estimates reflect costs imposed by people who have smoked for decades—many of whom have now quit. A proper accounting of the costs vis-à-vis tax policy should evaluate the discounted costs imposed by today’s smokers.
State and local governments in the U.S. collect more than $18 billion a year in tobacco taxes. While some jurisdictions earmark a portion of tobacco taxes for prevention and cessation efforts, in practice most tobacco taxes are treated by policymakers as general revenues to be spent in whatever way the legislative body determines. Thus, in practice, there is no clear nexus between taxes levied on tobacco and government’s use of the tax revenues on smoking-related costs.
Most of the harm from smoking is caused by the inhalation of toxicants released through the combustion of tobacco. Public Health England and the American Cancer Society have concluded that non-combustible tobacco products, such as e-cigarettes, “heat-not-burn” products, and smokeless tobacco, are considerably less harmful than combustible products.
Many experts believe that the best option for smokers who are unable or unwilling to quit smoking is to switch to a less harmful alternative activity that has similar attributes, such as using non-combustible nicotine delivery products. Policies that encourage smokers to switch from more harmful combustible tobacco products to less harmful non-combustible products would be considered a form of “harm reduction.”
Nine U.S. states now have taxes on vapor products. In addition, several local jurisdictions have enacted taxes. Their methods and levels of taxation vary widely. Policymakers considering a tax on vapor products should account for the following factors.
The current market for e-cigarettes and heat-not-burn products is in the range of 0-10 percent of the cigarette market. Given the relatively small size of the e-cigarette and heated tobacco product market, it is unlikely that any level of taxation of these products would generate significant tax revenues for the taxing jurisdiction. Moreover, much of the current research likely represents early adopters and higher-income consumer groups. As such, the current empirical data based on total market size and price/tax levels are likely to be far from indicative of the “actual” market for these products.
The demand for e-cigarettes is much more responsive to a change in price than the demand for combustible cigarettes. My review of the published research to date finds the median estimated own-price elasticity is -1.096, meaning something close to a 1-to-1 relationship: a tax resulting in a one percent increase in e-cigarette prices would be associated with a one percent decline in e-cigarette sales. Many of those lost sales would be shifted to purchases of combustible cigarettes.
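An elasticity below -1 has a further implication for tax revenues: since total spending is price times quantity, the percent change in spending is approximately the percent price change plus the percent quantity change. A hedged sketch (the -1.096 figure is the median estimate cited above; the formula is a first-order approximation valid only for small changes):

```python
def revenue_change_pct(price_change_pct: float, elasticity: float) -> float:
    """Approximate percent change in total spending (price x quantity):
    %dRev ~= %dP + %dQ = %dP * (1 + elasticity), for small changes."""
    return price_change_pct * (1.0 + elasticity)

# E-cigarettes: median estimated elasticity of -1.096 (from the text).
print(revenue_change_pct(10.0, -1.096))  # a 10% price rise shrinks total spending by ~1%
# Combustible cigarettes: elasticity of roughly -0.5.
print(revenue_change_pct(10.0, -0.5))    # a 10% price rise raises total spending by ~5%
```

The contrast illustrates why elastic demand makes vapor taxes a weak revenue instrument: above unit elasticity, raising the price shrinks, rather than grows, total spending on the product.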
Research on the price responsiveness of vapor products is relatively new and sparse. There are fewer than a dozen published articles, and the first article was published in 2014. As a result, the literature reports a wide range of estimated elasticities that calls into question the reliability of published estimates, as shown in the figure below. As a relatively unformed area of research, the policy debate would benefit from additional research that involves larger samples with better statistical power, reflects the dynamic nature of this new product category, and accounts for the wide variety of vapor products.
With respect to taxation and pricing, policymakers would benefit from reliable information regarding the size of the vapor product market and the degree to which vapor products are substitutes for combustible tobacco products. It may turn out that a tax on vapor products is, as The Economist notes, less efficient than it looks.
Many reporters, analysts, and even competition authorities have adopted various degrees of the usual stance that big is bad, and bigger is even badder. But worse yet, once this presumption applies, agencies have been skeptical of claimed efficiencies, placing a heightened burden on the merging parties to prove them and often ignoring them altogether. And, of course (and perhaps even worse still), there is the perennial problem of (often questionable) market definition — which tanked the Sysco/US Foods merger and which undergirds the FTC’s challenge of the Staples/Office Depot merger.
All of these issues are at play in the proposed acquisition of British aluminum can manufacturer Rexam PLC by American can manufacturer Ball Corp., which has likewise drawn the attention of competition authorities around the world — including those in Brazil, the European Union, and the United States.
But the Ball/Rexam merger has met with some important regulatory successes. Just recently the members of CADE, Brazil’s competition authority, unanimously approved the merger with limited divestitures. The most recent reports also indicate that the EU will likely approve it, as well. It’s now largely down to the FTC, which should approve the merger and not kill it or over-burden it with required divestitures on the basis of questionable antitrust economics.
The proposed merger raises a number of interesting issues in the surprisingly complex beverage container market. But this merger merits regulatory approval.
The International Center for Law & Economics recently released a research paper entitled, The Ball-Rexam Merger: The Case for a Competitive Can Market. The white paper offers an in-depth assessment of the economics of the beverage packaging industry; the place of the Ball-Rexam merger within this remarkably complex, global market; and the likely competitive effects of the deal.
The upshot is that the proposed merger is unlikely to have anticompetitive effects, and any competitive concerns that do arise can be readily addressed by a few targeted divestitures.
The bottom line
The production and distribution of aluminum cans is a surprisingly dynamic industry, characterized by evolving technology, shifting demand, complex bargaining dynamics, and significant changes in the costs of production and distribution. Despite the superficial appearance that the proposed merger will increase concentration in aluminum can manufacturing, we conclude that a proper understanding of the marketplace dynamics suggests that the merger is unlikely to have actual anticompetitive effects.
All told, and as we summarize in our Executive Summary, we found at least seven specific reasons for this conclusion:
Because the appropriately defined product market includes not only stand-alone can manufacturers, but also vertically integrated beverage companies, as well as plastic and glass packaging manufacturers, the actual increase in concentration from the merger will be substantially less than suggested by the change in the number of nationwide aluminum can manufacturers.
Moreover, in nearly all of the relevant geographic markets (which are much smaller than the typically nationwide markets from which concentration numbers are derived), the merger will not affect market concentration at all.
While beverage packaging isn’t a typical, rapidly evolving, high-technology market, technological change is occurring. Coupled with shifting consumer demand (often driven by powerful beverage company marketing efforts), and considerable (and increasing) buyer power, historical beverage packaging market shares may have little predictive value going forward.
The key importance of transportation costs and the effects of current input prices suggest that expanding demand can be effectively met only by expanding the geographic scope of production and by economizing on aluminum supply costs. These, in turn, suggest that increasing overall market concentration is consistent with increased, rather than decreased, competitiveness.
The markets in which Ball and Rexam operate are dominated by a few large customers, who are themselves direct competitors in the upstream marketplace. These companies have shown a remarkable willingness and ability to invest in competing packaging supply capacity and to exert their substantial buyer power to discipline prices.
For this same reason, complaints leveled against the proposed merger by these beverage giants — which are as much competitors as they are customers of the merging companies — should be viewed with skepticism.
Finally, the merger should generate significant managerial and overhead efficiencies, and the merged firm’s expanded geographic footprint should allow it to service larger geographic areas for its multinational customers, thus lowering transaction costs and increasing its value to these customers.
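The concentration points above turn on market definition. A standard screen is the Herfindahl-Hirschman Index (HHI), the sum of squared market shares; the sketch below uses purely hypothetical shares (not figures from the white paper) to show why a broader product market yields a much smaller concentration increase from the same merger:

```python
def hhi(shares_pct):
    """Herfindahl-Hirschman Index: the sum of squared market shares,
    with shares expressed in percentage points (max 10,000)."""
    return sum(s ** 2 for s in shares_pct)

# Hypothetical narrow market: stand-alone can makers only, first two merge.
narrow_pre = [40, 35, 25]
narrow_post = [75, 25]
print(hhi(narrow_pre), hhi(narrow_post))  # 3450 -> 6250: a large increase

# The same merger in a broader market that also counts vertically
# integrated beverage companies and glass/plastic packagers.
broad_pre = [20, 17, 13, 20, 15, 15]
broad_post = [37, 13, 20, 15, 15]
print(hhi(broad_pre), hhi(broad_post))    # 1708 -> 2388: a far smaller increase
```

The same transaction produces a far smaller HHI delta once substitute packaging and self-supply are counted, which is the first reason listed above.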
Distinguishing Ardagh: The interchangeability of aluminum and glass
An important potential sticking point for the FTC’s review of the merger is its recent decision to challenge the Ardagh-Saint Gobain merger. The cases are superficially similar, in that they both involve beverage packaging. But Ardagh should not stand as a model for the Commission’s treatment of Ball/Rexam. The FTC made a number of mistakes in Ardagh (including market definition and the treatment of efficiencies — the latter of which brought out a strenuous dissent from Commissioner Wright). But even on its own (questionable) terms, Ardagh shouldn’t mean trouble for Ball/Rexam.
As we noted in our December 1st letter to the FTC on the Ball/Rexam merger, and as we discuss in detail in the paper, the situation in the aluminum can market is quite different than the (alleged) market for “(1) the manufacture and sale of glass containers to Brewers; and (2) the manufacture and sale of glass containers to Distillers” at issue in Ardagh.
Importantly, the FTC found (almost certainly incorrectly, at least for the brewers) that other container types (e.g., plastic bottles and aluminum cans) were not part of the relevant product market in Ardagh. But in the markets in which aluminum cans are a primary form of packaging (most notably, soda and beer), our research indicates that glass, plastic, and aluminum are most definitely substitutes.
The Big Four beverage companies (Coca-Cola, PepsiCo, Anheuser-Busch InBev, and MillerCoors), which collectively make up 80% of the U.S. market for Ball and Rexam, are all vertically integrated to some degree, and provide much of their own supply of containers (a situation significantly different than the distillers in Ardagh). These companies exert powerful price discipline on the aluminum packaging market by, among other things, increasing (or threatening to increase) their own container manufacturing capacity, sponsoring new entry, and shifting production (and, via marketing, consumer demand) to competing packaging types.
For soda, Ardagh is obviously inapposite, as soda packaging wasn’t at issue there. But the FTC’s conclusion in Ardagh that aluminum cans (which in fact make up 56% of the beer packaging market) don’t compete with glass bottles for beer packaging is also suspect.
For aluminum can manufacturers Ball and Rexam, aluminum can’t be excluded from the market (obviously), and much of the beer in the U.S. that is packaged in aluminum is quite clearly also packaged in glass. The FTC claimed in Ardagh that glass and aluminum are consumed in distinct situations, so they don’t exert price pressure on each other. But that ignores the considerable ability of beer manufacturers to influence consumption choices, as well as the reality that consumer preferences for each type of container (whether driven by beer company marketing efforts or not) are merging, with cost considerations dominating other factors.
In fact, consumers consume beer in both packaging types largely interchangeably (with a few limited exceptions — e.g., poolside drinking demands aluminum or plastic), and beer manufacturers readily switch between the two types of packaging as the relative production costs shift.
Craft brewers, to take one important example, are rapidly switching to aluminum from glass, despite a supposed stigma surrounding canned beers. Some craft brewers (particularly the larger ones) package at least some of their beers in both types of containers, or package some brands in glass and others in cans, while for many craft brewers it’s one or the other. Yet there’s no indication that craft beer consumption has fallen off because consumers won’t drink beer from cans in some situations — and obviously the prospect of this outcome hasn’t stopped craft brewers from abandoning bottles entirely in favor of more economical cans, nor has it induced them, as a general rule, to offer both types of packaging.
A very short time ago it might have seemed that aluminum wasn’t in the same market as glass for craft beer packaging. But, as recent trends have borne out, that differentiation wasn’t primarily a function of consumer preference (either at the brewer or end-consumer level). Rather, it was a function of bottling/canning costs (until recently the machinery required for canning was prohibitively expensive), materials costs (at various times glass has been cheaper than aluminum, depending on volume), and transportation costs (which cut against glass, but the relative attractiveness of different packaging materials is importantly a function of variable transportation costs). To be sure, consumer preference isn’t irrelevant, but the ease with which brewers have shifted consumer preferences suggests that it isn’t a strong constraint.
Transportation costs are key
Transportation costs, in fact, are a key part of the story — and of the conclusion that the Ball/Rexam merger is unlikely to have anticompetitive effects. First of all, transporting empty cans (or bottles, for that matter) is tremendously inefficient — which means that the relevant geographic markets for assessing the competitive effects of the Ball/Rexam merger are essentially the largely non-overlapping 200-mile circles around the companies’ manufacturing facilities. Because there are very few markets in which the two companies both have plants, the merger doesn’t change the extent of competition in the vast majority of relevant geographic markets.
But transportation costs are also relevant to the interchangeability of packaging materials. Glass is more expensive to transport than aluminum, and this is true not just for empty bottles, but for full ones, of course. So, among other things, by switching to cans (even if it entails up-front cost), smaller breweries can expand their geographic reach, potentially expanding sales enough to more than cover switching costs. The merger would further lower the costs of cans (and thus of geographic expansion) by enabling beverage companies to transact with a single company across a wider geographic range.
The reality is that the most important factor in packaging choice is cost, and that the packaging alternatives are functionally interchangeable. As a result, and given that the direct consumers of beverage packaging are beverage companies rather than end-consumers, relatively small cost changes readily spur changes in packaging choices. While there are some switching costs that might impede these shifts, they are readily overcome. For large beverage companies that already use multiple types and sizes of packaging for the same product, the costs are trivial: They already have packaging designs, marketing materials, distribution facilities and the like in place. For smaller companies, a shift can be more difficult, but innovations in labeling, mobile canning/bottling facilities, outsourced distribution and the like significantly reduce these costs.
“There’s a great future in plastics”
All of this is even more true for plastic — even in the beer market. In fact, in 2010, 10% of the beer consumed in Europe was sold in plastic bottles, as was 15% of all beer consumed in South Korea. We weren’t able to find reliable numbers for the U.S., but particularly for cheaper beers, U.S. brewers are increasingly moving to plastic. And plastic bottles are the norm at stadiums and arenas. Whatever the exact numbers, clearly plastic holds a small fraction of the beer container market compared to glass and aluminum. But that number is just as clearly growing, and as cost considerations impel them (and technology enables them), giant, powerful brewers like AB InBev and MillerCoors are certainly willing and able to push consumers toward plastic.
Meanwhile, soda companies like Coca-Cola and Pepsi have successfully moved their markets so that today a majority of packaged soda is sold in plastic containers. There’s no evidence that this shift came about as a result of end-consumer demand, nor that the shift to plastic was delayed by a lack of demand elasticity; rather, it was primarily a function of these companies’ ability to realize bigger profits on sales in plastic containers (not least because they own their own plastic packaging production facilities).
And while it’s not at issue in Ball/Rexam because spirits are rarely sold in aluminum packaging, the FTC’s conclusion in Ardagh that
[n]on-glass packaging materials, such as plastic containers, are not in this relevant product market because not enough spirits customers would switch to non-glass packaging materials to make a SSNIP in glass containers to spirits customers unprofitable for a hypothetical monopolist
is highly suspect — which suggests the Commission may have gotten it wrong in other ways, too. For example, as one report notes:
But the most noteworthy inroads against glass have been made in distilled liquor. In terms of total units, plastic containers, almost all of them polyethylene terephthalate (PET), have surpassed glass and now hold a 56% share, which is projected to rise to 69% by 2017.
True, most of this must be tiny-volume airplane bottles, but by no means all of it is, and it’s clear that the cost advantages of plastic are driving a shift in distilled liquor packaging, as well. Some high-end brands are even moving to plastic. Whatever resistance may have existed in the past because of glass’s “image” (and this is true for beer, too) is breaking down: Don’t forget that even high-quality wines are now often sold with screw-tops or even in boxes — something that was once thought impossible.
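The SSNIP test invoked in the FTC’s Ardagh finding is commonly operationalized through critical loss analysis: a small but significant non-transitory increase in price is unprofitable for the hypothetical monopolist if the share of sales lost exceeds the critical loss. A hedged sketch, with illustrative numbers that are not drawn from the Ardagh record:

```python
def critical_loss(ssnip_pct: float, margin_pct: float) -> float:
    """Percent of unit sales a hypothetical monopolist can lose before a
    price increase of ssnip_pct becomes unprofitable, given a percentage
    contribution margin (standard critical-loss formula: X / (X + M))."""
    return 100.0 * ssnip_pct / (ssnip_pct + margin_pct)

# Illustrative only: a 5% SSNIP with a 40% contribution margin.
# If more than ~11% of glass-container customers would switch to cans or
# plastic in response, the SSNIP is unprofitable and the glass-only
# candidate market is too narrow.
print(round(critical_loss(5, 40), 1))  # 11.1
```

Given how readily spirits (and beer) buyers appear to shift toward plastic and cans on cost grounds, even a modest diversion rate could exceed the critical loss, which is why the glass-only market definition is suspect.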
The overall point is that the beverage packaging market faced by can makers like Ball and Rexam is remarkably complex, and, crucially, the presence of powerful, vertically integrated customers means that past or current demand by end-users is a poor indicator of what the market will look like in the future as input costs and other considerations faced by these companies shift. Right now, for example, over 50% of the world’s soda is packaged in plastic bottles, and this margin is set to increase: The global plastic packaging market (not limited to just beverages) is expected to grow at a CAGR of 5.2% between 2014 and 2020, while aluminum packaging is expected to grow at just 2.9%.
A note on efficiencies
As noted above, the proposed Ball/Rexam merger also holds out the promise of substantial efficiencies (estimated at $300 million by the merging parties, due mainly to decreased transportation costs). There is a risk, however, that the FTC may effectively disregard those efficiencies, as it did in Ardagh (and in St. Luke’s before it), by saddling them with a higher burden of proof than it requires of its own prima facie claims. If the goal of antitrust law is to promote consumer welfare, competition authorities can’t ignore efficiencies in merger analysis.
In his Ardagh dissent, Commissioner Wright noted that:
Even when the same burden of proof is applied to anticompetitive effects and efficiencies, of course, reasonable minds can and often do differ when identifying and quantifying cognizable efficiencies as appears to have occurred in this case. My own analysis of cognizable efficiencies in this matter indicates they are significant. In my view, a critical issue highlighted by this case is whether, when, and to what extent the Commission will credit efficiencies generally, as well as whether the burden faced by the parties in establishing that proffered efficiencies are cognizable under the Merger Guidelines is higher than the burden of proof facing the agencies in establishing anticompetitive effects. After reviewing the record evidence on both anticompetitive effects and efficiencies in this case, my own view is that it would be impossible to come to the conclusions about each set forth in the Complaint and by the Commission — and particularly the conclusion that cognizable efficiencies are nearly zero — without applying asymmetric burdens.
The Commission shouldn’t make the same mistake here. In fact, here, where can manufacturers are squeezed between powerful companies both upstream (e.g., Alcoa) and downstream (e.g., AB InBev), and where transportation costs limit the opportunities for expanding the customer base of any particular plant, the ability to capitalize on economies of scale and geographic scope is essential to independent manufacturers’ abilities to efficiently meet rising demand.
Read our complete assessment of the merger’s effect here.
The Religious Freedom Restoration Act (RFRA) subjects government-imposed burdens on religious exercise to strict scrutiny. In particular, the Act provides that “[g]overnment shall not substantially burden a person’s exercise of religion even if the burden results from a rule of general applicability” unless the government can establish that doing so is the least restrictive means of furthering a “compelling government interest.”
So suppose a for-profit corporation’s stock is owned entirely by evangelical Christians with deeply held religious objections to abortion. May our federal government force the company to provide abortifacients to its employees? That’s the central issue in Sebelius v. Hobby Lobby Stores, which the Supreme Court will soon decide. As is so often the case, resolution of the issue turns on a seemingly mundane matter: Is a for-profit corporation a “person” for purposes of RFRA?
In an amicus brief filed in the case, a group of forty-four corporate and criminal law professors argued that treating corporations as RFRA persons would contradict basic principles of corporate law. Specifically, they asserted that corporations are distinct legal entities from their shareholders, who enjoy limited liability behind a corporate veil and cannot infect the corporation with their own personal religious views. The very nature of a corporation, the scholars argued, precludes shareholders from exercising their religion in corporate form. Thus, for-profit corporations can’t be “persons” for purposes of RFRA.
In what amounts to an epic takedown of the law professor amici, William & Mary law professors Alan Meese and Nathan Oman have published an article explaining why for-profit corporations are, in fact, RFRA persons. Their piece in the Harvard Law Review Forum responds methodically to the key points made by the law professor amici and to a few other arguments against granting corporations free exercise rights.
Among the arguments that Meese and Oman ably rebut are:
Religious freedom applies only to natural persons.
Corporations are simply instrumentalities by which people act in the world, Meese and Oman observe. Indeed, they are nothing more than nexuses of contracts, provided in standard form but highly tailorable by those utilizing them. “When individuals act religiously using corporations they are engaged in religious exercise. When we regulate corporations, we in fact burden the individuals who use the corporate form to pursue their goals.”
Given the essence of a corporation, which separates ownership and control, for-profit corporations can’t exercise religion in accordance with the views of their stockholders.
This claim is simply false. First, it is possible — pretty easy, in fact — to unite ownership and control in a corporation. Business planners regularly do so using shareholder agreements, and many states, including Delaware, explicitly allow for shareholder management of close corporations. Second, scads of for-profit corporations engage in religiously motivated behavior — i.e., religious exercise. Meese and Oman provide a nice litany of examples (with citations omitted here):
A kosher supermarket owned by Orthodox Jews challenged Massachusetts’ Sunday closing laws in 1960. For seventy years, the Ukrops Supermarket chain in Virginia closed on Sundays, declined to sell alcohol, and encouraged employees to worship weekly. A small grocery store in Minneapolis with a Muslim owner prepares halal meat and avoids taking loans that require payment of interest prohibited by Islamic law. Chick-fil-A, whose mission statement promises to “glorify God,” is closed on Sundays. A deli that complied with the kosher standards of its Conservative Jewish owners challenged the Orthodox definition of kosher found in New York’s kosher food law, echoing a previous challenge by a different corporation of a similar New Jersey law. Tyson Foods employs more than 120 chaplains as part of its effort to maintain a “faith-friendly” culture. New York City is home to many Kosher supermarkets that close two hours before sundown on Friday and do not reopen until Sunday. A fast-food chain prints citations of biblical verses on its packaging and cups. A Jewish entrepreneur in Brooklyn runs a gas station and coffee shop that serves only Kosher food. Hobby Lobby closes on Sundays and plays Christian music in its stores. The company provides employees with free access to chaplains, spiritual counseling, and religiously themed financial advice. Moreover, the company does not sell shot glasses, refuses to allow its trucks to “backhaul” beer, and lost $3.3 million after declining to lease an empty building to a liquor store.
As these examples illustrate, the assertion by lower courts that “for-profit, secular corporations cannot engage in religious exercise” is just empirically false.
Allowing for-profit corporations to have religious beliefs would create intracorporate conflicts that would reduce the social value of the corporate form of business.
The corporate and criminal law professor amici described a parade of horribles that would occur if corporations were deemed RFRA persons. They insisted, for example, that RFRA protection would inject religion into a corporation in a way that “could make the raising of capital more challenging, recruitment of employees more difficult, and entrepreneurial energy less likely to flourish.” In addition, they said, RFRA protection “would invite contentious shareholder meetings, disruptive proxy contests, and expensive litigation regarding whether the corporations should adopt a religion and, if so, which one.”
But actual experience suggests there’s no reason to worry about such speculative harms. As Meese and Oman observe, we’ve had lots of experience with this sort of thing: Federal and state laws already allow for-profit corporations to decline to perform or pay for certain medical procedures if they have religious or moral objections. From the Supreme Court’s 1963 Sherbert decision to its 1990 Smith decision, strict scrutiny applied to governmental infringements on corporations’ religious exercise. A number of states have enacted their own versions of RFRA, most of which apply to corporations. Thus, “[f]or over half a century, … there has been no per se bar to free exercise claims by for-profit corporations, and the parade of horribles envisioned by the [law professor amici] has simply not materialized.” Indeed, “the scholars do not cite a single example of a corporate governance dispute connected to [corporate] decisions [related to religious exercise].”
Permitting for-profit corporations to claim protection under RFRA will lead to all sorts of false claims of religious belief in an attempt to evade government regulation.
The law professor amici suggest that affording RFRA protection to for-profit corporations may allow such companies to evade regulatory requirements by manufacturing a religious identity. They argue that “[c]ompanies suffering a competitive disadvantage [because of a government regulation] will simply claim a ‘Road to Damascus’ conversion. A company will adopt a board resolution asserting a religious belief inconsistent with whatever regulation they find obnoxious . . . .”
As Meese and Oman explain, however, this problem is not unique to for-profit corporations. Natural persons may also assert insincere religious claims, and courts may need to assess sincerity to determine if free exercise rights are being violated. The law professor amici contend that it would be unprecedented for courts to assess whether religious beliefs are asserted in “good faith.” But the Supreme Court decision the amici cite in support of that proposition, Meese and Oman note, held only that courts lack competence to evaluate the truth of theological assertions or the accuracy of a particular litigant’s interpretation of his faith. “This task is entirely separate … from the question of whether a litigant’s asserted religious beliefs are sincerely held. Courts applying RFRA have not infrequently evaluated such sincerity.”
In addition to rebutting the foregoing arguments (and several others) against treating for-profit corporations as RFRA persons, Meese and Oman set forth a convincing affirmative argument based on the plain text of the statute and the Dictionary Act. I’ll let you read that one on your own.
I’ll also point interested readers to Steve Bainbridge’s fantastic work on this issue. Here is his critique of the corporate and criminal law professors’ amicus brief. Here is his proposal for using the corporate law doctrine of reverse veil piercing to assess a for-profit corporation’s religious beliefs.