
Over the past few weeks, Truth on the Market has had several posts related to harm reduction policies, with a focus on tobacco, e-cigarettes, and other vapor products:

Harm reduction policies are used to manage a wide range of behaviors including recreational drug use and sexual activity. Needle-exchange programs reduce the spread of infectious diseases among users of heroin and other injected drugs. Opioid replacement therapy substitutes illegal opioids, such as heroin, with a longer-acting but less euphoric opioid. Safer sex education and condom distribution in schools are designed to reduce teenage pregnancy and the spread of sexually transmitted infections. None of these harm reduction policies stops the risky behavior, nor do they eliminate the potential for harm. Nevertheless, the policies are intended to reduce the expected harm.
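One simple way to formalize the idea behind these policies (our framing, not the authors'): expected harm is the product of how often a risky behavior occurs and how harmful each occurrence is, and harm reduction works on the second factor.

```latex
% Expected harm from a risky behavior (illustrative decomposition),
% where $f$ is the frequency of the behavior and $h$ is the average
% harm per occurrence:
\[ E[\text{harm}] = f \times h \]
% Harm reduction accepts $f > 0$ and works to shrink $h$ (clean
% needles, condoms, longer-acting replacement opioids), so expected
% harm falls even though the behavior itself continues.
```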

Carrie Wade, Director of Harm Reduction Policy and Senior Fellow at the R Street Institute, draws a parallel between opiate harm reduction strategies and potential policies related to tobacco harm reduction. She notes that with successful one-year quit rates hovering around 10 percent, harm reduction strategies offer ways to transition more smokers off the most dangerous nicotine delivery device: the combustible cigarette.

Most of the harm from smoking is caused by the inhalation of toxicants released through the combustion of tobacco. Non-combustible nicotine delivery systems, such as e-cigarettes and smokeless tobacco, are generally considered to be significantly less harmful than smoking cigarettes. The UK government agency Public Health England has concluded that e-cigarettes are around 95 percent less harmful than combustible cigarettes.

In the New England Journal of Medicine, Fairchild et al. (2018) identify a continuum of potential policies regarding the regulation of vapor products, such as e-cigarettes, shown in the figure below. They note that the most restrictive policies would effectively eliminate e-cigarettes as a viable alternative to smoking, while the most permissive may promote e-cigarette usage and potentially encourage young people—who would not do so otherwise—to take up e-cigarettes. In between these extremes are policies that may discourage young people from initiating use of e-cigarettes, while encouraging current smokers to switch to less harmful vapor products.

[Figure: Fairchild et al. (2018), the continuum of policy options for e-cigarettes, from most restrictive to most permissive.]

Eric Fruits, chief economist at the International Center for Law & Economics, notes in his blog post that more than 20 countries have introduced taxes on e-cigarettes and other vapor products. In the United States, several states and local jurisdictions have enacted e-cigarette taxes. His post is based on a recently released ICLE white paper entitled Vapor products, harm reduction, and taxation: Principles, evidence and a research agenda.

Under a harm reduction principle, Fruits argues that e-cigarettes and other vapor products should face no taxes or low taxes relative to conventional cigarettes, to guide consumers toward a safer alternative to smoking.

In contrast to harm reduction principles, the precautionary principle as well as principles of tax equity point toward the taxation of vapor products at rates similar to conventional cigarettes.

On the one hand, some policymakers claim that the objective of taxing nicotine products is to reduce nicotine consumption. On the other hand, Dan Mitchell, co-founder of the Center for Freedom and Prosperity, points out that some politicians are concerned that they will lose tax revenue if a substantial number of smokers switch to options such as vaping.

Often missed in the policy discussion is the effect of fiscal policies on innovation and the development and commercialization of harm-reducing products. Also, often missed are the consequences for current consumers of nicotine products, including smokers seeking to quit using harmful conventional cigarettes.

Policy decisions regarding taxation of vapor products should take into account both long-term fiscal effects and broader economic and welfare effects. These effects might (or might not) suggest very different tax policies to those that have been enacted or are under consideration. These considerations, however, are frustrated by unreliable and wildly divergent empirical estimates of consumer demand in the face of changing prices and/or rising taxes.

Along the lines of uncertain—if not surprising—impacts, Fritz Laux, professor of economics at Northeastern State University, provides an explanation of why smoke-free air laws have not been found to adversely affect revenues or employment in the restaurant and hospitality industries.

He argues that social norms regarding smoking in restaurants have changed to the point that many smokers themselves support bans on smoking in restaurants. In this way, he hypothesizes, smoke-free air laws do not impose a significant constraint on consumer behavior or business activity. We might likewise infer, by extension, that policies which do not prohibit vaping in public spaces (leaving such decisions to the discretion of business owners and managers) could encourage switching by people who otherwise would have to exit buildings in order to vape or smoke—without adversely affecting businesses.

Principles of harm reduction recognize that every policy proposal has uncertain outcomes as well as potential spillovers and unforeseen consequences. With such high risks and costs associated with cigarettes and other combustible products, taxes and regulations must be developed in an environment of uncertainty and with an eye toward a net reduction in harm, rather than an unattainable goal of zero harm or an overt pursuit of tax revenues.


Dan Mitchell is the co-founder of the Center for Freedom and Prosperity.

In an ideal world, the discussion and debate about how (or if) to tax e-cigarettes, heat-not-burn, and other tobacco harm-reduction products would be guided by science. Policy makers would confer with experts, analyze evidence, and craft prudent and sensible laws and regulations.

In the real world, however, politicians are guided by other factors.

There are two things to understand, both of which are based on my conversations with policy staff in Washington and elsewhere.

First, this is a battle over tax revenue. Politicians are concerned that they will lose tax revenue if a substantial number of smokers switch to options such as vaping.

This is very much akin to the concern that electric cars and fuel-efficient cars will lead to a loss of money from excise taxes on gasoline.

In the case of fuel taxes, politicians are anxiously looking at other sources of revenue, such as miles-driven levies. Their main goal is to maintain – or preferably increase – the amount of money that is diverted to the redistributive state so that politicians can reward various interest groups.

In the case of tobacco, a reduction in the number of smokers (or the tax-driven propensity of smokers to seek out black-market cigarettes) is leading politicians to concoct new schemes for taxing e-cigarettes and related non-combustible products.

Second, this is a quasi-ideological fight. Not about capitalism versus socialism, or big government versus small government. It’s basically a fight over paternalism, or a battle over goals.

For all intents and purposes, the question is whether lawmakers should seek to discourage both tobacco use and vaping because both carry some risk (and perhaps because both are considered vices of the lower classes), or whether they should welcome vaping because it leads to harm reduction as smokers shift to a dramatically safer way of consuming nicotine.

In statistics, researchers are trained to recognize the dangers of two types of mistakes: Type I errors (also known as “false positives”) and Type II errors (also known as “false negatives”).
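For readers who want the analogy made concrete, here is a minimal simulation of the two error types in a textbook hypothesis test (a generic statistics illustration; nothing here is from Mitchell's post):

```python
import random

def test_rejects_fair(heads: int, n: int = 100, threshold: int = 60) -> bool:
    """Reject the null hypothesis 'the coin is fair' if the heads count is extreme."""
    return heads >= threshold or heads <= n - threshold

def rejection_rate(p_heads: float, trials: int = 10_000, n: int = 100) -> float:
    """Fraction of simulated experiments in which the test rejects fairness."""
    rejections = 0
    for _ in range(trials):
        heads = sum(random.random() < p_heads for _ in range(n))
        if test_rejects_fair(heads, n):
            rejections += 1
    return rejections / trials

# Type I error ("false positive"): the coin IS fair, but we reject fairness.
print("Type I error rate: ", rejection_rate(p_heads=0.5))
# Type II error ("false negative"): the coin is biased, but we fail to detect it.
print("Type II error rate:", 1 - rejection_rate(p_heads=0.55))
```

Tightening the test to avoid one error type inflates the other; the policy analogue is the tradeoff described below.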

How does this relate to smoking, vaping, and taxes?

Simply stated, both sides of the fight are focused on a key goal and secondary issues are pushed aside. In other words, tradeoffs are being ignored.

The advocates of high taxes on e-cigarettes and other non-combustible products are fixated on the possibility that vaping will entice some people into the market. Maybe vaping will even act as a gateway to smoking. So, they want high taxes on vaping, akin to high taxes on tobacco, even though the net result is that many smokers stick with cigarettes instead of switching to less harmful products.

On the other side of the debate are those focused on overall public health. They see emerging non-combustible products as very effective ways of promoting harm reduction. Is it possible that e-cigarettes may be tempting to some people who otherwise would never try tobacco? Yes, that’s possible, but it’s easily offset by the very large benefits that accrue as smokers become vapers.

For all intents and purposes, the fight over the taxation of vaping is similar to other ideological fights.

The old joke in Washington is that a conservative is someone who will jail 99 innocent people in order to put one crook in prison and a liberal is someone who will free 99 guilty people to prevent one innocent person from being convicted (or, if you prefer, a conservative will deny 99 poor people to catch one welfare fraudster and a liberal will line the pockets of 99 fraudsters to make sure one genuinely poor person gets money).

The vaping fight hasn’t quite reached this stage, but the battle lines are very familiar. At some point in the future, observers may joke that one side is willing to accept more smoking if one teenager forgoes vaping while the other side is willing to have lots of vapers if it means one less smoker.

Having explained the real drivers of this debate, I’ll close by injecting my two cents and explaining why the paternalists are wrong. But rather than focus on libertarian-type arguments about personal liberty, I’ll rely on three points, all of which are based on conventional cost-benefit analysis and the sensible approach to excise taxation.

  • First, tax policy should focus on incentivizing a switch, not on punishing those who choose a less harmful product. The goal should be harm reduction rather than revenue maximization.
  • Second, low tax burdens also translate into lower long-run spending burdens because a shift to vaping means a reduction in overall healthcare costs related to smoking cigarettes.
  • Third, it makes no sense to impose punitive “sin taxes” on behaviors that are much less, well, sinful. There’s a big difference in the health and fiscal impact of cigarettes compared to the alternatives.

One final point is that this issue has a reverse-class-warfare component. Anti-smoking activists generally have succeeded in stigmatizing cigarette consumption, and smokers now come disproportionately from lower-income communities. For better (harm reduction) or worse (elitism), low-income smokers are generally treated with disdain for their lifestyle choices.

It is not an explicit policy, but that disdain now seems to extend to any form of nicotine consumption, even though the health effects of vaping are vastly lower.

It is a truth universally acknowledged that unwanted telephone calls are among the most reviled annoyances known to man. But this does not mean that laws intended to prohibit these calls are themselves necessarily good. Indeed, in one sense we know intuitively that they are not good. These laws have proven wholly ineffective at curtailing the robocall menace — it is hard to call any law as ineffective as these “good”. And these laws can be bad in another sense: because they fail to curtail undesirable speech but may burden desirable speech, they raise potentially serious First Amendment concerns.

I presented my exploration of these concerns, coming out soon in the Brooklyn Law Review, last month at TPRC. The discussion, which I get into below, focuses on the Telephone Consumer Protection Act (TCPA), the main law that we have to fight against robocalls. It considers both narrow First Amendment concerns raised by the TCPA as well as broader concerns about the Act in the modern technological setting.

Telemarketing Sucks

It is hard to imagine that there is a need to explain how much of a pain telemarketing is. Indeed, it is rare that I give a talk on the subject without receiving a call during the talk. At the last FCC Open Meeting, after the Commission voted on a pair of enforcement actions taken against telemarketers, Commissioner Rosenworcel picked up her cell phone to share that she had received a robocall during the vote. Robocalls are the most complained-about issue at both the FCC and FTC. Today, well over 4 billion robocalls are made every month. It’s estimated that half of all phone calls made in 2019 will be scams (most of which start with a robocall).

It’s worth noting that things were not always this way. Unsolicited and unwanted phone calls have been around for decades — but they have become something altogether different and more problematic in the past 10 years. The origin of telemarketing was the simple extension of traditional marketing to the medium of the telephone. This form of telemarketing was a huge annoyance — but fundamentally it was, or at least was intended to be, a mere extension of legitimate business practices. There was almost always a real business on the other end of the line, trying to advertise real business opportunities.

This changed in the 2000s with the creation of the Do Not Call (DNC) registry. The DNC registry effectively killed the “legitimate” telemarketing business. Companies faced significant penalties if they called individuals on the DNC registry, and most telemarketing firms tied the registry into their calling systems so that numbers on it could not be called. And, unsurprisingly, an overwhelming majority of Americans put their phone numbers on the registry. As a result the business proposition behind telemarketing quickly dried up. There simply weren’t enough individuals not on the DNC list to justify the risk of accidentally calling individuals who were on the list.

Of course, anyone with a telephone today knows that the creation of the DNC registry did not eliminate robocalls. But it did change the nature of the calls. The calls we receive today are, overwhelmingly, not coming from real businesses trying to market real services or products. Rather, they’re coming from hucksters, fraudsters, and scammers — from Rachels from Cardholder Services and others who are looking for opportunities to defraud. Sometimes they may use these calls to find unsophisticated consumers who can be conned out of credit card information. Other times they are engaged in any number of increasingly sophisticated scams designed to trick consumers into giving up valuable information.

There is, however, a more important, more basic difference between pre-DNC calls and the ones we receive today. Back in the age of legitimate businesses trying to use the telephone for marketing, the relationship mattered. Those businesses couldn’t engage in business anonymously. But today’s robocallers are scam artists. They need no identity to pull off their scams. Indeed, a lack of identity can be advantageous to them. And this means that legal tools such as the DNC list or the TCPA (which I turn to below), which are premised on the ability to take legal action against bad actors who can be identified and who have assets that can be attached through legal proceedings, are wholly ineffective against these newfangled robocallers.

The TCPA Sucks

The TCPA is the first law that was adopted to fight unwanted phone calls. Adopted in 1992, it made it illegal to call people using autodialers or prerecorded messages without prior express consent. (The details have more nuance than this, but that’s the gist.) It also created a private right of action with significant statutory damages of up to $1,500 per call.

Importantly, the justification for the TCPA wasn’t merely “telemarketing sucks.” Had it been, the TCPA would have had a serious problem: telemarketing, although exceptionally disliked, is speech, which means that it is protected by the First Amendment. Rather, the TCPA was enacted primarily upon two grounds. First, telemarketers were invading the privacy of individuals’ homes. The First Amendment is license to speak; it is not license to break into someone’s home and force them to listen. And second, telemarketing calls could impose significant real costs on the recipients of calls. At the time, receiving a telemarketing call could, for instance, cost cellular customers several dollars; and due to the primitive technologies used for autodialing, these calls would regularly tie up residential and commercial phone lines for extended periods of time, interfere with emergency calls, and fill up answering machine tapes.

It is no secret that the TCPA was not particularly successful. As the technologies for making robocalls improved throughout the 1990s and their costs went down, firms only increased their use of them. We were still in a world of analog telephones, and Caller ID was still a new and not universally available technology, which made it exceptionally difficult to bring suits under the TCPA. Perhaps more important, while robocalls were annoying, they were not the omnipresent fact of life that they are today: cell phones were still rare; most of these calls came to landline phones during dinner, where they were simply ignored.

As discussed above, the first generation of robocallers and telemarketers quickly died off following adoption of the DNC registry.

And the TCPA is proving no more effective during this second generation of robocallers. This is unsurprising. Callers who are willing to blithely ignore the DNC registry are just as willing to blithely ignore the TCPA. Every couple of months the FCC or FTC announces a large fine — millions or tens of millions of dollars — against a telemarketing firm that was responsible for making millions or tens of millions or even hundreds of millions of calls over a multi-month period. At a time when there are over 4 billion of these calls made every month, such enforcement actions are a drop in the ocean.

Which brings us to the First Amendment and the TCPA, presented in very cursory form here (see the paper for more detailed analysis). First, it must be acknowledged that the TCPA was challenged several times following its adoption and was consistently upheld by courts applying intermediate scrutiny to it, on the basis that it was regulation of commercial speech (which traditionally has been reviewed under that more permissive standard). However, recent Supreme Court opinions, most notably that in Reed v. Town of Gilbert, suggest that even the commercial speech at issue in the TCPA may need to be subject to the more probing review of strict scrutiny — a conclusion that several lower courts have reached.

But even putting aside the question of whether the TCPA should be reviewed under strict or intermediate scrutiny, a contemporary facial challenge to the TCPA on First Amendment grounds would likely succeed (no matter which standard of review was applied). Generally, courts are very reluctant to allow regulation of speech that is either under- or over-inclusive — and the TCPA is substantially both. We know that it is under-inclusive because robocalls have been a problem for a long time and the problem is only getting worse. And, at the same time, there are myriad stories of well-meaning companies getting caught up in the TCPA’s web of strict liability for trying to do things that clearly should not be deemed illegal: sports venues sending confirmation texts when spectators participate in text-based games on the jumbotron; community banks getting sued by their own members for trying to send out important customer information; pharmacies reminding patients to get flu shots. There is a discussion to be had about how and whether calls like these should be permitted — but they are unquestionably different in kind from the sort of telemarketing robocalls animating the TCPA (and general public outrage).

In other words, the TCPA prohibits some amount of desirable, constitutionally protected speech in a vain and wholly ineffective effort to curtail robocalls. That is a recipe for any law to be deemed an unconstitutional restriction on speech under the First Amendment.

Good News: Things Don’t Need to Suck!

But there is another, more interesting, reason that the TCPA would likely not survive a First Amendment challenge today: there are lots of alternative approaches to addressing the problem of robocalls. Interestingly, the FCC itself has the ability to direct implementation of some of these approaches. And, more important, the FCC itself is the greatest impediment to some of them being implemented. In the language of the First Amendment, restrictions on speech need to be narrowly tailored. It is hard to say that a law is narrowly tailored when the government itself controls the ability to implement more tailored approaches to addressing a speech-related problem. And it is untenable to say that the government can restrict speech to address a problem that is, in fact, the result of the government’s own design.

In particular, the FCC regulates a great deal of how the telephone network operates, including the protocols that carriers use for interconnection and call completion. Large parts of the telephone network are built upon protocols first developed in the era of analog phones and telephone monopolies. And the FCC itself has long prohibited carriers from blocking known-scam calls (on the ground that, as common carriers, their principal duty is to carry telephone traffic without regard to the content of the calls).

Fortunately, some of these rules are starting to change. The Commission is working to implement rules that will give carriers and their customers greater ability to block calls. And we are tantalizingly close to transitioning the telephone network away from its traditional unauthenticated architecture to one that uses a strong cryptographic infrastructure to provide fully authenticated calls (in other words, Caller ID that actually works).
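The core idea behind authenticated calls is ordinary public-key signing. The sketch below is a toy illustration in Python (the real framework, STIR/SHAKEN, passes signed tokens through SIP headers and relies on certificate authorities; the function names and message format here are invented for illustration):

```python
# Toy sketch of cryptographically attested Caller ID, assuming the
# 'cryptography' package (pip install cryptography).
import json
import time

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The originating carrier's long-term signing key; the public half
# would be published so other carriers can verify attestations.
carrier_key = Ed25519PrivateKey.generate()
carrier_pub = carrier_key.public_key()

def attest_call(orig: str, dest: str) -> tuple[bytes, bytes]:
    """Originating carrier signs the call metadata at call setup."""
    payload = json.dumps(
        {"orig": orig, "dest": dest, "iat": int(time.time())}
    ).encode()
    return payload, carrier_key.sign(payload)

def verify_call(payload: bytes, signature: bytes) -> bool:
    """Terminating carrier checks the attestation before completing the call."""
    try:
        carrier_pub.verify(signature, payload)
        return True
    except InvalidSignature:
        return False

payload, sig = attest_call("+12025550123", "+13125550456")
print(verify_call(payload, sig))     # True: the Caller ID is attested
print(verify_call(b"spoofed", sig))  # False: forged metadata is rejected
```

A robocaller can still place calls under such a scheme, but it can no longer cheaply forge an identity that a terminating carrier will trust, which is what makes blocking and enforcement tractable.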

The irony of these efforts is that they demonstrate the unconstitutionality of the TCPA: today there are better, less burdensome, more effective ways to deal with the problems of uncouth telemarketers and robocalls. At the time the TCPA was adopted, these approaches were technologically infeasible, so its burdens upon speech were more reasonable. But that cannot be said today. The goal of the FCC and legislators (both of whom are looking to update the TCPA and its implementation) should be less about improving the TCPA and more about improving our telecommunications architecture so that we have less need for cudgel-like laws in the mold of the TCPA.


Fritz L. Laux is a Professor of Economics at Northeastern State University in Tahlequah, Oklahoma.

The puzzling lack of economic impacts

One focus in the analysis of smoke-free air (SFA) laws has been on measuring the impact smoking bans have on the restaurant and hospitality industries. The overwhelming or “consensus” result of this research is that bans impose no adverse impact on industry revenues and employment levels (Scollo et al., 2003; Scollo and Lal, 2008; Hahn, 2010; CDC Fact Sheet, 2014).

What’s puzzling about this literature is that the “no-statistical-significance” result is presented as a neutral, “this takes the issue off the table” result. I would suggest that the robustness of this finding should instead be presented as “shocking” and highly significant (if not “statistically significant”).

The economic model of profit-maximizing firms would indicate that any restaurant or hospitality venue that could benefit from a smoking ban would already have implemented such a ban. Thus, the imposition of smoking bans should never help, and can only hurt, such industries. Yet the empirical finding is that bans tend to have no impact on, and may even slightly help, the average restaurant. This should be viewed, if not highlighted, as surprising.
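In symbols (our formalization of the argument, not notation from the cited literature): let π_S be a venue's profit with smoking allowed and π_B its profit under a ban.

```latex
% A profit-maximizing venue bans smoking voluntarily iff the ban pays:
\[ \text{voluntary ban} \iff \pi_B \ge \pi_S \]
% A mandated ban therefore binds only on venues with $\pi_B < \pi_S$,
% so the model predicts a weakly negative average industry impact:
\[ E[\pi_B - \pi_S \mid \pi_B < \pi_S] < 0 \]
% A measured impact of zero (or better) suggests venues systematically
% misestimated $\pi_B$, for example because the law itself shifts
% social norms and hence demand.
```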

Clearly, we understand why the result might be presented with the “no adverse economic impact” headline. Restaurant and hospitality industry groups are important constituencies that can influence policy, and estimates of the business impacts of SFA laws can motivate or placate policy activists. If the laws have, on average, no adverse impact on the members of a local restaurant association, then that restaurant association should have no incentive to oppose SFA ordinances.

My suggestion, however, is that we should give more attention to the strangeness of this result and to investigating how it can be occurring. Where is the market failure that prevented more restaurateurs from implementing SFA policies of their own accord, without need for SFA ordinances? Can efforts to bring more publicity to these market failures help restaurateurs and the public better understand why SFA policies can make good policy?

Sources of market failure

The obvious (if not tautological) explanation for this puzzling result is that restaurateurs have somehow been consistently misestimating the business impact of SFA. There are several possible reasons why this might happen, and the most likely of these, it seems, is that social norms play a role in defining how restaurant employees and customers respond to a ban (Leibenstein, 1950). Before imposition of a ban, if the norm is to allow smoking, then politeness dictates that we will expect restaurants to allow smoking. After a ban (and the resulting change in norms), just as nobody expects to smoke at a fitness club, smoking customers experience reduced desire or expectation of smoking in restaurants. Thus, if the ban changes the norm in ways that restaurateurs do not anticipate, we see empirical results in which the industry impact is positive or zero instead of negative.

Borland et al. (2006), with coauthors from the International Tobacco Control project, provide evidence of just this kind of effect. In a survey of current smokers, they found that among U.S. smokers who reported living in jurisdictions where restaurant smoking was not banned, only 17.5% supported bans on restaurant smoking. Among smokers who reported total bans on restaurant smoking in their jurisdictions, 65.5% supported such bans. Not surprisingly, it seems that expectations and preferences are affected by changes in norms.

With over three-fourths of the U.S. population now living in jurisdictions covered by 100% smoke-free restaurant laws, such shifts in norms within the U.S. are well underway. However, in communities where restaurant smoking is still commonly accepted, complaining to a restaurant manager about another customer’s smoking might seem a bit strange and confrontational. In these situations, patrons and employees may also not be as aware of the health consequences of secondhand smoke. After the publicity of a smoke-free air ordinance heightens awareness, and after having experienced eating in smoke-free restaurants, the value patrons place on smoke-free air may go up. Similarly, restaurant employees may develop stronger preferences for working in smoke-free establishments (Tang et al., 2004).

Although this argument seems less convincing (given the large percentage of restaurants that did go smoke-free well in advance of SFA law implementation), another possible explanation for how restaurateurs could have so consistently misestimated the business impact of smoke-free air policies is that they may have been influenced by incorrect or biased information. From the 1980s through the early 2000s, restaurant managers would have received lots of communication from various state and national industry associations arguing either that smoking restrictions would hurt business or that improved ventilation, rather than going smoke-free, would be the correct industry response. As can be seen in online archives of tobacco industry documents, the Tobacco Institute was actively working with hospitality industry associations to promote such an “accommodation strategy” (via improved ventilation and smoking sections) for restaurants during the years when most smoke-free air legislation was passed (Dearlove et al., 2002). This industry-funded analysis, as intended, likely had some influence on the decisions made by restaurateurs.

Implications

From those who oppose SFA laws, the primary argument has been that, if bans do not hurt the restaurant and hospitality industries, why do they need to be imposed on these industries? Would not any restaurants and bars that could benefit from smoking bans have already implemented such bans of their own accord? My suggestion is that, in any advocacy for SFA, it may be appropriate to try to answer these objections more directly. Using research like the Borland et al. (2006) article, we can suggest why restaurateurs who would benefit from SFA implementation don’t implement SFA policies of their own accord. Then, after having offered theoretical explanations, we can present our empirical analyses of the economic impact on the restaurant and hospitality industries with more credibility. The idea is that, just as good empirical work gives credence to theory, intuitive theoretical explanations give credence to empirical results.

Carrie Wade, Ph.D., MPH is the Director of Harm Reduction Policy and Senior Fellow at the R Street Institute.

Abstinence approaches work exceedingly well on an individual level but continue to fail when applied to populations. We can see this in several areas: teen pregnancy; continued drug use regardless of severe criminal penalties; and high smoking rates in vulnerable populations, despite targeted efforts to prevent youth and adult uptake.

The good news is that abstinence-oriented prevention strategies do seem to have a positive effect on smoking. Overall, teen use has steadily declined since 1996. This may be attributed to an increase in educational efforts to prevent uptake, stiff penalties for retailers who fail to verify legal age of purchase, the increased cost of cigarettes, and a myriad of other interventions.

Unfortunately, many are left behind. Populations with lower levels of educational attainment, African Americans and, ironically, those with less disposable income have smoking rates two to three times those of the general population. In light of this, how can we help people for whom the abstinence-only message has failed? Harm reduction strategies can have a positive effect on the quality of life of smokers who cannot or do not wish to quit.

Why harm reduction?

Harm reduction approaches recognize that reduction in risky behavior is one possible means to address public health goals. They take a pragmatic approach to the consequences of risk behaviors—focusing on short-term, attainable goals rather than long-term ideals—and provide options beyond abstinence to decrease harm relative to the riskier behavior.

In economic terms, traditional public health approaches to drug use target supply and demand, which is to say they attempt to decrease the supply of a drug while also reducing the demand for it. But this often leads to more risky behaviors and adverse outcomes. For example, when prescription opioids were restricted, those who were not deterred by such an inconvenience switched to heroin; when heroin became tricky to smuggle, traffickers switched to fentanyl. We might predict the same effects when it comes to cigarettes.

Given this, since we know that risky behaviors such as tobacco, alcohol, and other drug use will continue—and possibly flourish—in many populations, we should instead focus on ways to decrease the supply of the most dangerous methods of use and increase the supply of, and demand for, safer, innovative tools. This is the crux of harm reduction.

Opioid Harm Reduction

Like most innovations, harm reduction strategies for opioid and injection drug users were born out of need. In the 1980s, sterile syringes were certainly not an innovative technology. However, the idea that clean needle distribution could put a quick end to the transmission of the hepatitis B virus in Amsterdam was innovative, and the success of this intervention was noticed worldwide.

Although clean needle distribution was illegal at the time, activists who saw a need for this humanitarian intervention risked jail time and high fines to reduce the risk of infectious disease transmission among injection drug users in New Haven and Boston. Making such programs accessible was not an easy thing to do. Amid fears that dangerous drug use might increase, and the perception that harm reduction programs would tacitly endorse illegal activity, governments and institutions resisted adopting harm reduction strategies as public health interventions.

However, following a noticeable decrease in the incidence of HIV in this population, syringe exchange access expanded across the United States and Europe. At first, clean syringe access programs (SAPs) operated with the consent of the communities they served, but as the idea spread, these programs received financial and logistical support from several health departments. As of 2014, there were over 200 SAPs operating in 33 states and the District of Columbia.

Successes

Time has shown that these approaches are wildly successful in their primary objective and enormously cost-effective. In 2008, Washington, D.C. allocated $650,000 to increase harm reduction services, including syringe access. As of 2011, it was estimated that this investment had averted 120 cases of HIV, saving $44 million, or roughly $68 in averted costs for every dollar spent.

Seven studies conducted by leading scientific and governmental agencies from 1991 through 2001 have also concluded that syringe access programs result in a decrease in HIV transmission without any attendant increase in injection drug use. In addition, SAPs are correlated with increased entry into treatment and detox programs, and do not result in increases in crime in neighborhoods that support these programs.

Tobacco harm reduction

We know that some populations have a higher risk of smoking and of developing and dying from smoking-related diseases. With successful one-year quit rates hovering around 10 percent, harm reduction strategies can offer ways to transition smokers off the most dangerous nicotine delivery device: the combustible cigarette.

In 2008, the World Health Organization developed the MPOWER policy package, aimed at reducing the burden of cigarette smoking worldwide. In their vision statement, the authors explicitly state the goal that “no child or adult is exposed to tobacco smoke.”

Working within an abstinence-only framework, the MPOWER strategies are:

  1. To monitor tobacco use and obtain data on use in youth and adults;
  2. To protect society from second-hand smoke and decrease the availability of places where people are allowed to smoke by enacting and enforcing indoor smoking bans;
  3. To offer assistance in smoking cessation through strengthening health systems and legalization of nicotine replacement therapies (NRTs) and other pharmaceutical interventions where necessary;
  4. To warn the public of the dangers of smoking through public health campaigns, package warnings and counter advertising;
  5. To enact and enforce advertising bans; and
  6. To raise tobacco excise taxes.

These strategies have been shown to reduce the prevalence of tobacco use. People who quit smoking have a greater chance of remaining abstinent if they use NRTs. People exposed to pictorial health warnings are more likely to say they want to quit as a result. Countries with comprehensive advertising bans have a larger decrease in smoking rates compared to those without. Raising taxes has proven consistently to reduce consumption of tobacco products.

But the effects of MPOWER programs are limited. Tobacco and smoking are often deeply ingrained in the culture and identity of communities. Studies repeatedly show that smoking is strongly tied to occupation and education, to smokers’ self-identity, and to the role that tobacco plays in the economy and identity of the community.

As a practical matter, the abstinence approach is also limited by individual governments’ laws. Article 13 of the Framework Convention on Tobacco Control recognizes that constitutional principles or laws may limit the capabilities of governments to implement these policy measures. In the United States, cigarettes are all but protected by the complexity of both the 1998 Master Settlement Agreement and the Family Smoking Prevention and Tobacco Control Act of 2009. This guarantees availability to consumers – ironically increasing the need for reduced-risk nicotine products, such as e-cigarettes, heat-not-burn devices or oral snus, all of which offer an alternative to combustible use for people who either cannot or do not wish to quit smoking.

Several regulatory agencies, including the FDA in the United States and Public Health England in the United Kingdom, recognize that tobacco products exist on a continuum of risk, with combustible products (the most widely used) being the most dangerous and non-combustible products existing on the opposite end of the spectrum. In fact, Public Health England estimates that e-cigarettes are at least 95% safer than combustible products and many toxicological and epidemiological studies support this assertion.

Of course, for tobacco harm reduction to work, people must have an incentive to move away from combustible cigarettes. There are two equally important strategies to convince people to do so. First, public health officials need to acknowledge that e-cigarettes are less risky. Continued mixed messages from government officials and tobacco use prevention organizations confuse people regarding the actual risks from e-cigarettes. Over half of adults in the United States believe that nicotine is the culprit behind smoking-related illnesses – and who can blame them, when our current tobacco control strategies are focused on lowering nicotine concentrations and ridding our world of e-cigarettes?

The second is price. People who cannot or do not wish to quit smoking will never switch to safer alternatives if those alternatives are as expensive as, or more expensive than, cigarettes. Keeping the total cost of reduced-risk products low will encourage people who might not otherwise consider switching to do so. The best available estimates show that e-cigarette demand is much more sensitive to price increases than demand for combustible cigarettes – meaning that price increases do little to dissuade smokers from smoking, but do a great deal to discourage them from vaping as a means to quit or as a safer alternative.

Of course, strategies to prevent smoking or encourage cessation should be a priority for all populations that smoke, but harm-reduction approaches—in particular with respect to smoking—play a vital role in decreasing death and disease among people who engage in such risky behavior. For this reason, they should always be promoted alongside abstinence approaches.

ICLE has released a white paper entitled Vapor products, harm reduction, and taxation: Principles, evidence and a research agenda, authored by ICLE Chief Economist, Eric Fruits.

More than 20 countries have introduced taxation on e-cigarettes and other vapor products. In the United States, several states and local jurisdictions have enacted e-cigarette taxes.

The concept of tobacco harm reduction began in 1976 when Michael Russell, a psychiatrist and lecturer at the Addiction Research Unit of Maudsley Hospital in London, wrote: “People smoke for nicotine but they die from the tar.”  Russell hypothesized that reducing the ratio of tar to nicotine could be the key to safer smoking.

Since then, much of the harm from smoking has been well established as caused almost exclusively by toxicants released through the combustion of tobacco. Public Health England and the American Cancer Society have concluded that non-combustible tobacco products, as well as pure nicotine products, are considerably less harmful than combustible products. Earlier this year, the American Cancer Society shifted its position on e-cigarettes, recommending that individuals who do not quit smoking “… should be encouraged to switch to the least harmful form of tobacco product possible; switching to the exclusive use of e-cigarettes is preferable to continuing to smoke combustible products.”

In contrast, some public health advocates urge a precautionary approach in which the introduction and sale of e-cigarettes be limited or halted until the products are demonstrably safe.

Policymakers face a wide range of strategies regarding the taxation of vapor products. On the one hand, principles of harm reduction suggest vapor products should face no taxes or low taxes relative to conventional cigarettes, to guide consumers toward a safer alternative to smoking. The U.K. House of Commons Science and Technology Committee concludes:

The level of taxation on smoking-related products should directly correspond to the health risks that they present, to encourage less harmful consumption. Applying that logic, e-cigarettes should remain the least-taxed and conventional cigarettes the most, with heat-not-burn products falling between the two.

In contrast, the precautionary principle as well as principles of tax equity point toward the taxation of vapor products at rates similar to conventional cigarettes.

Analysis of tax policy issues is complicated by divergent—and sometimes obscured—intentions of such policies. Some policymakers claim that the objective of taxing nicotine products is to reduce nicotine consumption. Other policymakers indicate the objective is to raise revenues to support government spending. Often missed in the policy discussion is the effect of fiscal policies on innovation and the development and commercialization of harm-reducing products. Also, often missed are the consequences for current consumers of nicotine products, including smokers seeking to quit using harmful conventional cigarettes.

Policy decisions regarding taxation of vapor products should take into account both long-term fiscal effects, as well as broader economic and welfare effects. These effects might (or might not) suggest very different tax policies to those that have been enacted or are under consideration.

Apart from being a significant source of revenue, cigarette taxes have been promoted as “sin” taxes to discourage consumption, either because of externalities caused by smoking (increased costs for third-party health payers and health consequences) or out of paternalism. According to the U.S. Centers for Disease Control and Prevention, smoking-related illness in the U.S. costs more than $300 billion each year, including (1) nearly $170 billion for direct medical care for adults and (2) more than $156 billion in lost productivity, of which $5.6 billion is due to secondhand smoke exposure.

The CDC’s cost estimates raise important questions regarding who bears the burden of smoking related illness. Much of the cost is borne by private insurance, which charges steeper premiums for customers who smoke. In addition, the CDC estimates reflect costs imposed by people who have smoked for decades—many of whom have now quit. A proper accounting of the costs vis-à-vis tax policy would measure the incremental discounted costs imposed by today’s smokers.
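A minimal sketch of what that accounting would look like (our formalization; the CDC does not present its estimates this way):

```latex
% Present value of the incremental costs imposed by today's smokers,
% where $\Delta C_t$ is the extra third-party cost in year $t$
% attributable to current (not historical) smoking and $r$ is the
% discount rate:
\[ PV = \sum_{t=0}^{T} \frac{\Delta C_t}{(1+r)^{t}} \]
% Substituting the gross annual \$300 billion figure for $PV$
% overstates the externality: much of that total reflects decades-old
% smoking and costs already internalized through insurance premiums.
```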

According to Levy et al. (2017), a strategy of replacing cigarette smoking with e-cigarettes would yield substantial life-year gains, even under pessimistic assumptions regarding cessation, initiation, and relative harm. Increased longevity does not simply extend the individual’s years of retirement and reliance on government transfers; it also increases work effort and productivity, along with tax payments on consumption.

Vapor products that cause less direct harm or have lower externalities (e.g., the absence of “second-hand smoke”) should be subject to a lower “sin” tax. A cost-benefit analysis of the desired excise tax rate on vapor products would include reduced health spending as an offset against the excise tax revenue forgone by taxing those products at a lower rate.

State and local governments in the U.S. collect more than $18 billion a year in tobacco taxes. While some jurisdictions earmark a portion of tobacco taxes for prevention and cessation efforts, in practice most tobacco taxes are treated by policymakers as general revenues to be spent in whatever way the legislative body determines.

In the long run, the goals of reducing or eliminating consumption of the taxed good and of generating revenues are in conflict. If the tax is successful in reducing consumption, it falls short in generating revenue. Similarly, if the tax succeeds in generating revenues, it falls short in reducing or eliminating consumption.
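The conflict can be stated in one line (a minimal formalization, our notation):

```latex
% Revenue from an excise tax at rate $t$, where consumption $Q(t)$
% falls as the tax rises:
\[ R(t) = t \cdot Q(t), \qquad Q'(t) < 0 \]
% Differentiating, $R'(t) = Q(t) + t\,Q'(t)$: revenue rises with the
% tax only while the consumption loss stays small relative to $Q(t)$.
% A tax that succeeds in driving $Q(t)$ toward zero drives $R(t)$
% toward zero as well, so one instrument cannot achieve both goals.
```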

Substitutability is another consideration. An increase in the tax on spirits will result in an increase in beer and wine purchases. A high toll on a road will divert traffic to untolled streets that may not be designed for increased traffic volumes. Evidence from the U.S. and Europe indicates that high or rising tobacco taxes in one jurisdiction will result in increased sales in bordering jurisdictions, as well as increased illegal cross-jurisdiction sales and smuggling.

As of March 2018, nine U.S. jurisdictions (eight states and the District of Columbia) have enacted taxes on e-cigarettes:

  • California: 65.08% of wholesale price
  • Delaware: $0.05 per milliliter
  • District of Columbia: 70% of wholesale price
  • Kansas: $0.05 per milliliter
  • Louisiana: $0.05 per milliliter
  • Minnesota: 95% of wholesale price
  • North Carolina: $0.05 per milliliter
  • Pennsylvania: 40% of wholesale price
  • West Virginia: $0.075 per milliliter

In addition, 22 countries outside of the U.S. have introduced taxation on e-cigarettes.

The effects of different types of taxation on usage, and thus on economic outcomes, vary. Research to date finds a wide range of own-price and cross-price elasticities for e-cigarettes. While most researchers conclude that the demand for e-cigarettes is more elastic than the demand for combustible cigarettes, some studies find inelastic demand and some find highly elastic demand. Economic theory would point to e-cigarettes as a substitute for combustible cigarettes. Some empirical research supports this hypothesis, while other studies conclude the two products are complements.

In addition to e-cigarettes, little cigars and smokeless tobacco are also potential substitutes for cigarettes. The results from Zheng et al. (2016) suggest increases in sales of little cigars and smokeless tobacco products would account for about 14 percent of the decline in cigarette sales associated with a hypothetical 10 percent increase in the price of cigarettes. On the other hand, another study using a seemingly identical data set (Zheng et al., 2017) suggests that sales of little cigars and smokeless tobacco would decrease in the face of an increase in cigarette prices.

The wide range of estimated elasticities calls into question the reliability of published estimates. Because this is a nascent area of research, the policy debate would benefit from additional research that involves larger samples with better statistical power, reflects the dynamic nature of this relatively new product category, and accounts for the wide variety of vapor products.

More importantly, demand and supply conditions for e-cigarettes, heated tobacco products, and other electronic nicotine delivery products have been changing rapidly over the past few years—and are expected to change rapidly for the foreseeable future. Thus, estimates of demand parameters, such as own-price and cross-price elasticities, are almost certain to vary over time as users gain knowledge and experience and as products and suppliers enter the market.

Because the market for e-cigarettes and other vapor products is small and developing, the tax-bearing capacity of these new product segments is untested and unknown. Moreover, current tax levels and prices could be misleading given the relatively sparse empirical data, in which case more data points and further evaluation are needed. One can argue, given the slow growth rates of these segments in many markets, that current prices of e-cigarettes and heat-not-burn products are relatively high compared to cigarettes, and that a new tax or an increase in an existing tax would slow the segments’ growth or even lead to a decline.

Separately, the challenges in assessing a tax on electronic nicotine delivery products indicate that the costs of collecting the tax, especially an excise tax, may be much higher than for similar taxes levied on combustible cigarettes. In addition, as discussed above, heavy taxation of this relatively new industry would likely stifle innovation in a way that is contrary to the goal of harm reduction.

Principles of harm reduction recognize that every proposal has uncertain outcomes as well as potential spillovers and unforeseen consequences. Nevertheless, the basic principle of harm reduction is a focus on safer rather than safe. Policymakers must make their decisions weighing the expected benefits and expected costs. With such high risks and costs associated with cigarette and other combustible use, taxes and regulations must be developed in an environment of uncertainty and with an eye toward a net reduction in harm, rather than an unattainable goal of zero harm.

Read the full report.

The Economist takes on “sin taxes” in a recent article, “‘Sin’ taxes—eg, on tobacco—are less efficient than they look.” The article has several lessons for policy makers eyeing taxes on e-cigarettes and other vapor products.

Historically, taxes had the key purpose of raising revenues. The “best” taxes would be on goods with few substitutes (i.e., inelastic demand) and on goods deemed to be luxuries. In The Wealth of Nations, Adam Smith notes:

Sugar, rum, and tobacco are commodities which are nowhere necessaries of life, which are become objects of almost universal consumption, and which are therefore extremely proper subjects of taxation.

The Economist notes that in 1764 a fiscal crisis driven by wars in North America led Britain’s parliament to begin enforcing tariffs on sugar and molasses imported from outside the empire. In the U.S., from 1868 until 1913, 90 percent of all federal revenue came from taxes on liquor, beer, wine, and tobacco.

Over time, the rationale for these taxes has shifted toward “sin taxes” designed to nudge consumers away from harmful or distasteful consumption. The Temperance movement in the U.S. argued for higher taxes to discourage alcohol consumption. Since the Surgeon General’s warning on the dangers of smoking, tobacco tax increases have been justified as a way to get smokers to quit. More recently, a perceived obesity epidemic has led several American cities, as well as Thailand, Britain, Ireland, and South Africa, to impose taxes on sugar-sweetened beverages to reduce sugar consumption.

Because demand curves slope down, “sin taxes” do change behavior by reducing the quantity demanded. However, for many products subject to such taxes, demand is not especially responsive. For example, as shown in the figure below, a one percent increase in the price of tobacco is associated with a one-half of one percent decrease in sales.

[Figure: The Economist, price elasticity of demand for tobacco and other “sin”-taxed goods.]
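In elasticity terms, this is a standard textbook calculation applied to the number the article reports:

```latex
% Own-price elasticity of demand for tobacco, per the figure:
\[ \varepsilon = \frac{\%\Delta Q}{\%\Delta P} \approx -0.5 \]
% Because demand is inelastic ($|\varepsilon| < 1$), a price increase
% raises total spending: a 10 percent tax-driven price increase cuts
% sales only about 5 percent, so expenditure and tax revenue rise even
% as consumption falls modestly. This is why tobacco taxes reliably
% raise revenue but are weak instruments for eliminating smoking.
```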


Substitutability is another consideration for tax policy. An increase in the tax on spirits will result in an increase in beer and wine purchases. A high toll on a road will divert traffic to untolled streets that may not be designed for increased traffic volumes. A spike in tobacco taxes in one state will result in a spike in sales in bordering states, as well as increased illegal interstate sales and smuggling. The Economist reports:

After Berkeley introduced its tax, sales of sugary drinks rose by 6.9% in neighbouring cities. Denmark, which instituted a tax on fat-laden foods in 2011, ran into similar problems. The government got rid of the tax a year later when it discovered that many shoppers were buying butter in neighbouring Germany and Sweden.

Advocates of “sin” taxes on tobacco, alcohol, and sugar argue that consumption of these products imposes negative externalities on the public, since governments have to spend more to take care of sick people. With approximately one-third of the U.S. population covered by some form of government-funded health insurance, such as Medicare or Medicaid, what were once private costs of healthcare have been transformed into a public cost.

According to the U.S. Centers for Disease Control and Prevention, smoking-related illness in the U.S. costs more than $300 billion each year, including (1) nearly $170 billion for direct medical care for adults and (2) more than $156 billion in lost productivity, of which $5.6 billion is due to secondhand smoke exposure.

On the other hand, The Economist points out:

Smoking, in contrast, probably saves taxpayers money. Lifelong smoking will bring forward a person’s death by about ten years, which means that smokers tend to die just as they would start drawing from state pensions. In a study published in 2002 Kip Viscusi, an economist at Vanderbilt University who has served as an expert witness on behalf of tobacco companies, estimated that even if tobacco were untaxed, Americans could still expect to save the government an average of 32 cents for every pack of cigarettes they smoke.

The CDC’s cost estimates raise important questions regarding who bears the burden of smoking-related illness. For example, much of the direct cost is borne by private insurance, which charges steeper premiums for customers who smoke. In addition, the CDC estimates reflect costs imposed by people who have smoked for decades—many of whom have now quit. A proper accounting of the costs vis-à-vis tax policy should evaluate the discounted costs imposed by today’s smokers.

State and local governments in the U.S. collect more than $18 billion a year in tobacco taxes. While some jurisdictions earmark a portion of tobacco taxes for prevention and cessation efforts, in practice most tobacco taxes are treated by policymakers as general revenues to be spent in whatever way the legislative body determines. Thus, in practice, there is no clear nexus between taxes levied on tobacco and government’s use of the tax revenues on smoking related costs.

Most of the harm from smoking is caused by the inhalation of toxicants released through the combustion of tobacco. Public Health England and the American Cancer Society have concluded that non-combustible tobacco products, such as e-cigarettes, “heat-not-burn” products, and smokeless tobacco, are considerably less harmful than combustible products.

Many experts believe that the best option for smokers who are unable or unwilling to quit smoking is to switch to a less harmful alternative activity that has similar attributes, such as using non-combustible nicotine delivery products. Policies that encourage smokers to switch from more harmful combustible tobacco products to less harmful non-combustible products would be considered a form of “harm reduction.”

Nine U.S. states now have taxes on vapor products. In addition, several local jurisdictions have enacted taxes. Their methods and levels of taxation vary widely. Policy makers considering a tax on vapor products should account for the following factors.

  • The current market for e-cigarettes and heat-not-burn products is in the range of 0-10 percent of the cigarette market. Given the relatively small size of the e-cigarette and heated tobacco product market, it is unlikely that any level of taxation of these products would generate significant tax revenues for the taxing jurisdiction. Moreover, much of the current research likely reflects early adopters and higher-income consumer groups. As such, the current empirical data based on total market size and price/tax levels are likely to be far from indicative of the “actual” market for these products.
  • The demand for e-cigarettes is much more responsive to a change in price than the demand for combustible cigarettes. My review of the published research to date finds the median estimated own-price elasticity is -1.096, meaning something close to a 1-to-1 relationship: a tax resulting in a one percent increase in e-cigarette prices would be associated with roughly a one percent decline in e-cigarette sales (see the numerical sketch after this list). Many of those lost sales would be shifted to purchases of combustible cigarettes.
  • Research on the price responsiveness of vapor products is relatively new and sparse. There are fewer than a dozen published articles, and the first was published in 2014. As a result, the literature reports a wide range of estimated elasticities, which calls into question the reliability of published estimates, as shown in the figure below. Because this is a relatively unformed area of research, the policy debate would benefit from additional research that involves larger samples with better statistical power, reflects the dynamic nature of this new product category, and accounts for the wide variety of vapor products.
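Here is the numerical sketch referenced in the second bullet: a constant-elasticity illustration with a hypothetical baseline quantity. The -1.096 elasticity is the median estimate from the paper; the -0.5 value for combustible cigarettes is a stylized figure consistent with the discussion above, not an estimate from the paper.

```python
# Stylized constant-elasticity demand: Q1 = Q0 * (P1 / P0) ** elasticity.
# Baseline quantity is hypothetical; elasticities follow the text.

def quantity_after_price_change(q0: float, pct_price_increase: float,
                                elasticity: float) -> float:
    """Quantity demanded after a proportional price increase."""
    return q0 * (1 + pct_price_increase) ** elasticity

Q0 = 100.0             # hypothetical baseline units sold
price_increase = 0.10  # a tax that raises retail prices by 10%

for product, eps in [("e-cigarettes", -1.096), ("cigarettes", -0.5)]:
    q1 = quantity_after_price_change(Q0, price_increase, eps)
    print(f"{product:12s} (elasticity {eps:+.3f}): "
          f"sales {Q0:.0f} -> {q1:.1f} ({q1 / Q0 - 1:+.1%})")

# e-cigarettes (elasticity -1.096): sales 100 -> 90.1 (-9.9%)
# cigarettes   (elasticity -0.500): sales 100 -> 95.3 (-4.7%)
# A 10% tax-driven price increase hits e-cigarette sales about twice
# as hard, and some of the forgone vaping shifts back to smoking.
```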

[Figure: range of published own-price elasticity estimates for vapor products]
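To see what the median elasticity estimate above implies, here is a back-of-the-envelope sketch. The -1.096 figure is the median estimate described in the list; the tax-induced price increase and the substitution share are hypothetical numbers chosen purely for illustration.

```python
# Back-of-the-envelope use of the median own-price elasticity reported
# above (-1.096). The price increase and substitution share below are
# hypothetical, chosen purely for illustration.

def pct_quantity_change(own_price_elasticity, pct_price_change):
    """Approximate percent change in quantity demanded for a given
    percent change in price."""
    return own_price_elasticity * pct_price_change

elasticity = -1.096      # median estimate from the published literature
price_increase = 10.0    # hypothetical: a tax raising vapor prices 10 percent

sales_change = pct_quantity_change(elasticity, price_increase)
print(f"Vapor sales change: {sales_change:.1f}%")   # about -11%

# If, hypothetically, a third of the forgone vapor purchases shift to
# combustible cigarettes, the tax cuts against harm reduction:
substitution_share = 1 / 3   # assumed, not an estimate from the literature
shifted = -sales_change * substitution_share
print(f"Forgone vapor sales shifting to cigarettes: {shifted:.1f}%")
```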

With respect to taxation and pricing, policymakers would benefit from reliable information regarding the size of the vapor product market and the degree to which vapor products are substitutes for combustible tobacco products. It may turn out that taxes on vapor products are, as The Economist notes, less efficient than they look.

Ours is not an age of nuance.  It’s an age of tribalism, of teams—“Yer either fer us or agin’ us!”  Perhaps I should have been less surprised, then, when I read the unfavorable review of my book How to Regulate in, of all places, the Federalist Society Review.

I had expected some positive feedback from reviewer J. Kennerly Davis, a contributor to the Federalist Society’s Regulatory Transparency Project.  The “About” section of the Project’s website states:

In the ultra-complex and interconnected digital age in which we live, government must issue and enforce regulations to protect public health and safety.  However, despite the best of intentions, government regulation can fail, stifle innovation, foreclose opportunity, and harm the most vulnerable among us.  It is for precisely these reasons that we must be diligent in reviewing how our policies either succeed or fail us, and think about how we might improve them.

I might not have expressed these sentiments in such pro-regulation terms.  For example, I don’t think government should regulate, even “to protect public health and safety,” absent (1) a market failure and (2) confidence that systematic governmental failures won’t cause the cure to be worse than the disease.  I agree, though, that regulation is sometimes appropriate, that government interventions often fail (in systematic ways), and that regulatory policies should regularly be reviewed with an eye toward reducing the combined costs of market and government failures.

Those are, in fact, the central themes of How to Regulate.  The book sets forth an overarching goal for regulation (minimize the sum of error and decision costs) and then catalogues, for six oft-cited bases for regulating, what regulatory tools are available to policymakers and how each may misfire.  For every possible intervention, the book considers the potential for failure from two sources—the knowledge problem identified by F.A. Hayek and public choice concerns (rent-seeking, regulatory capture, etc.).  It ends up arguing:

  • for property rights-based approaches to environmental protection (versus the command-and-control status quo);
  • for increased reliance on the private sector to produce public goods;
  • that recognizing property rights, rather than allocating usage, is the best way to address the tragedy of the commons;
  • that market-based mechanisms, not shareholder suits and mandatory structural rules like those imposed by Sarbanes-Oxley and Dodd-Frank, are the best way to constrain agency costs in the corporate context;
  • that insider trading restrictions should be left to corporations themselves;
  • that antitrust law should continue to evolve in the consumer welfare-focused direction Robert Bork recommended;
  • against the FCC’s recently abrogated net neutrality rules;
  • that occupational licensure is primarily about rent-seeking and should be avoided;
  • that incentives for voluntary disclosure will usually obviate the need for mandatory disclosure to correct information asymmetry;
  • that the claims of behavioral economics do not justify paternalistic policies to protect people from themselves; and
  • that “libertarian-paternalism” is largely a ruse that tends to morph into hard paternalism.

Given the congruence of my book’s prescriptions with the purported aims of the Regulatory Transparency Project—not to mention the laundry list of specific market-oriented policies the book advocates—I had expected a generally positive review from Mr. Davis (whom I sincerely thank for reading and reviewing the book; book reviews are a ton of work).

I didn’t get what I’d expected.  Instead, Mr. Davis denounced my book for perpetuating “progressive assumptions about state and society” (“wrongheaded” assumptions, the editor’s introduction notes).  He responded to my proposed methodology with a “meh,” noting that it “is not clearly better than the status quo.”  His one compliment, which I’ll gladly accept, was that my discussion of economic theory was “generally accessible.”

Following are a few thoughts on Mr. Davis’s critiques.

Are My Assumptions Progressive?

According to Mr. Davis, my book endorses three progressive concepts:

(i) the idea that market based arrangements among private parties routinely misallocate resources, (ii) the idea that government policymakers are capable of formulating executive directives that can correct private ordering market failures and optimize the allocation of resources, and (iii) the idea that the welfare of society is actually something that exists separate and apart from the individual welfare of each of the members of society.

I agree with Mr. Davis that these are progressive ideas.  If my book embraced them, it might be fair to label it “progressive.”  But it doesn’t.  Not one of them.

  1. Market Failure

Nothing in my book suggests that “market based arrangements among private parties routinely misallocate resources.”  I do say that “markets sometimes fail to work well,” and I explain how, in narrow sets of circumstances, market failures may emerge.  Understanding exactly what may happen in those narrow sets of circumstances helps to identify the least restrictive option for addressing problems and would thus seem a prerequisite to effective policymaking for a conservative or libertarian.  My mere invocation of the term “market failure,” however, was enough for Mr. Davis to kick me off the team.

Mr. Davis ignored altogether the many points where I explain how private ordering fixes situations that could lead to poor market performance.  At the end of the information asymmetry chapter, for example, I write,

This chapter has described information asymmetry as a problem, and indeed it is one.  But it can also present an opportunity for profit.  Entrepreneurs have long sought to make money—and create social value—by developing ways to correct informational imbalances and thereby facilitate transactions that wouldn’t otherwise occur.

I then describe the advent of companies like Carfax, Airbnb, and Uber, all of which offer privately ordered solutions to instances of information asymmetry that might otherwise create lemons problems.  I conclude:

These businesses thrive precisely because of information asymmetry.  By offering privately ordered solutions to the problem, they allow previously under-utilized assets to generate heretofore unrealized value.  And they enrich the people who created and financed them.  It’s a marvelous thing.

That theme—that potential market failures invite privately ordered solutions that often obviate the need for any governmental fix—permeates the book.  In the public goods chapter, I spend a great deal of time explaining how privately ordered devices like assurance contracts facilitate the production of amenities that are non-rivalrous and non-excludable.  In discussing the tragedy of the commons, I highlight Elinor Ostrom’s work showing how “groups of individuals have displayed a remarkable ability to manage commons goods effectively without either privatizing them or relying on government intervention.”  In the chapter on externalities, I spend a full seven pages explaining why Coasean bargains are more likely than most people think to prevent inefficiencies from negative externalities.  In the chapter on agency costs, I explain why privately ordered solutions like the market for corporate control would, if not precluded by some ill-conceived regulations, constrain agency costs better than structural rules from the government.

Disregarding all this, Mr. Davis chides me for assuming that “markets routinely fail.”  And, for good measure, he explains that government interventions are often a bigger source of failure, a point I repeatedly acknowledge, as it is a—perhaps the—central theme of the book.

  2. Trust in Experts

In what may be the strangest (and certainly the most misleading) part of his review, Mr. Davis criticizes me for placing too much confidence in experts by giving short shrift to the Hayekian knowledge problem and the insights of public choice.

          a.  The Knowledge Problem

According to Mr. Davis, the approach I advocate “is centered around fully functioning experts.”  He continues:

This progressive trust in experts is misplaced.  It is simply false to suppose that government policymakers are capable of formulating executive directives that effectively improve upon private arrangements and optimize the allocation of resources.  Friedrich Hayek and other classical liberals have persuasively argued, and everyday experience has repeatedly confirmed, that the information needed to allocate resources efficiently is voluminous and complex and widely dispersed.  So much so that government experts acting through top down directives can never hope to match the efficiency of resource allocation made through countless voluntary market transactions among private parties who actually possess the information needed to allocate the resources most efficiently.

Amen and hallelujah!  I couldn’t agree more!  Indeed, I said something similar when I came to the first regulatory tool my book examines (and criticizes), command-and-control pollution rules.  I wrote:

The difficulty here is an instance of a problem that afflicts regulation generally.  At the end of the day, regulating involves centralized economic planning:  A regulating “planner” mandates that productive resources be allocated away from some uses and toward others.  That requires the planner to know the relative value of different resource uses.  But such information, in the words of Nobel laureate F.A. Hayek, “is not given to anyone in its totality.”  The personal preferences of thousands or millions of individuals—preferences only they know—determine whether there should be more widgets and fewer gidgets, or vice-versa.  As Hayek observed, voluntary trading among resource owners in a free market generates prices that signal how resources should be allocated (i.e., toward the uses for which resource owners may command the highest prices).  But centralized economic planners—including regulators—don’t allocate resources on the basis of relative prices.  Regulators, in fact, generally assume that prices are wrong due to the market failure the regulators are seeking to address.  Thus, the so-called knowledge problem that afflicts regulation generally is particularly acute for command-and-control approaches that require regulators to make refined judgments on the basis of information about relative costs and benefits.

That was just the first of many times I invoked the knowledge problem to argue against top-down directives and in favor of market-oriented policies that would enable individuals to harness local knowledge to which regulators would not be privy.  The index to the book includes a “knowledge problem” entry with no fewer than nine sub-entries (e.g., “with licensure regimes,” “with Pigouvian taxes,” “with mandatory disclosure regimes”).  There are undoubtedly more mentions of the knowledge problem than those listed in the index, for the book assesses the degree to which the knowledge problem creates difficulties for every regulatory approach it considers.

Mr. Davis does mention one time where I “acknowledge[] the work of Hayek” and “recognize[] that context specific information is vitally important,” but he says I miss the point:

Having conceded these critical points [about the importance of context-specific information], Professor Lambert fails to follow them to the logical conclusion that private ordering arrangements are best for regulating resources efficiently.  Instead, he stops one step short, suggesting that policymakers defer to the regulator most familiar with the regulated party when they need context-specific information for their analysis.  Professor Lambert is mistaken.  The best information for resource allocation is not to be found in the regional office of the regulator.  It resides with the persons who have long been controlled and directed by the progressive regulatory system.  These are the ones to whom policymakers should defer.

I was initially puzzled by Mr. Davis’s description of how my approach would address the knowledge problem.  It’s inconsistent with the way I described the problem (the “regional office of the regulator” wouldn’t know people’s personal preferences, etc.), and I couldn’t remember ever suggesting that regulatory devolution—delegating decisions down toward local regulators—was the solution to the knowledge problem.

When I checked the citation in the sentences just quoted, I realized that Mr. Davis had misunderstood the point I was making in the passage he cited (my own fault, no doubt, not his).  The cited passage was at the very end of the book, where I was summarizing the book’s contributions.  I claimed to have set forth a plan for selecting regulatory approaches that would minimize the sum of error and decision costs.  I wanted to acknowledge, though, the irony of promulgating a generally applicable plan for regulating in a book that, time and again, decries top-down imposition of one-size-fits-all rules.  Thus, I wrote:

A central theme of this book is that Hayek’s knowledge problem—the fact that no central planner can possess and process all the information needed to allocate resources so as to unlock their greatest possible value—applies to regulation, which is ultimately a set of centralized decisions about resource allocation.  The very knowledge problem besetting regulators’ decisions about what others should do similarly afflicts pointy-headed academics’ efforts to set forth ex ante rules about what regulators should do.  Context-specific information to which only the “regulator on the spot” is privy may call for occasional departures from the regulatory plan proposed here.

As should be obvious, my point was not that the knowledge problem can generally be fixed by regulatory devolution.  Rather, I was acknowledging that the general regulatory approach I had set forth—i.e., the rules policymakers should follow in selecting among regulatory approaches—may occasionally misfire and should thus be implemented flexibly.

           b.  Public Choice Concerns

A second problem with my purported trust in experts, Mr. Davis explains, stems from the insights of public choice:

Actual policymakers simply don’t live up to [Woodrow] Wilson’s ideal of the disinterested, objective, apolitical, expert technocrat.  To the contrary, a vast amount of research related to public choice theory has convincingly demonstrated that decisions of regulatory agencies are frequently shaped by politics, institutional self-interest and the influence of the entities the agencies regulate.

Again, huzzah!  Those words could have been lifted straight out of the three full pages of discussion I devoted to public choice concerns with the very first regulatory intervention the book considered.  A snippet from that discussion:

While one might initially expect regulators pursuing the public interest to resist efforts to manipulate regulation for private gain, that assumes that government officials are not themselves rational, self-interest maximizers.  As scholars associated with the “public choice” economic tradition have demonstrated, government officials do not shed their self-interested nature when they step into the public square.  They are often receptive to lobbying in favor of questionable rules, especially since they benefit from regulatory expansions, which tend to enhance their job status and often their incomes.  They also tend to become “captured” by powerful regulatees who may shower them with personal benefits and potentially employ them after their stints in government have ended.

That’s just a slice.  Elsewhere in those three pages, I explain (1) how the dynamic of concentrated benefits and diffuse costs allows inefficient protectionist policies to persist, (2) how firms that benefit from protectionist regulation are often assisted by “pro-social” groups that will make a public interest case for the rules (Bruce Yandle’s Bootleggers and Baptists syndrome), and (3) the “[t]wo types of losses [that] result from the sort of interest-group manipulation public choice predicts.”  And that’s just the book’s initial foray into public choice.  The entry for “public choice concerns” in the book’s index includes eight sub-entries.  As with the knowledge problem, I addressed the public choice issues that could arise from every major regulatory approach the book considered.

For Mr. Davis, though, that was not enough to keep me out of the camp of Wilsonian progressives.  He explains:

Professor Lambert devotes a good deal of attention to the problem of “agency capture” by regulated entities.  However, he fails to acknowledge that a symbiotic relationship between regulators and regulated is not a bug in the regulatory system, but an inherent feature of a system defined by extensive and continuing government involvement in the allocation of resources.

To be honest, I’m not sure what that last sentence means.  Apparently, I didn’t recite some talismanic incantation that would indicate that I really do believe public choice concerns are a big problem for regulation.  I did say this in one of the book’s many discussions of public choice:

A regulator that has both regular contact with its regulatees and significant discretionary authority over them is particularly susceptible to capture.  The regulator’s discretionary authority provides regulatees with a strong motive to win over the regulator, which has the power to hobble the regulatee’s potential rivals and protect its revenue stream.  The regular contact between the regulator and the regulatee provides the regulatee with better access to those in power than that available to parties with opposing interests.  Moreover, the regulatee’s preferred course of action is likely (1) to create concentrated benefits (to the regulatee) and diffuse costs (to consumers generally), and (2) to involve an expansion of the regulator’s authority.  The upshot is that those who bear the cost of the preferred policy are less likely to organize against it, and regulators, who benefit from turf expansion, are more likely to prefer it.  Rate-of-return regulation thus involves the precise combination that leads to regulatory expansion at consumer expense: broad and discretionary government power, close contact between regulators and regulatees, decisions that generally involve concentrated benefits and diffuse costs, and regular opportunities to expand regulators’ power and prestige.

In light of this combination of features, it should come as no surprise that the history of rate-of-return regulation is littered with instances of agency capture and regulatory expansion.

Even that was not enough to convince Mr. Davis that I reject the Wilsonian assumption of “disinterested, objective, apolitical, expert technocrat[s].”  I don’t know what more I could have said.

  3. Social Welfare

Mr. Davis is right when he says, “Professor Lambert’s ultimate goal for his book is to provide policymakers with a resource that will enable them to make regulatory decisions that produce greater social welfare.”  But nowhere in my book do I suggest, as he says I do, “that the welfare of society is actually something that exists separate and apart from the individual welfare of each of the members of society.”  What I mean by “social welfare” is the aggregate welfare of all the individuals in a society.  And I’m careful to point out that only they know what makes them better off.  (At one point, for example, I write that “[g]overnment planners have no way of knowing how much pleasure regulatees derive from banned activities…or how much displeasure they experience when they must comply with an affirmative command…. [W]ith many paternalistic policies and proposals…government planners are really just guessing about welfare effects.”)

I agree with Mr. Davis that “[t]here is no single generally accepted methodology that anyone can use to determine objectively how and to what extent the welfare of society will be affected by a particular regulatory directive.”  For that reason, nowhere in the book do I suggest any sort of “metes and bounds” measurement of social welfare.  (I certainly do not endorse the use of GDP, which Mr. Davis rightly criticizes; that term appears nowhere in the book.)

Rather than prescribing any sort of precise measurement of social welfare, my book operates at the level of general principles:  We have reasons to believe that inefficiencies may arise when conditions are thus; there is a range of potential government responses to this situation—from doing nothing, to facilitating a privately ordered solution, to mandating various actions; based on our experience with these different interventions, the likely downsides of each (stemming from, for example, the knowledge problem and public choice concerns) are so-and-so; all things considered, the aggregate welfare of the individuals within this group will probably be greatest with policy x.
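Purely to illustrate the structure of that reasoning (the book itself proposes no numerical scoring of this kind), a toy sketch might compare candidate responses by the sum of assumed error costs and decision costs; every figure below is invented.

```python
# A toy rendering of the book's organizing principle: pick the
# regulatory response that minimizes the sum of error costs and
# decision costs. All figures are invented for illustration.

options = {
    # option: (assumed error costs, assumed decision costs)
    "do nothing":                  (100, 0),
    "facilitate private ordering": (40, 10),
    "mandate specific conduct":    (30, 90),
}

def total_cost(option):
    error_costs, decision_costs = options[option]
    return error_costs + decision_costs

best = min(options, key=total_cost)
print(best)  # "facilitate private ordering" under these assumed numbers
```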

It is true that the thrust of the book is consequentialist, not deontological.  But it’s a book about policy, not ethics.  And its version of consequentialism is rule, not act, utilitarianism.  Is a consequentialist approach to policymaking enough to render one a progressive?  Should we excise John Stuart Mill’s On Liberty from the classical liberal canon?  I surely hope not.

Is My Proposed Approach an Improvement?

Mr. Davis’s second major criticism of my book—that what it proposes is “just the status quo”—has more bite.  By that, I mean two things.  First, it’s a more painful criticism to receive.  It’s easier for an author to hear “you’re saying something wrong” than “you’re not saying anything new.”

Second, there may be more merit to this criticism.  As Mr. Davis observes, I noted in the book’s introduction that “[a]t times during the drafting, I … wondered whether th[e] book was ‘original’ enough.”  I ultimately concluded that it was because it “br[ought] together insights of legal theorists and economists of various stripes…and systematize[d] their ideas into a unified, practical approach to regulating.”  Mr. Davis thinks I’ve overstated the book’s value, and he may be right.

The current regulatory landscape would suggest, though, that my book’s approach to selecting among potential regulatory policies isn’t “just the status quo.”  The approach I recommend would generate the specific policies catalogued at the outset of this response (in the bullet points).  The fact that those policies haven’t been implemented under the existing regulatory approach suggests that what I’m recommending must be something different than the status quo.

Mr. Davis observes—and I acknowledge—that my recommended approach resembles the review required of major executive agency regulations under Executive Order 12866, President Clinton’s revised version of President Reagan’s Executive Order 12291.  But that order is quite limited in its scope.  It doesn’t cover “minor” executive agency rules (those with expected costs of less than $100 million) or rules from independent agencies or from Congress or from courts or at the state or local level.  Moreover, I understand from talking to a former administrator of the Office of Information and Regulatory Affairs, which is charged with implementing the order, that it has actually generated little serious consideration of less restrictive alternatives, something my approach emphasizes.

What my book proposes is not some sort of governmental procedure; indeed, I emphasize in the conclusion that the book “has not addressed … how existing regulatory institutions should be reformed to encourage the sort of analysis th[e] book recommends.”  Instead, I propose a way to think through specific areas of regulation, one that is informed by a great deal of learning about both market and government failures.  The best audience for the book is probably law students who will someday find themselves influencing public policy as lawyers, legislators, regulators, or judges.  I am thus heartened that the book is being used as a text at several law schools.  My guess is that few law students receive significant exposure to Hayek, public choice, etc.

So, who knows?  Perhaps the book will make a difference at the margin.  Or perhaps it will amount to sound and fury, signifying nothing.  But I don’t think a classical liberal could fairly say that the analysis it counsels “is not clearly better than the status quo.”

A Truly Better Approach to Regulating

Mr. Davis ends his review with a stirring call to revamp the administrative state to bring it “in complete and consistent compliance with the fundamental law of our republic embodied in the Constitution, with its provisions interpreted to faithfully conform to their original public meaning.”  Among other things, he calls for restoring the separation of powers, which has been erased in agencies that combine legislative, executive, and judicial functions, and for eliminating unchecked government power, which results when the legislature delegates broad rulemaking and adjudicatory authority to politically unaccountable bureaucrats.

Once again, I concur.  There are major problems—constitutional and otherwise—with the current state of administrative law and procedure.  I’d be happy to tear down the existing administrative state and begin again on a constitutionally constrained tabula rasa.

But that’s not what my book was about.  I deliberately set out to write a book about the substance of regulation, not the process by which rules should be imposed.  I took that tack for two reasons.  First, there are numerous articles and books, by scholars far more expert than I, on the structure of the administrative state.  I could add little value on administrative process.

Second, the less-addressed substantive question—what, as a substantive matter, should a policy addressing x do?—would exist even if Mr. Davis’s constitutionally constrained regulatory process were implemented.  Suppose that we got rid of independent agencies, curtailed delegations of rulemaking authority to the executive branch, and returned to a system in which Congress wrote all rules, the executive branch enforced them, and the courts resolved any disputes.  Someone would still have to write the rule, and that someone (or group of people) should have some sense of the pros and cons of one approach over another.  That is what my book seeks to provide.

A hard core Hayekian—one who had immersed himself in Law, Legislation, and Liberty—might respond that no one should design regulation (purposive rules that Hayek would call thesis) and that efficient, “purpose-independent” laws (what Hayek called nomos) will just emerge as disputes arise.  But that is not Mr. Davis’s view.  He writes:

A system of governance or regulation based on the rule of law attains its policy objectives by proscribing actions that are inconsistent with those objectives.  For example, this type of regulation would prohibit a regulated party from discharging a pollutant in any amount greater than the limiting amount specified in the regulation.  Under this proscriptive approach to regulation, any and all actions not specifically prohibited are permitted.

Mr. Davis has thus contemplated a purposive rule, crafted by someone.  That someone should know the various policy options and the upsides and downsides of each.  How to Regulate could help.

Conclusion

I’m not sure why Mr. Davis viewed my book as no more than dressed-up progressivism.  Maybe he was triggered by the book’s cover art, which he says “is faithful to the progressive tradition,” resembling “the walls of public buildings from San Francisco to Stalingrad.”  Maybe it was a case of Sunstein Derangement Syndrome.  (Progressive legal scholar Cass Sunstein had nice things to say about the book, despite its criticisms of a number of his ideas.)  Or perhaps it was that I used the term “market failure.”  Many conservatives and libertarians fear, with good reason, that conceding the existence of market failures invites all sorts of government meddling.

At the end of the day, though, I believe we classical liberals should stop pretending that market outcomes are always perfect, that pure private ordering is always and everywhere the best policy.  We should certainly sing markets’ praises; they usually work so well that people don’t even notice them, and we should point that out.  We should continually remind people that government interventions also fail—and in systematic ways (e.g., the knowledge problem and public choice concerns).  We should insist that a market failure is never a sufficient condition for a governmental fix; one must always consider whether the cure will be worse than the disease.  In short, we should take and promote the view that government should operate “under a presumption of error.”

That view, economist Aaron Director famously observed, is the essence of laissez faire.  It’s implicit in the purpose statement of the Federalist Society’s Regulatory Transparency Project.  And it’s the central point of How to Regulate.

So let’s go easy on the friendly fire.

The Eleventh Circuit’s LabMD opinion came out last week and has been something of a Rorschach test for those of us who study consumer protection law.

Neil Chilson found the result to be a disturbing sign of slippage in Congress’s command that the FTC refrain from basing enforcement on “public policy.” Berin Szóka, on the other hand, saw the ruling as a long-awaited rebuke against the FTC’s expansive notion of its “unfairness” authority. Daniel Solove and Woodrow Hartzog, for their part, described the decision as “quite narrow and… far from crippling,” in part because “[t]he opinion says very little about the FTC’s general power to enforce Section 5 unfairness.” Even among the ICLE crew, our understandings of the opinion reflect our priors, from it being best understood as expressing due process concerns about injury-based enforcement of Section 5, on the one hand, to being about the meaning of Section 5(n)’s causation requirement, on the other.

You can expect to hear lots more about these and other LabMD-related issues from us soon, but for now we want to write about the only thing more exciting than dueling histories of the FTC’s 1980 Unfairness Statement: administrative law.

While most of those watching the LabMD case come from some nexus of FTC watchers, data security specialists, and privacy lawyers, the reality is that the case itself is mostly about administrative law (the law that governs how federal agencies are given and use their power). And the court’s opinion is best understood from a primarily administrative law perspective.

From that perspective, the case should lead to some significant introspection at the Commission. While the FTC may find ways to comply with the letter of the opinion without substantially altering its approach to data security cases, it will likely face difficulty defending that approach before the courts. True compliance with this decision will require the FTC to define what makes certain data security practices unfair in a more coherent and far more readily ascertainable fashion.

The devil is in the (well-specified) details

The actual holding in the case comes in Part III of the 11th Circuit’s opinion, where the court finds for LabMD on the ground that, owing to a fatal lack of specificity in the FTC’s proposed order, “the Commission’s cease and desist order is itself unenforceable.”  This is the punchline of the opinion, to which we will return. But it is worth spending some time on the path that the court takes to get there.

It should be stressed at the outset that Part II of the opinion — in which the Court walks through the conceptual and statutory framework that supports an “unfairness” claim — is surprisingly unimportant to the court’s ultimate holding. This was the meat of the case for FTC watchers and privacy and data security lawyers, and it is a fascinating exposition. Doubtless it will be the focus of most analysis of the opinion.

But, for purposes of the court’s disposition of the case, it’s of (perhaps-frustratingly) scant importance. In short, the court assumes, arguendo, that the FTC has sufficient basis to make out an unfairness claim against LabMD before moving on to Part III of the opinion analyzing the FTC’s order given that assumption.

It’s not clear why the court took this approach — and it is dangerous to assume any particular explanation (although it is and will continue to be the subject of much debate). There are several reasonable explanations for the approach, ranging from the court thinking it obvious that the FTC’s unfairness analysis was correct, to it side-stepping the thorny question of how to define injury under Section 5, to the court avoiding writing a decision that could call into question the fundamental constitutionality of a significant portion of the FTC’s legal portfolio. Regardless — and regardless of its relative lack of importance to the ultimate holding — the analysis offered in Part II bears, and will receive, significant attention.

The FTC has two basic forms of consumer protection authority: It can take action against 1) unfair acts or practices and 2) deceptive acts or practices. The FTC’s case against LabMD was framed in terms of unfairness. Unsurprisingly, “unfairness” is a broad, ambiguous concept — one that can easily grow into an amorphous blob of ill-defined enforcement authority.

As discussed by the court (as well as by us, ad nauseam), in the 1970s the FTC made very aggressive use of its unfairness authority to regulate the advertising industry, effectively usurping Congress’s authority to legislate in that area. This over-aggressive enforcement didn’t sit well with Congress, of course, and led it to shut down the FTC for a period of time until the agency adopted a more constrained understanding of the meaning of its unfairness authority. This understanding was communicated to Congress in the FTC’s 1980 Unfairness Statement. That statement was subsequently codified by Congress, in slightly modified form, as Section 5(n) of the FTC Act.

Section 5(n) states that

The Commission shall have no authority under this section or section 57a of this title to declare unlawful an act or practice on the grounds that such act or practice is unfair unless the act or practice causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition. In determining whether an act or practice is unfair, the Commission may consider established public policies as evidence to be considered with all other evidence. Such public policy considerations may not serve as a primary basis for such determination.

The meaning of Section 5(n) has been the subject of intense debate for years (for example, here, here and here). In particular, it is unclear whether Section 5(n) defines a test for what constitutes unfair conduct (that which “causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition”) or whether it instead imposes a necessary, but not necessarily sufficient, condition on the extent of the FTC’s authority to bring cases. The meaning of “cause” under 5(n) is also unclear because, unlike causation in traditional legal contexts, Section 5(n) reaches conduct that is merely “likely to cause” harm.

Section 5(n) concludes with an important, but also somewhat inscrutable, discussion of the role of “public policy” in the Commission’s unfairness enforcement, indicating that the Commission is free to consider “established public policies” as evidence of unfair conduct, but may not use such considerations “as a primary basis” for its unfairness enforcement.

Just say no to public policy

Section 5 empowers and directs the FTC to police unfair business practices, and there is little reason to think that bad data security practices cannot sometimes fall under its purview. But the FTC’s efforts with respect to data security (and, for that matter, privacy) over the past nearly two decades have focused extensively on developing what it considers to be a comprehensive jurisprudence to address data security concerns. This creates a distinct impression that the FTC has been using its unfairness authority to develop a new area of public policy — to legislate data security standards, in other words — as opposed to policing data security practices that are unfair under established principles of unfairness.

This is a subtle distinction — and there is frankly little guidance for understanding when the agency is acting on the basis of public policy versus when it is proscribing conduct that falls within the meaning of unfairness.

But it is an important distinction. If it is the case — or, more precisely, if the courts think that it is the case — that the FTC is acting on the basis of public policy, then the FTC’s data security efforts are clearly problematic under Section 5(n)’s prohibition on the use of public policy as the primary basis for unfairness actions.

And this is where the Commission gets itself into trouble. The Commission’s efforts to develop its data security enforcement program look an awful lot like something driven by public policy, and not so much like the mere enforcement of existing policy as captured by, in the LabMD court’s words (echoing the FTC’s pre-Section 5(n) unfairness factors), “well-established legal standard[s], whether grounded in statute, the common law, or the Constitution.”

The distinction between effecting public policy and enforcing legal norms is… not very clear. Nonetheless, exploring and respecting that distinction is an important task for courts and agencies.

Unfortunately, this case does little to explain how to make that distinction. The opinion is more than a bit muddled and difficult to interpret clearly. Nonetheless, reading the court’s dicta in Part II is instructive. It’s clearly the case that some bad security practices, in some contexts, can be unfair practices. So the proper task for the FTC is to discover how to police “unfairness” within data security cases rather than setting out to become a first-order data security enforcement agency.

How does public policy become well-established law?

Part II of the Eleventh Circuit’s opinion — even if dicta — is important for future interpretations of Section 5 cases. The court goes to great lengths to demonstrate, based on the FTC’s enforcement history and related Congressional rebukes, that the Commission may not rely upon vague “public policy” standards for bringing “unfairness” actions.

But this raises a critical question about the nature of the FTC’s unfairness authority. The Commission was created largely to police conduct that could not readily be proscribed by statute or simple rules. In some cases this means conduct that is hard to label or describe in text with any degree of precision — “I know it when I see it” kinds of acts and practices. In other cases, it may refer to novel or otherwise unpredictable conduct that could not be foreseen by legislators or regulators. In either case, the very purpose of the FTC is to be able to protect consumers from conduct that is not necessarily proscribed elsewhere.

This means that the Commission must have some ability to take action against “unfair” conduct that has not previously been enshrined as “unfair” in “well-established legal standard[s], whether grounded in statute, the common law, or the Constitution.” But that ability is not unbounded, of course.

The court explained that the Commission could expound upon what acts fall within the meaning of “unfair” in one of two ways: It could use its rulemaking authority to issue Congressionally reviewable rules, or it could proceed on a case-by-case basis.

In either case, the court’s discussion of how the Commission is to determine what is “unfair” within the constraints of Section 5(n) is frustratingly vague. The earlier parts of the opinion tell us that unfairness is to be adjudged based upon “well-established legal standards,” but here the court tells us that the scope of unfairness can be altered — that is, those well-established legal standards can be changed — through adjudication. It is difficult to square the two. Regardless, it is the guidance the court has given us.

This is Admin Law 101

And yet perhaps there is some resolution to this conundrum in administrative law. For administrative law scholars, the 11th Circuit’s discussion of the permissibility of agencies developing binding legal norms using either rulemaking or adjudication procedures is straight out of Chenery II.

Chenery II is a bedrock case of American administrative law, standing broadly for the proposition (as echoed by the 11th Circuit) that agencies can generally develop legal rules through either rulemaking or adjudication, that there may be good reasons to use either in any given case, and that (assuming Congress has empowered the agency to use both) it is primarily up to the agency to determine which approach is preferable in any given case.

But, while Chenery II certainly allows agencies to proceed on a case-by-case basis, that permission is not a broad license to eschew the development of determinate legal standards. And the reason is fairly obvious: if an agency develops rules that are difficult to know ex ante, they can hardly provide guidance for private parties as they order their affairs.

Chenery II places an important caveat on the use of case-by-case adjudication. Much like the judges in the LabMD opinion, the Chenery II court was concerned with specificity and clarity, and tells us that agencies may not rely on vague bases for their rules or enforcement actions and expect courts to “chisel” out the details. Rather:

If the administrative action is to be tested by the basis upon which it purports to rest, that basis must be set forth with such clarity as to be understandable. It will not do for a court to be compelled to guess at the theory underlying the agency’s action; nor can a court be expected to chisel that which must be precise from what the agency has left vague and indecisive. In other words, ‘We must know what a decision means before the duty becomes ours to say whether it is right or wrong.’ (emphasis added)

The parallels between the 11th Circuit’s opinion in LabMD and the Supreme Court’s opinion in Chenery II 70 years earlier are uncanny. It is also not very surprising that the 11th Circuit opinion would reflect the principles discussed in Chenery II, nor that it would do so without reference to Chenery II: these are, after all, bedrock principles of administrative law.  

The principles set out in Chenery II, of course, do not answer the data-security law question whether the FTC properly exercised its authority in this (or any) case under Section 5. But they do provide an intelligible basis for the court sidestepping this question, and asking whether the FTC sufficiently defined what it was doing in the first place.  

Conclusion

The FTC’s data security mission has been, in essence, a voyage of public policy exploration. Its method of case-by-case adjudication, based on ill-defined consent decrees, non-binding guidance documents, and broadly worded complaints, creates the vagueness that the Court in Chenery II rejected and that the 11th Circuit held results in unenforceable remedies.

Even in its best light, the Commission’s public materials are woefully deficient as sources of useful (and legally-binding) guidance. In its complaints the FTC does typically mention some of the facts that led it to investigate, and presents some rudimentary details of how those facts relate to its Section 5 authority. Yet the FTC issues complaints based merely on its “reason to believe” that an unfair act has taken place. This is a far different standard than that faced in district court, and undoubtedly leads the Commission to construe facts liberally in its own favor.

Moreover, targets of complaints settle for myriad reasons, and no outside authority need review the sufficiency of a complaint as part of a settlement. And the consent orders themselves are largely devoid of legal and even factual specificity. As a result, the FTC’s authority to initiate an enforcement action is effectively based on an ill-defined series of hunches — hardly a sufficient basis for defining a clear legal standard.

So, while the court’s opinion in this case was narrowly focused on the FTC’s proposed order, the underlying legal analysis that supports its holding should be troubling to the Commission.

The specificity the 11th Circuit demands in the remedial order must exist no less in the theories of harm the Commission alleges against targets. And those theories cannot be based on mere public policy preferences. Courts that follow the Eleventh Circuit’s approach — which indeed Section 5(n) reasonably seems to require — will look more deeply into the Commission’s allegations of “unreasonable” data security in order to determine if it is actually attempting to pursue harms by proving something like negligence, or is instead simply ascribing “unfairness” to certain conduct that the Commission deems harmful.

The FTC may find ways to comply with the letter of this particular opinion without substantially altering its overall approach — but that seems unlikely. True compliance with this decision will require the FTC to respect real limits on its authority and to develop ascertainable data security requirements out of much more than mere consent decrees and kitchen-sink complaints.

A recent exchange between Chris Walker and Philip Hamburger about Walker’s ongoing empirical work on the Chevron doctrine (the idea that judges must defer to reasonable agency interpretations of ambiguous statutes) gives me a long-sought opportunity to discuss what I view as the greatest practical problem with the Chevron doctrine: it increases both politicization and polarization of law and policy. In the interest of being provocative, I will frame the discussion below by saying that both Walker and Hamburger are wrong (though actually I believe both are quite correct in their respective critiques). In particular, I argue that Walker is wrong that Chevron decreases politicization (it actually increases it, notwithstanding his empirics); and I argue Hamburger is wrong that judicial independence is, on its own, a virtue that demands preservation. Rather, I argue, Chevron increases overall politicization across the government; and judicial independence can and should play an important role in checking the legislature’s abdication of its role as the politically accountable branch, in a way that would moderate that overall politicization.

Walker, along with co-authors Kent Barnett and Christina Boyd, has done some of the most important and interesting work on Chevron in recent years, empirically studying how the Chevron doctrine has affected judicial behavior (see here and here) as well as that of agencies (and, I would argue, through them the Executive) (see here). But the more important question, in my mind, is how it affects the behavior of Congress. (Walker has explored this somewhat in his own work, albeit focusing less on Chevron than on how the role agencies play in the legislative process implicitly transfers Congress’s legislative functions to the Executive).

My intuition is that Chevron dramatically exacerbates Congress’s worst tendencies, encouraging Congress to push its legislative functions to the executive and to do so in a way that increases the politicization and polarization of American law and policy. I fear that Chevron effectively allows, and indeed encourages, Congress to abdicate its role as the most politically-accountable branch by deferring politically difficult questions to agencies in ambiguous terms.

One of, and possibly the, best ways to remedy this situation is to reestablish the role of judge as independent decisionmaker, as Hamburger argues. But the virtue of judicial independence is not endogenous to the judiciary. Rather, judicial independence has an instrumental virtue, at least in the context of Chevron. Where Congress has problematically abdicated its role as a politically-accountable decisionmaker by deferring important political decisions to the executive, judicial refusal to defer to executive and agency interpretations of ambiguous statutes can force Congress to remedy problematic ambiguities. This, in turn, can return the responsibility for making politically-important decisions to the most politically-accountable branch, as envisioned by the Constitution’s framers.

A refresher on the Chevron debate

Chevron is one of the defining doctrines of administrative law, both as a central concept and focal debate. It stands generally for the proposition that when Congress gives agencies ambiguous statutory instructions, it falls to the agencies, not the courts, to resolve those ambiguities. Thus, if a statute is ambiguous (the question at “step one” of the standard Chevron analysis) and the agency offers a reasonable interpretation of that ambiguity (“step two”), courts are to defer to the agency’s interpretation of the statute instead of supplying their own.
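For readers who prefer flowcharts, the two-step sequence just described can be sketched schematically. This is a simplification for exposition only, not a statement of law.

```python
# A schematic of the standard Chevron two-step described above.
# A simplification for exposition, not a statement of law.

def chevron(statute_is_ambiguous: bool, agency_reading_is_reasonable: bool) -> str:
    if not statute_is_ambiguous:
        # Step one: an unambiguous statute controls; no deference question arises.
        return "apply the statute's plain meaning"
    if agency_reading_is_reasonable:
        # Step two: defer to the agency's reasonable interpretation.
        return "defer to the agency's interpretation"
    # Ambiguous statute, unreasonable agency reading: the court construes the statute.
    return "court supplies its own construction"

print(chevron(statute_is_ambiguous=True, agency_reading_is_reasonable=True))
```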

This judicially-crafted doctrine of deference is typically justified on several grounds. For instance, agencies generally have greater subject-matter expertise than courts so are more likely to offer substantively better constructions of ambiguous statutes. They have more resources that they can dedicate to evaluating alternative constructions. They generally have a longer history of implementing relevant Congressional instructions so are more likely attuned to Congressional intent – both of the statute’s enacting and present Congresses. And they are subject to more direct Congressional oversight in their day-to-day operations and exercise of statutory authority than the courts so are more likely concerned with and responsive to Congressional direction.

Chief among the justifications for Chevron deference is, as Walker says, “the need to reserve political (or policy) judgments for the more politically accountable agencies.” This is at core a separation-of-powers justification: the legislative process is fundamentally a political process, so the Constitution assigns responsibility for it to the most politically-accountable branch (the legislature) instead of the least politically-accountable branch (the judiciary). In turn, the act of interpreting statutory ambiguity is an inherently legislative process – the underlying theory being that Congress intended to leave such ambiguity in the statute in order to empower the agency to interpret it in a quasi-legislative manner. Thus, under this view, courts should defer to the Congressional intent that the agency be empowered to interpret its statute (and, should this prove problematic, it is up to Congress to change the statute or face the political ramifications), and they should defer to the agency’s interpretation of that statute because agencies, like Congress, are more politically accountable than the courts.

Chevron has always been an intensively studied and debated doctrine. This debate has grown more heated in recent years, to the point that there is regularly scholarly discussion about whether Chevron should be repealed or narrowed and what would replace it if it were somehow curtailed – and discussion of the ongoing vitality of Chevron has entered into Supreme Court opinions and the appointments process with increasing frequency. These debates generally focus on a few issues. A first issue is that Chevron amounts to a transfer of the legislature’s Constitutional powers and responsibilities over creating the law to the executive, where the law ordinarily is only meant to be carried out. The underlying concern is that this has contributed to the growth of the executive’s power relative to the legislature’s. A second, related, issue is that Chevron contributes to the (over)empowerment of independent agencies – agencies that are already out of favor with many of Chevron’s critics as Constitutionally-infirm entities whose already-specious power is dramatically increased when Chevron limits the judiciary’s ability to check their use of already-broad Congressionally-delegated authority.

A third concern about Chevron, following on these first two, is that it strips the judiciary of its role as independent arbiter of judicial questions. That is, it has historically been the purview of judges to resolve statutory ambiguities and fill in legislative interstices.

Chevron is also a focal point for more generalized concerns about the power of the modern administrative state. In this context, Chevron stands as a representative of a broader class of cases – State Farm, Auer, Seminole Rock, Fox v. FCC, and the like – that have been criticized as centralizing legislative, executive, and judicial powers in agencies, allowing Congress to abdicate its role as politically-accountable legislator, displacing the judiciary’s role in interpreting the law, and raising due process concerns for those subject to rules promulgated by federal agencies.

Walker and his co-authors have empirically explored the effects of Chevron in recent years, using robust surveys of federal agencies and judicial decisions to understand how the doctrine has affected the work of agencies and the courts. His most recent work (with Kent Barnett and Christina Boyd) has explored how Chevron affects judicial decisionmaking. Framing the question by explaining that “Chevron deference strives to remove politics from judicial decisionmaking,” they ask whether “Chevron deference achieve[s] this goal of removing politics from judicial decisionmaking?” They find that, empirically speaking, “the Chevron Court’s objective to reduce partisan judicial decision-making has been quite effective.” By instructing judges to defer to the political judgments (or just statutory interpretations) of agencies, judges are less political in their own decisionmaking.

Hamburger responds to this finding somewhat dismissively – and, indeed, the finding is almost tautological: “of course, judges disagree less when the Supreme Court bars them from exercising their independent judgment about what the law is.” (While a fair critique, I would temper it by arguing that it is nonetheless an important empirical finding – empirics that confirm important theory are as important as empirics that refute it, and are too often dismissed.)

Rather than focus on concerns about politicized decisionmaking by judges, Hamburger focuses instead on the importance of judicial independence – on it being “emphatically the duty of the Judicial Department to say what the law is” (quoting Marbury v. Madison). He reframes Walker’s results, arguing that “deference” to agencies is really “bias” in favor of the executive. “Rather than reveal diminished politicization, Walker’s numbers provide strong evidence of diminished judicial independence and even of institutionalized judicial bias.”

So which is it? Does Chevron reduce bias by de-politicizing judicial decisionmaking? Or does it introduce new bias in favor of the (inherently political) executive? The answer is probably that it does both. The more important answer, however, is that neither is the right question to ask.

What’s the correct measure of politicization? (or, You get what you measure)

Walker frames his study of the effects of Chevron on judicial decisionmaking by explaining that “Chevron deference strives to remove politics from judicial decisionmaking. Such deference to the political branches has long been a bedrock principle for at least some judicial conservatives.” Based on this understanding, his project is to ask whether “Chevron deference achieve[s] this goal of removing politics from judicial decisionmaking?”

This framing, that one of Chevron’s goals is to remove politics from judicial decisionmaking, is not wrong. But this goal may be more accurately stated as being to prevent the judiciary from encroaching upon the political purposes assigned to the executive and legislative branches. This restatement offers an important change in focus. It emphasizes the concern about politicizing judicial decisionmaking as a separation of powers issue. This stands in contrast to the consequentialist concern that judges should not make politicized decisions – that is, that judges should avoid political decisions because they lead to substantively worse outcomes.

It is of course true that, as unelected officials with lifetime appointments, judges are the least politically accountable of any government officials. Their decisions, therefore, can reasonably be expected to be less representative of, or responsive to, the concerns of the voting public than the decisions of other government officials. But not all political decisions need to be directly politically accountable in order to be effectively politically accountable. A judicial interpretation of an ambiguous law, for instance, can be understood as a request, or even a demand, that Congress be held to political account. And where Congress is failing to perform its constitutionally-defined role as a politically-accountable decisionmaker, it may do less harm to the separation of powers for the judiciary to make political decisions that force politically-accountable responses by Congress than for the judiciary to respect its constitutional role while Congress ignores its own.

Before going too far down this road, I should pause to label the reframing of the debate that I have impliedly proposed. To my mind, the question isn’t whether Chevron reduces political decisionmaking by judges; the question is how Chevron affects the politicization of – and, ultimately, accountability to the people for – the law. Critically, there is no “conservation of politicization” principle. Institutional design matters. One could imagine a model of government in which Congress exercises very direct oversight over what the law is and how it is implemented, with frequent elections and a constitutional prohibition on all but the most express and limited forms of delegation. One can also imagine a more complicated form of government in which the responsibilities for making law, executing law, and interpreting law are spread across multiple branches (possibly including myriad agencies governed by rules that even many members of those agencies do not understand). And one can reasonably expect greater politicization of decisions in the latter than in the former, because the latter offers more opportunities to say that responsibility for any given decision lies with someone else – and therefore more opportunities for politicization – than the “buck stops here” model of the former.

In the common-law tradition, judges exercised an important degree of independence because their job was, necessarily and largely, to “say what the law is.” For better or worse, we no longer live in a world where judges are expected to routinely exercise that level of discretion, and therefore to have that level of independence. Nor do I believe that “independence” is necessarily or inherently a criterion for the judiciary, at least in principle. I therefore somewhat disagree with Hamburger’s assertion that Chevron necessarily amounts to a problematic diminution in judicial independence.

Again, I return to a consequentialist understanding of the purposes of judicial independence. To my mind, we should consider the need for judicial independence in terms of whether “independent” judicial decisionmaking tends to lead to better or worse social outcomes. And here I do find myself sympathetic to Hamburger’s concerns. The judiciary is intended to serve as a check on the other branches, and Hamburger’s concern about judicial independence is driven by an overwhelmingly correct intuition: the structure envisioned by the Constitution is one in which the independence of judges is an important check on the other branches. With respect to Congress, this means, in part, ensuring that Congress is held to political account when it performs its legislative tasks poorly or fails to perform them at all.

The courts abdicate this role when they allow agencies to save poorly drafted statutes through interpretation of ambiguity.

Judicial independence moderates politicization

Hamburger tells us that “Judges (and academics) need to wrestle with the realities of how Chevron bias and other administrative power is rapidly delegitimizing our government and creating a profound alienation.” Huzzah. Amen. I couldn’t agree more. Preach! Hear, hear!

Allow me to present my personal theory of how Chevron affects our political discourse. In the vernacular, I call this Chevron Step Three. At Step Three, Congress corrects any mistakes made by the executive or independent agencies in implementing the law or made by the courts in interpreting it. The subtle thing about Step Three is that it doesn’t exist – and, knowing this, Congress never bothers with the politically costly and practically difficult process of clarifying legislation.

To the contrary, Chevron encourages the legislature expressly not to legislate. The more expedient approach for a legislator who disagrees with a Chevron-backed agency action is to campaign on the disagreement – that is, to politicize it. If the EPA interprets the Clean Air Act too broadly, we need to retake the White House to get a new administrator in there to straighten out the EPA’s interpretation of the law. If the FCC interprets the Communications Act too narrowly, we need to retake the White House to change the chair so that we can straighten out that mess! And on the other side, we need to keep the White House so that we can protect these right-thinking agency interpretations from reversal by the loons on the other side that want to throw out all of our accomplishments. The campaign slogans write themselves.

So long as most agencies’ governing statutes are broad enough that those agencies can keep the ship of state afloat, even if drifting rudderless, legislators have little incentive to turn inward to engage in the business of government with their legislative peers. Rather, they are freed to turn outward towards their next campaign, vilifying or deifying the administrative decisions of the current government as best suits their electoral prospects.

The sharp-eyed observer will note that I’ve added a piece to the Chevron puzzle: the process described above assumes that a new administration can come in after an election and simply rewrite the rules adopted by the previous administration. Not to put too fine a point on the matter, but this is exactly what administrative law allows (see Fox v. FCC and State Farm). The underlying logic, which is really nothing more than an expansion of Chevron, is that statutory ambiguity delegates to agencies a “policy space” within which they are free to operate. So long as agency action stays within that space – which often allows for diametrically-opposed substantive interpretations – the courts say that it is up to Congress, not the judiciary, to provide course corrections. Anything else would amount to politically unaccountable judges substituting their policy judgments (that is, acting independently) for those of politically-accountable legislators and administrators.

In other words, the politicization of law seen in our current political moment is largely a function of deference combined with a lack of stare decisis. A virtue of stare decisis is that it forces Congress to act to directly address politically undesirable opinions. Because agencies are not bound by stare decisis, an alternative – and politically preferable – way for Congress to remedy problematic agency decisions is to politicize the issue: instead of addressing the substantive policy question through legislation, individual members of Congress can campaign on it. (Regular readers of this blog will be familiar with one contemporary example: the recent net neutrality CRA vote, which is widely recognized as having very little chance of ultimate success but is being championed by its proponents as a way to influence the 2018 elections.) This approach is more directly aligned with an individual member of Congress’s own incentives: by keeping and placing more members of her party in Congress, her party will be able to control the leadership of the agency, and thus the shape of that agency’s policy. In other words, instead of channeling the attention of individual congressional actors inward, to work together to develop law and policy, Chevron channels it outward, toward campaigning on the ills and evils of the opposing administration and party rather than on the virtues of their own.

The virtue of judicial independence – of judges saying what they think the law is, or even what they think the law should be – is that it forces a politically-accountable decision. Congress can either agree or disagree, but Congress must do something. Merely waiting for the next administration to come along will not be sufficient to alter the course set by the judicial interpretation of the law. Where Congress has abdicated its responsibility to make politically-accountable decisions by deferring those decisions to the executive or agencies, the political-accountability justification for Chevron deference fails. In such cases, the better course for the courts may well be to enforce Congress’s role under the separation of powers by refusing deference and returning the question to Congress.