
[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Tim Brennan, (Professor, Economics & Public Policy, University of Maryland; former FCC; former FTC).]

Thinking about how to think about the coronavirus situation, I keep coming back to three economic ideas that seem distinct but end up being related. First, a back-of-the-envelope calculation suggests that shutting down the economy for a while to reduce the spread of Covid-19 is worth it. This leads to my second point: political viability, if not simple fairness, dictates that the winners compensate the losers. The scale of both of these leads to my main point, which is to understand why we can't just "get the prices right" and let the market take care of it. Insisting that the market works in this situation could undercut the very strong arguments for why we should defer to markets in the vast majority of circumstances.

Is taking action worth it?

The first question is whether shutting down the economy to reduce the spread of Covid-19 is a good bet. Being an economist, I turn to benefit-cost analysis (BCA). All I can offer here is a back-of-the-envelope calculation, which may be an insult to envelopes. (This paper has a more serious calculation with qualitatively similar findings.) With all caveats recognized, the willingness to pay of an average person in the US for social distancing and closure policies, WTP, is

        WTP = X% times Y% times VSL,

where X% is the fraction of the population that might be seriously affected, Y% is the reduction in the likelihood of death for this population from these policies, and VSL is the “value of statistical life” used in BCA calculations, in the ballpark of $9.5M.

For X%, take the percentage of the population over 65 (a demographic including me). This is around 16%. I’m not an epidemiologist, so for Y%, the reduced likelihood of death (either from reduced transmission or reduced hospital overload), I can only speculate. Say it’s 1%, which naively seems pretty small. Even with that, the average willingness to pay would be

        WTP = 16% times 1% times $9.5M = $15,200.

Multiplying that by a US population of roughly 330M gives a total national WTP of just over $5 trillion, or about 23% of GDP. Using conventional measures, this looks like a good trade in an aggregate benefit-cost sense, even leaving out willingness to pay to reduce the likelihood of feeling sick and the benefits to those younger than 65. Of course, among the caveats is not just whether to impose distancing and closures, but how long to have them (number of weeks), how severe they should be (gathering size limits, coverage of commercial establishments), and where they should be imposed (closing schools, colleges).
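The arithmetic above is easy to check with a few lines of Python. The GDP figure below is my own assumption (roughly $21.4 trillion, pre-pandemic), used only to reproduce the "about 23%" share; everything else comes straight from the post's inputs.

```python
# Back-of-the-envelope WTP calculation from the post.
x = 0.16            # X%: fraction of US population over 65
y = 0.01            # Y%: assumed reduction in likelihood of death
vsl = 9.5e6         # value of statistical life, ~$9.5M
population = 330e6  # rough US population
gdp = 21.4e12       # assumed pre-pandemic US GDP, for the % figure only

wtp_per_person = x * y * vsl            # ~$15,200
national_wtp = wtp_per_person * population

print(f"Per-person WTP: ${wtp_per_person:,.0f}")
print(f"National WTP:   ${national_wtp / 1e12:.2f} trillion")
print(f"Share of GDP:   {national_wtp / gdp:.0%}")
```

Changing the speculative Y% input is the easiest way to see how sensitive the conclusion is: even at Y% = 0.5%, the national figure remains in the trillions.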

Actual, not just hypothetical, compensation

The justification for using BCA is that the winners could compensate the losers. In the coronavirus setting, the equity considerations are profound. Especially when I remember that GDP is not a measure of consumer surplus, I ask myself how many months of the disruption (and not just lost wages) from unemployment should low-income waiters, cab drivers, hotel cleaners, and the like bear to reduce my over-65 likelihood of dying. 

Consequently, an important component of this policy, both to respect equity and quite possibly to obtain public acceptance, is that the losers be compensated. In that respect, the justification for packages such as the proposal working (as I write) through Congress is not stimulus—after all, it's harder to spend money these days—as much as compensating those who've lost jobs as a result of this policy. Stimulus can come when the economy is ready to be jump-started.

Markets don’t always work, perhaps like now 

This brings me to a final point—why is this a public policy matter? My answer to almost any policy question is the glib “just get the prices right and the market will take care of it.” That doesn’t seem all that popular now. Part of that is the politics of fairness: Should the wealthy get the ventilators? Should hoarding of hand sanitizer be rewarded? But much of it may be a useful reminder that markets do not work seamlessly and instantaneously, and may not be the best allocation mechanism in critical times.

That markets are not always best should be a familiar theme to TOTM readers. The cost of using markets is the centerpiece of Ronald Coase's "The Nature of the Firm" (1937) and of his justification in "The Problem of Social Cost" (1960) for allocation through the courts. Many of us, including me on TOTM, have invoked these arguments to argue against public interventions in the structure of firms, particularly antitrust actions regarding vertical integration. Another common theme is that the common law tends toward efficiency because of the market-like evolutionary processes in property, tort, and contract case law.

This perspective is a useful reminder that the benefits of markets should always be assessed by asking "compared to what?" In one familiar case, the benefits of markets are clear when compared to the snail's pace, limited information, and political manipulability of administrative price setting. But when one is talking about national emergencies, with inelastic demands, distributional consequences, and no time for the price mechanism to work its wonders, one can understand and justify the plethora of mandates currently imposed or contemplated.

The common law also appears not to be a good alternative. One can imagine the litigation nightmare if everyone who got the virus attempted to identify and sue some defendant for damages. A similar nightmare would await if courts were tasked with determining how the risk of a pandemic would have been allocated were contracts ideal.

Much of this may be belaboring the obvious. My concern is that if those of us who appreciate the virtues of markets exaggerate their applicability, those skeptical of markets may use this episode to say that markets inherently fail and more of the economy should be publicly administered. Better to rely on facts rather than ideology, and to regard the current situation as the awful but justifiable exception that proves the general rule.

Last June, in Michigan v. EPA, the Supreme Court commendably recognized cost-benefit analysis as critical to any reasoned evaluation of regulatory proposals by federal agencies. (For more on the merits and limitations of this holding, see my June 29 blog.) The White House (Office of Management and Budget) office that evaluates proposed federal regulations, the Office of Information and Regulatory Affairs (OIRA), does not, however, currently assess independent agencies' regulations (the Heritage Foundation has argued that independent agencies should be subjected to Executive Branch regulatory review). This is most unfortunate, because the economic impact of independent agencies' regulations (such as those promulgated by the Federal Communications Commission and the Consumer Financial Protection Bureau, among many other "independent" entities) is enormous.

Recent research lends strong support to the case for OIRA review of independent agency regulations.  As former OIRA Administrator Susan Dudley (currently Director of the George Washington University Regulatory Studies Center) explained in recent testimony before the Senate Homeland Security and Government Affairs Committee, independent agencies have done an extremely poor job in evaluating the economic effects of their regulatory initiatives:

“The Administrative Conference of the United States recommended in 2013 that independent regulatory agencies adopt more transparent and rigorous regulatory analyses practices for major rules.  OIRA observed in its most recent regulatory report to Congress that “the independent agencies still continue to struggle in providing monetized estimates of benefits and costs of regulation.”  According to available government data, more than 40 percent of the rules developed by independent agencies over the last 10 years provided no information on either the costs or the benefits expected from their implementation.”

This poor record provides strong justification for legislative proposals such as the Independent Agency Regulatory Analysis Act of 2015 (S. 1607), which would explicitly authorize presidents to require independent regulatory agencies to comply with regulatory analysis requirements. It also lends further support to congressional proposals (such as the REINS Act, which passed the House in August 2015) that would require congressional approval of new "major" regulations promulgated by federal agencies, including independent agencies. For a more extensive discussion of the costs of overregulation and needed regulatory reforms, see the Heritage Foundation's memorandum "Red Tape Rising: Six Years of Escalating Regulation Under Obama."

There is also a substantial constitutional argument that pursuant to the U.S. Constitution’s Executive Vesting Clause (Article II, Section 1, Clause 1) and Take Care Clause (Article II, Section 3), the President could direct that OIRA review independent agencies’ regulatory proposals, but an assessment of that interesting proposition is beyond the scope of this commentary.

As the organizer of this retrospective on Josh Wright’s tenure as FTC Commissioner, I have the (self-conferred) honor of closing out the symposium.

When Josh was confirmed I wrote that:

The FTC will benefit enormously from Josh’s expertise and his error cost approach to antitrust and consumer protection law will be a tremendous asset to the Commission — particularly as it delves further into the regulation of data and privacy. His work is rigorous, empirically grounded, and ever-mindful of the complexities of both business and regulation…. The Commissioners and staff at the FTC will surely… profit from his time there.

Whether others at the Commission have really learned from Josh is an open question, but there’s no doubt that Josh offered an enormous amount from which they could learn. As Tim Muris said, Josh “did not disappoint, having one of the most important and memorable tenures of any non-Chair” at the agency.

Within a month of his arrival at the Commission, in fact, Josh “laid down the cost-benefit-analysis gauntlet” in a little-noticed concurring statement regarding a proposed amendment to the Hart-Scott-Rodino Rules. The technical details of the proposed rule don’t matter for these purposes, but, as Josh noted in his statement, the situation intended to be avoided by the rule had never arisen:

The proposed rulemaking appears to be a solution in search of a problem. The Federal Register notice states that the proposed rules are necessary to prevent the FTC and DOJ from “expend[ing] scarce resources on hypothetical transactions.” Yet, I have not to date been presented with evidence that any of the over 68,000 transactions notified under the HSR rules have required Commission resources to be allocated to a truly hypothetical transaction.

What Josh asked for in his statement was not that the rule be scrapped, but simply that, before adopting the rule, the FTC weigh its costs and benefits.

As I noted at the time:

[I]t is the Commission’s responsibility to ensure that the rules it enacts will actually be beneficial (it is a consumer protection agency, after all). The staff, presumably, did a perfectly fine job writing the rule they were asked to write. Josh’s point is simply that it isn’t clear the rule should be adopted because it isn’t clear that the benefits of doing so would outweigh the costs.

As essentially everyone who has contributed to this symposium has noted, Josh was singularly focused on the rigorous application of the deceptively simple concept that the FTC should ensure that the benefits of any rule or enforcement action it adopts outweigh the costs. The rest, as they say, is commentary.

For Josh, this basic principle should permeate every aspect of the agency, and permeate the way it thinks about everything it does. Only an entirely new mindset can ensure that outcomes, from the most significant enforcement actions to the most trivial rule amendments, actually serve consumers.

While the FTC has a strong tradition of incorporating economic analysis in its antitrust decision-making, its record in using economics in other areas is decidedly mixed, as Berin points out. But even in competition policy, the Commission frequently uses economics — but it’s not clear it entirely understands economics. The approach that others have lauded Josh for is powerful, but it’s also subtle.

Inherent limitations on anyone’s knowledge about the future of technology, business and social norms caution skepticism, as regulators attempt to predict whether any given business conduct will, on net, improve or harm consumer welfare. In fact, a host of factors suggests that even the best-intentioned regulators tend toward overconfidence and the erroneous condemnation of novel conduct that benefits consumers in ways that are difficult for regulators to understand. Coase’s famous admonition in a 1972 paper has been quoted here before (frequently), but bears quoting again:

If an economist finds something – a business practice of one sort or another – that he does not understand, he looks for a monopoly explanation. And as in this field we are very ignorant, the number of ununderstandable practices tends to be very large, and the reliance on a monopoly explanation, frequent.

Simply “knowing” economics, and knowing that it is important to antitrust enforcement, aren’t enough. Reliance on economic formulae and theoretical models alone — to say nothing of “evidence-based” analysis that doesn’t or can’t differentiate between probative and prejudicial facts — doesn’t resolve the key limitations on regulatory decisionmaking that threaten consumer welfare, particularly when it comes to the modern, innovative economy.

As Josh and I have written:

[O]ur theoretical knowledge cannot yet confidently predict the direction of the impact of additional product market competition on innovation, much less the magnitude. Additionally, the multi-dimensional nature of competition implies that the magnitude of these impacts will be important as innovation and other forms of competition will frequently be inversely correlated as they relate to consumer welfare. Thus, weighing the magnitudes of opposing effects will be essential to most policy decisions relating to innovation. Again, at this stage, economic theory does not provide a reliable basis for predicting the conditions under which welfare gains associated with greater product market competition resulting from some regulatory intervention will outweigh losses associated with reduced innovation.

* * *

In sum, the theoretical and empirical literature reveals an undeniably complex interaction between product market competition, patent rules, innovation, and consumer welfare. While these complexities are well understood, in our view, their implications for the debate about the appropriate scale and form of regulation of innovation are not.

Along the most important dimensions, while our knowledge has expanded since 1972, the problem has not disappeared — and it may only have magnified. As Tim Muris noted in 2005,

[A] visitor from Mars who reads only the mathematical IO literature could mistakenly conclude that the U.S. economy is rife with monopoly power…. [Meanwhile, Section 2’s] history has mostly been one of mistaken enforcement.

It may not sound like much, but what is needed, what Josh brought to the agency, and what turns out to be absolutely essential to getting it right, is unflagging awareness of and attention to the institutional, political and microeconomic relationships that shape regulatory institutions and regulatory outcomes.

Regulators must do their best to constantly grapple with uncertainty, problems of operationalizing useful theory, and, perhaps most important, the social losses associated with error costs. It is not (just) technicians that the FTC needs; it’s regulators imbued with the “Economic Way of Thinking.” In short, what is needed, and what Josh brought to the Commission, is humility — the belief that, as Coase also wrote, sometimes the best answer is to “do nothing at all.”

The technocratic model of regulation is inconsistent with the regulatory humility required in the face of fast-changing, unexpected — and immeasurably valuable — technological advance. As Virginia Postrel warns in The Future and Its Enemies:

Technocrats are “for the future,” but only if someone is in charge of making it turn out according to plan. They greet every new idea with a “yes, but,” followed by legislation, regulation, and litigation…. By design, technocrats pick winners, establish standards, and impose a single set of values on the future.

For Josh, the first JD/Econ PhD appointed to the FTC,

economics provides a framework to organize the way I think about issues beyond analyzing the competitive effects in a particular case, including, for example, rulemaking, the various policy issues facing the Commission, and how I weigh evidence relative to the burdens of proof and production. Almost all the decisions I make as a Commissioner are made through the lens of economics and marginal analysis because that is the way I have been taught to think.

A representative example will serve to illuminate the distinction between merely using economics and evidence and understanding them — and their limitations.

In his Nielsen/Arbitron dissent, Josh wrote:

The Commission thus challenges the proposed transaction based upon what must be acknowledged as a novel theory—that is, that the merger will substantially lessen competition in a market that does not today exist.

[W]e… do not know how the market will evolve, what other potential competitors might exist, and whether and to what extent these competitors might impose competitive constraints upon the parties.

Josh’s straightforward statement of the basis for restraint stands in marked contrast to the majority’s decision to impose antitrust-based limits on economic activity that hasn’t even yet been contemplated. Such conduct is directly at odds with a sensible, evidence-based approach to enforcement, and the economic problems with it are considerable, as Josh also notes:

[I]t is an exceedingly difficult task to predict the competitive effects of a transaction where there is insufficient evidence to reliably answer the[] basic questions upon which proper merger analysis is based.

When the Commission’s antitrust analysis comes unmoored from such fact-based inquiry, tethered tightly to robust economic theory, there is a more significant risk that non-economic considerations, intuition, and policy preferences influence the outcome of cases.

Compare in this regard Josh’s words about Nielsen with Deborah Feinstein’s defense of the majority from such charges:

The Commission based its decision not on crystal-ball gazing about what might happen, but on evidence from the merging firms about what they were doing and from customers about their expectations of those development plans. From this fact-based analysis, the Commission concluded that each company could be considered a likely future entrant, and that the elimination of the future offering of one would likely result in a lessening of competition.

Instead of requiring rigorous economic analysis of the facts, couched in an acute awareness of our necessary ignorance about the future, for Feinstein the FTC fulfilled its obligation in Nielsen by considering the "facts" alone (not economic evidence, mind you, but customer statements and expressions of intent by the parties) and then, at best, casually applying to them the simplistic, outdated structural presumption: the conclusion that increased concentration leads inexorably to anticompetitive harm. Her implicit claim is that all the Commission needed to know about the future was what the parties thought about what they were doing and what (hardly disinterested) customers thought they were doing. That shouldn't be nearly enough.

Worst of all, Nielsen was “decided” with a consent order. As Josh wrote, strongly reflecting the essential awareness of the broader institutional environment that he brought to the Commission:

[w]here the Commission has endorsed by way of consent a willingness to challenge transactions where it might not be able to meet its burden of proving harm to competition, and which therefore at best are competitively innocuous, the Commission’s actions may alter private parties’ behavior in a manner that does not enhance consumer welfare.

Obviously in this regard his successful effort to get the Commission to adopt a UMC enforcement policy statement is a most welcome development.

In short, Josh is to be applauded not because he brought economics to the Commission, but because he brought the economic way of thinking. Such a thing is entirely too rare in the modern administrative state. Josh’s tenure at the FTC was relatively short, but he used every moment of it to assiduously advance his singular, and essential, mission. And, to paraphrase the last line of the movie The Right Stuff (it helps to have the rousing film score playing in the background as you read this): “for a brief moment, [Josh Wright] became the greatest [regulator] anyone had ever seen.”

I would like to extend my thanks to everyone who participated in this symposium. The contributions here will stand as a fitting and lasting tribute to Josh and his legacy at the Commission. And, of course, I’d also like to thank Josh for a tenure at the FTC very much worth honoring.

Today, in Michigan v. EPA, a five-Justice Supreme Court majority (Antonin Scalia, joined by Chief Justice John Roberts and Justices Anthony Kennedy, Clarence Thomas, and Samuel Alito, with Thomas issuing a separate concurrence) held that the Clean Air Act requires the Environmental Protection Agency (EPA) to consider costs, including the cost of compliance, when deciding whether to regulate hazardous air pollutants emitted by power plants. The Clean Air Act, 42 U.S.C. §7412, authorizes the EPA to regulate emissions of hazardous air pollutants from certain stationary sources, such as refineries and factories. The EPA may, however, regulate power plants under this program only if it concludes that such regulation is "appropriate and necessary" after studying hazards to public health posed by power-plant emissions, 42 U.S.C. §7412(n)(1)(A). The EPA determined that it was "appropriate and necessary" to regulate oil- and coal-fired power plants, because the plants' emissions pose risks to public health and the environment and because controls capable of reducing these emissions were available. (The EPA contended that its regulations would have ancillary benefits, including cutting power plants' emissions of particulate matter and sulfur dioxide, not covered by the hazardous-air-pollutants program, but conceded that its estimate of benefits "played no role" in its finding that regulation was "appropriate and necessary.") The EPA refused to consider costs when deciding to regulate, even though it estimated that the cost of its regulations to power plants would be $9.6 billion a year, while the quantifiable benefits from the resulting reduction in hazardous-air-pollutant emissions would be only $4 to $6 million a year.

Twenty-three states challenged the EPA's refusal to consider cost, but the U.S. Court of Appeals for the D.C. Circuit upheld the agency's decision not to consider costs at the outset. In reversing the D.C. Circuit, the Court stressed that the EPA strayed well beyond the bounds of reasonable interpretation in concluding that cost is not a factor relevant to the appropriateness of regulating power plants. Read naturally against the backdrop of established administrative law, the phrase "appropriate and necessary" plainly encompasses cost, according to the Court.
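The sheer scale of the cost-benefit disparity the EPA conceded is worth pausing over. The dollar figures below come from the opinion as described above; computing the ratio is my own illustration.

```python
# Michigan v. EPA: EPA's own estimates of annual compliance cost
# versus the quantifiable benefits of the hazardous-air-pollutant rule.
annual_cost = 9.6e9                     # $9.6 billion per year
benefit_low, benefit_high = 4e6, 6e6    # $4 to $6 million per year

ratio_low = annual_cost / benefit_high   # most favorable to the rule
ratio_high = annual_cost / benefit_low   # least favorable to the rule
print(f"Costs exceed quantified benefits by roughly "
      f"{ratio_low:,.0f}x to {ratio_high:,.0f}x")
```

On the agency's own numbers, costs exceed quantified benefits by a factor of roughly 1,600 to 2,400, which is what made the refusal to consider cost so striking.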

In a concurring opinion, Justice Thomas opined that this case "raises serious questions about the constitutionality of our broader practice of deferring to agency interpretations of federal statutes."  Justice Elena Kagan, joined by Justices Ruth Bader Ginsburg, Stephen Breyer, and Sonia Sotomayor, dissented, reasoning that EPA "acted well within its authority in declining to consider costs at the [beginning] . . . of the regulatory process given that it would do so in every round thereafter."

Although the Supreme Court's holding merits praise, it is inherently limited in scope, and should not be expected to significantly constrain regulatory overreach, whether by the EPA or by other agencies. First, in remanding the case, the Court did not opine on the precise manner in which costs and benefits should be evaluated, potentially leaving EPA broad latitude to try to reach its desired regulatory result with a bit of "cost-benefit" wordsmithing. Such a result would not be surprising, given that "[t]he U.S. Government has a strong tendency to overregulate." More specifically, administrative agencies such as EPA, whose staffs are dominated by regulatorily-minded permanent bureaucrats, will have every incentive to skew judicially-required "cost assessments" to justify their actions – based on, for example, "false assumptions and linkages, black-box computer models, secretive collusion with activist groups, outright deception, and supposedly 'scientific' reports whose shady data and methodologies the agency refuses to share with industries, citizens or even Congress." Since, as a practical matter, appellate courts have neither the resources nor the capacity to sort out legitimate from illegitimate agency claims that regulatory programs truly meet cost-benefit standards, it would be naïve to believe that the Court's majority opinion will be able to do much to rein in the federal regulatory behemoth.

What, then, is the solution?  The concern that federal administrative agencies are being allowed to arrogate to themselves inherently executive and judicial functions, a theme previously stressed by Justice Thomas, has not led other justices to call for wide-scale judicial nullification or limitation of expansive agency regulatory findings.  Absent an unexpected Executive Branch epiphany, then, the best bet for reform lies primarily in congressional action.

What sort of congressional action?  The Heritage Foundation has described actions needed to help stem the tide of overregulation:  (1) require congressional approval of new major regulations promulgated by agencies; (2) establish a sunset date for federal regulations; (3) subject "independent" agencies to executive branch regulatory review; and (4) develop a congressional regulatory analysis capability.  Legislative proposals such as the REINS Act (Regulations from the Executive in Need of Scrutiny Act of 2015) would meet the first objective, while other discrete measures could advance the other three goals.  Public choice considerations suggest that these reforms will not be easily achieved (beneficiaries of the intrusive regulatory status quo may be expected to vigorously oppose reform), but they nevertheless should be pursued posthaste.

As I mentioned in my previous post, there is a strong effort to regulate the use of information on the web in the name of "privacy." The basic tradeoff that drives the web is that firms use information for advertising and other purposes, and in return consumers get lots of things free.  Google alone offers about 40 free services, including the original search engine, Gmail, Maps, and the increasingly popular Android operating system for mobile devices. Facebook is another set of free services. There are hundreds of others, all ultimately funded by advertising and the use of information.  Any effort to regulate information is going to change the terms at which these services are offered.

To justify regulation, two conditions must be met.  First, there must be some market failure.  Second, there must be at least an expectation that the benefits of the proposed regulation will outweigh the costs.  In a market economy, we generally put the burden of proof on those proposing regulation, since the default assumption is that markets provide net benefits.  Proponents of regulating the use of information on the internet have met neither of these burdens.

One main justification for regulation is that people do not want to be tracked. I discussed this issue in my previous post.  Let me just add that, while people express a desire not to be tracked, in practice they seem quite willing to trade information for other services.  The other issue is identity theft — the possibility that information will be misused for illegitimate purposes.  Tom Lenard and I have written extensively about this issue. The bottom line, however, is that consumers are not liable for much if any of the costs of identity theft, and since firms must bear these costs there is no obvious market failure.

With respect to the second issue, there has been virtually no effort to undertake any cost-benefit analysis of the proposed regulations.  However, if there were such an analysis, it is unlikely that the regulations would be cost-justified, since the benefits of the free stuff are huge and the costs are small at best.  While it is conceivable that some tweaking would pass a cost-benefit test, it is very unlikely that any regulation which could get through the political process and then be administered by an agency such as the FTC would in fact pass this test.  Moreover, the proposed regulations, such as a "do not track" list or shifting from opt-out to opt-in, are well beyond "tweaking" and might fundamentally change the terms of the tradeoff.

The bottom line is this:  Privacy advocates act as if privacy is free.  But increased privacy means reduced use of information, and no one has shown that altering the terms of this tradeoff would be beneficial to consumers.