
Antitrust populists have a long list of complaints about competition policy, including: laws aren’t broad enough or tough enough, enforcers are lax, and judges tend to favor defendants over plaintiffs or government agencies. The populist push got a bump with the New York Times coverage of Lina Khan’s “Amazon’s Antitrust Paradox” in which she advocated breaking up Amazon and applying public utility regulation to platforms. Khan’s ideas were picked up by Sen. Elizabeth Warren, who has a plan for similar public utility regulation and promised to unwind earlier acquisitions by Amazon (Whole Foods and Zappos), Facebook (WhatsApp and Instagram), and Google (Waze, Nest, and DoubleClick).

Khan, Warren, and the other Break Up Big Tech populists don’t clearly articulate how consumers, suppliers — or anyone, for that matter — would be better off with their mandated spinoffs. What’s more, the Khan/Warren plan requires a unique alignment of many factors: Warren must win the White House, Democrats must control both houses of Congress, and judges must substantially shift their thinking. It’s like turning a supertanker on a dime in the middle of a storm. Instead of publishing manifestos and engaging in antitrust hashtag hipsterism, maybe — just maybe — the populists can do something.

The populists seem to have three main grievances:

  • Small firms cannot enter the market or cannot thrive once they enter;
  • Suppliers, including workers, are getting squeezed; and
  • Speculation that someday firms will wake up, realize they have a monopoly, and begin charging noncompetitive prices to consumers.

Each of these grievances can be, and has been, addressed by antitrust and competition litigation — in many cases, by private antitrust litigation. For example:

In the US, private actions are available for a wide range of alleged anticompetitive conduct, including coordinated conduct (e.g., price-fixing), single-firm conduct (e.g., predatory pricing), and mergers that would substantially lessen competition. 

If the antitrust populists are so confident that concentration is rising and firms are behaving anticompetitively and consumers/suppliers/workers are being harmed, then why don’t they organize an antitrust lawsuit against the worst of the worst violators? If anticompetitive activity is so obvious and so pervasive, finding compelling cases should be easy.

For example, earlier this year, Shaoul Sussman, a law student at Fordham University, published “Prime Predator: Amazon and the Rationale of Below Average Variable Cost Pricing Strategies Among Negative-Cash Flow Firms” in the Journal of Antitrust Enforcement. Why not put Sussman’s theory to the test by building an antitrust case around it? The discovery process would unleash a treasure trove of cost data and probably more than a few “hot docs.”

Khan argues:

While predatory pricing technically remains illegal, it is extremely difficult to win predatory pricing claims because courts now require proof that the alleged predator would be able to raise prices and recoup its losses. 

However, in her criticism of the court in the Apple e-books litigation, she lays out a clear rationale for courts to revise their thinking on predatory pricing [emphasis added]:

Judge Cote, who presided over the district court trial, refrained from affirming the government’s conclusion. Still, the government’s argument illustrates the dominant framework that courts and enforcers use to analyze predation—and how it falls short. Specifically, the government erred by analyzing the profitability of Amazon’s e-book business in the aggregate and by characterizing the conduct as “loss leading” rather than potentially predatory pricing. These missteps suggest a failure to appreciate two critical aspects of Amazon’s practices: (1) how steep discounting by a firm on a platform-based product creates a higher risk that the firm will generate monopoly power than discounting on non-platform goods and (2) the multiple ways Amazon could recoup losses in ways other than raising the price of the same e-books that it discounted.

Why not put Khan’s cross-subsidy theory to the test by building an antitrust case around it? Surely there’d be a document explaining how the firm expects to recoup its losses. Or, maybe not. Maybe by the firm’s accounting, it’s not losing money on the discounted products. Without evidence, it’s just speculation.

In fairness, one can argue that recent court decisions have made pursuing private antitrust litigation more difficult. For example, the Supreme Court’s decision in Twombly requires an antitrust plaintiff to show more than mere speculation based on circumstantial evidence in order to move forward to discovery. Decisions in matters such as Ashcroft v. Iqbal have made it more difficult for plaintiffs to maintain antitrust claims. Wal-Mart v. Dukes and Comcast Corp. v. Behrend subject antitrust class actions to more rigorous analysis. And in Ohio v. Amex, the Court ruled that antitrust plaintiffs can’t meet their burden of proof by showing only some effect on some part of a two-sided market.

At the same time, Jeld-Wen indicates that third-party plaintiffs can be awarded damages and obtain divestitures, even after mergers clear. In Jeld-Wen, a competitor filed suit to challenge the consummated Jeld-Wen/Craftmaster merger four years after the DOJ approved the merger without conditions. The challenge was lengthy but successful, and a district court ordered damages and the divestiture of one of the combined firm’s manufacturing facilities six years after the merger closed.

Despite the possible challenges of pursuing a private antitrust suit, Daniel Crane’s review of US federal court workload statistics concludes that the incidence of private antitrust enforcement in the United States has been relatively stable since the mid-1980s — in the range of 600 to 900 new private antitrust filings a year. He also finds that the share of cases resolved by trial has been stable, averaging less than 1 percent a year. Thus, it’s not clear that recent decisions have erected insurmountable barriers to antitrust plaintiffs.

In the US, third parties may fund private antitrust litigation and plaintiffs’ attorneys are allowed to work under a contingency fee arrangement, subject to court approval. A compelling case could be funded by deep-pocketed supporters of the populists’ agenda, big tech haters, or even investors. Perhaps the most well-known example is Peter Thiel’s bankrolling of Hulk Hogan’s takedown of Gawker. Before that, the savings and loan crisis led to a number of forced mergers which were later challenged in court, with the costs partially funded by the issuance of litigation tracking warrants.

The antitrust populist ranks are chock-a-block with economists, policy wonks, and go-getter attorneys. If they are so confident in their claims of rising concentration, bad behavior, and harm to consumers, suppliers, and workers, then they should put those ideas to the test with some slam dunk litigation. The fact that they haven’t suggests they may not have a case.

Wall Street Journal commentator Greg Ip reviews Thomas Philippon’s forthcoming book, The Great Reversal: How America Gave Up On Free Markets. Ip describes a “growing mountain” of research on industry concentration in the U.S. and reports that Philippon concludes competition has declined over time, harming U.S. consumers.

In one example, Philippon points to air travel. He notes that concentration in the U.S. has increased rapidly—spiking since the Great Recession—while concentration in the EU has increased modestly. At the same time, Ip reports “U.S. airlines are now far more profitable than their European counterparts.” (Although it’s debatable whether a five-percentage-point difference in net profit margin is “far more profitable.”)

On first impression, the figures fit nicely with the populist antitrust narrative: As concentration in the U.S. grew, so did profit margins. Closer inspection raises some questions, however. 

For example, the U.S. airline industry had a negative net profit margin in each of the years prior to the spike in concentration. While negative profits may be good for consumers, it would be a stretch to argue that long-run losses are good for competition as a whole. At some point, one or more of the money-losing firms is going to pull the ripcord. Which raises the issue of causation.

Just looking at the figures from the WSJ article, one could argue that rather than concentration driving profit margins, instead profit margins are driving concentration. Indeed, textbook IO economics would indicate that in the face of losses, firms will exit until economic profit equals zero. Paraphrasing Alfred Marshall, “Which blade of the scissors is doing the cutting?”

While the concentration and profits story fits the antitrust populist narrative, other observations run contrary to Philippon’s conclusion. For example, price indexes show that U.S. and EU airline prices tracked each other fairly closely until 2014, when U.S. prices began dropping. Sure, airlines have instituted baggage fees, but the CPI includes taxes, fuel surcharges, airport, security, and baggage fees. It’s not obvious that U.S. consumers are worse off in the so-called era of rising concentration.

Regressing the U.S. airfare price index against Philippon’s concentration measures (and controlling for general inflation) finds that if U.S. concentration in 2015 had been the same as in 1995, U.S. airfares would be about 2.8% lower. That a 1,250-point increase in HHI is associated with only a 2.8% increase in prices indicates that the increased concentration among U.S. airlines has led to no economically significant increase in consumer prices.
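To put that magnitude in perspective, here is the back-of-envelope arithmetic (my own, not a figure from the WSJ or Philippon):

```latex
\Delta \ln P \approx \beta \, \Delta HHI
\quad\Rightarrow\quad
\beta \approx \frac{0.028}{1{,}250} \approx 2.2 \times 10^{-5}
```

That is, each 100-point increase in HHI is associated with roughly a 0.2% increase in fares, a very small effect by any standard.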

Also, if consumers are truly worse off, one would expect to see a drop-off or slowdown in the use of air travel. Eyeballing the passenger data does not support the populist narrative. Instead, we see airlines are carrying more passengers and consumers are paying lower prices on average.

While it’s true that low-cost airlines have shaken up air travel in the EU, the differences are not solely explained by differences in market concentration. For example, U.S. regulations prohibit foreign airlines from operating domestic flights while EU carriers compete against operators from other parts of Europe. While the WSJ’s figures tell an interesting story of concentration, prices, and profits, they do not provide a compelling case of anticompetitive conduct.

In the Federal Trade Commission’s recent hearings on competition policy in the 21st century, Georgetown professor Steven Salop urged greater scrutiny of vertical mergers. He argued that regulators should be skeptical of the claim that vertical integration tends to produce efficiencies that can enhance consumer welfare. In his presentation to the FTC, Professor Salop provided what he viewed as exceptions to this long-held theory.

Also, vertical merger efficiencies are not inevitable. I mean, vertical integration is common, but so is vertical non-integration. There is an awful lot of companies that are not vertically integrated. And we have lots of examples in which vertical integration has failed. Pepsi’s acquisition of KFC and Pizza Hut; you know, of course Coca-Cola has not merged with McDonald’s . . . .

Aside from the logical fallacy of cherry-picking examples (he also includes Betamax/VHS and the split-up of Alcoa and Arconic, as well as “integration and disintegration” “in cable”), Professor Salop misses the fact that PepsiCo’s 20-year venture into restaurants had very little to do with vertical integration.

Popular folklore says PepsiCo got into fast food because it was looking for a way to lock up sales of its fountain sodas. Soda is considered one of the highest margin products sold by restaurants. Vertical integration by a soda manufacturer into restaurants would eliminate double marginalization with the vertically integrated firm reaping most of the gains. The folklore fits nicely with economic theory. But, the facts may not fit the theory.
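For reference, the theory being invoked is standard. A stylized linear-demand sketch (my own illustration, not anything from PepsiCo’s filings) shows why eliminating the second markup looks so attractive:

```latex
\text{Demand } P = a - bQ,\ \text{upstream marginal cost } c:
\\[4pt]
\text{Integrated: } Q_I = \frac{a-c}{2b},\quad P_I = \frac{a+c}{2},\quad \Pi_I = \frac{(a-c)^2}{4b}
\\[4pt]
\text{Separate: upstream sets } w^* = \frac{a+c}{2},\ \text{downstream then sets } P_S = \frac{3a+c}{4} > P_I,
\\[4pt]
Q_S = \frac{a-c}{4b} < Q_I,\qquad \Pi_S = \frac{3(a-c)^2}{16b} < \Pi_I
```

Integration removes the double markup: output rises, the retail price falls, and combined profit grows. The trouble, as the rest of the story shows, is that this tidy logic may not be what actually motivated PepsiCo.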

PepsiCo acquired Pizza Hut in 1977, Taco Bell in 1978, and Kentucky Fried Chicken in 1986. Prior to PepsiCo’s purchase, KFC had been owned by spirits company Heublein and conglomerate RJR Nabisco. This was the period of conglomerates—Pillsbury owned Burger King and General Foods owned Burger Chef (or maybe they were vertically integrated into bun distribution).

In the early 1990s Pepsi also bought California Pizza Kitchen, Chevys Fresh Mex, and D’Angelo Grilled Sandwiches.

In 1997, PepsiCo exited the restaurant business. It spun off Pizza Hut, Taco Bell, and KFC to Tricon Global Restaurants, which would later be renamed Yum! Brands. CPK and Chevy’s were purchased by private equity investors. D’Angelo was sold to Papa Gino’s Holdings, a restaurant chain. Since then, both Chevy’s and Papa Gino’s have filed for bankruptcy and Chevy’s has had some major shake-ups.

Professor Salop’s story focuses on the spin-off as an example of the failure of vertical mergers. But there is also a story of success. PepsiCo was in the restaurant business for two decades. More importantly, it continued its restaurant acquisitions over time. If PepsiCo’s restaurants strategy was a failure, it seems odd that the company would continue acquisitions into the early 1990s.

It’s easy, and largely correct, to conclude that PepsiCo’s restaurant acquisitions involved some degree of vertical integration, with upstream PepsiCo selling beverages to downstream restaurants. At the time PepsiCo bought Kentucky Fried Chicken, the New York Times reported KFC was Coke’s second-largest fountain account, behind McDonald’s.

But, what if vertical efficiencies were not the primary reason for the acquisitions?

Growth in U.S. carbonated beverage sales began slowing in the 1970s. It was also the “decade of the fast-food business.” From 1971 to 1977, Pizza Hut’s profits grew an average of 40% per year. Colonel Sanders sold his ownership in KFC for $2 million in 1964. Seven years later, the company was sold to Heublein for $280 million; PepsiCo paid $850 million in 1986.

Although KFC was Coke’s second largest customer at the time, about 20% of KFC’s stores served Pepsi products. “PepsiCo stressed that the major reason for the acquisition was to expand its restaurant business, which last year accounted for 26 percent of its revenues of $8.1 billion,” according to the New York Times.

Viewed in this light, portfolio diversification goes a much longer way toward explaining PepsiCo’s restaurant purchases than hoped-for vertical efficiencies. In 1997, former PepsiCo chairman Roger Enrico explained to investment analysts that the company entered the restaurant business in the first place, “because it didn’t see future growth in its soft drink and snack” businesses and thought diversification into restaurants would provide expansion opportunities.

Prior to its Pizza Hut and Taco Bell acquisitions, PepsiCo owned companies as diverse as Frito-Lay, North American Van Lines, Wilson Sporting Goods, and Rheingold Brewery. This further supports a diversification theory rather than a vertical integration theory of PepsiCo’s restaurant purchases. 

The mid-1990s and early 2000s were tough times for restaurants. Consumers were demanding healthier foods, and fast food was considered the worst of the worst. This was when Kentucky Fried Chicken rebranded as KFC. Debt hangovers from the leveraged buyout era added financial pressure. Many restaurant groups were filing for bankruptcy and competition intensified among fast food companies. PepsiCo’s restaurants could not cover their cost of capital, and what was once a profitable diversification strategy became a financial albatross, so the restaurants were spun off.

Thus, it seems more reasonable to conclude PepsiCo’s exit from restaurants was driven more by market exigencies than by a failure to achieve vertical efficiencies. While the folklore of locking up distribution channels to eliminate double marginalization fits nicely with theory, the facts suggest a more mundane model of a firm scrambling to deliver shareholder wealth through diversification in the face of changing competition.

The Department of Justice announced it has approved the $26 billion T-Mobile/Sprint merger. Once completed, the deal will create a mobile carrier with around 136 million customers in the U.S., putting it just behind Verizon (158 million) and AT&T (156 million).

While all the relevant federal government agencies have now approved the merger, it still faces a legal challenge from state attorneys general. At the very least, this challenge is likely to delay the merger; if successful, it could scupper it. In this blog post, we evaluate the state AGs’ claims (and find them wanting).

Four firms good, three firms bad?

The state AGs’ opposition to the T-Mobile/Sprint merger is based on a claim that a competitive mobile market requires four national providers, as articulated in their redacted complaint:

The Big Four MNOs [mobile network operators] compete on many dimensions, including price, network quality, network coverage, and features. The aggressive competition between them has resulted in falling prices and improved quality. The competition that currently takes place across those dimensions, and others, among the Big Four MNOs would be negatively impacted if the Merger were consummated. The effects of the harm to competition on consumers will be significant because the Big Four MNOs have wireless service revenues of more than $160 billion.

. . . 

Market consolidation from four to three MNOs would also serve to increase the possibility of tacit collusion in the markets for retail mobile wireless telecommunications services.

But there are no economic grounds for the assertion that a four-firm industry is on a competitive tipping point. Four is an arbitrary number, offered up in order to squelch any further concentration in the industry.

A proper assessment of this transaction—as well as any other telecom merger—requires accounting for the specific characteristics of the markets affected by the merger. The accounting would include, most importantly, the dynamic, fast-moving nature of competition and the key role played by high fixed costs of production and economies of scale. This is especially important given the expectation that the merger will facilitate the launch of a competitive, national 5G network.

Opponents claim this merger takes us from four to three national carriers. But Sprint was never a serious participant in the launch of 5G. Thus, in terms of future investment in general, and the roll-out of 5G in particular, a better characterization is that this deal takes the U.S. from two to three national carriers investing to build out next-generation networks.

In the past, the capital expenditures made by AT&T and Verizon have dwarfed those of T-Mobile and Sprint. But a combined T-Mobile/Sprint would be in a far better position to make the kinds of large-scale investments necessary to develop a nationwide 5G network. As a result, it is likely that both the urban-rural digital divide and the rich-poor digital divide will decline following the merger. And this investment will drive competition with AT&T and Verizon, leading to innovation, improving service, and, over time, lowering the cost of access.

Is prepaid a separate market?

The state AGs complain that the merger would disproportionately affect consumers of prepaid plans, which they claim constitutes a separate product market:

There are differences between prepaid and postpaid service, the most notable being that individuals who cannot pass a credit check and/or who do not have a history of bill payment with a MNO may not be eligible for postpaid service. Accordingly, it is informative to look at prepaid mobile wireless telecommunications services as a separate segment of the market for mobile wireless telecommunications services.

Claims that prepaid services constitute a separate market are questionable, at best. While at one time there might have been a fairly distinct divide between the prepaid and postpaid markets, today the line between them is at least blurry, and may not be a meaningful divide at all.

To begin with, the arguments regarding any expected monopolization in the prepaid market appear to assume that the postpaid market imposes no competitive constraint on the prepaid market. 

But that can’t literally be true. At the very least, postpaid plans put a ceiling on prepaid prices for many prepaid users. To be sure, there are some prepaid consumers who don’t have the credit history required to participate in the postpaid market at all. But these are inframarginal consumers, and they will benefit from the extent of competition at the margins unless operators can effectively price discriminate in ways they have not in the past, and in ways that have not been shown to be possible or likely.

One source of this competition will come from Dish, which has been a vocal critic of the T-Mobile/Sprint merger. Under the deal with DOJ, T-Mobile and Sprint must spin off Sprint’s prepaid businesses to Dish. The divested products include Boost Mobile, Virgin Mobile, and Sprint prepaid. Moreover, the deal requires that Dish be allowed to use T-Mobile’s network during a seven-year transition period.

Will the merger harm low-income consumers?

While the states’ complaint alleges that low-income consumers will suffer, it pays little attention to the so-called “digital divide” separating urban and rural consumers. This seems curious given the attention that divide received in submissions to the federal agencies. For example, the Communication Workers of America opined:

the data in the Applicants’ Public Interest Statement demonstrates that even six years after a T-Mobile/Sprint merger, “most of New T-Mobile’s rural customers would be forced to settle for a service that has significantly lower performance than the urban and suburban parts of the network.” The “digital divide” is likely to worsen, not improve, post-merger.

This is merely an assertion, and a misleading one. To the extent the “digital divide” would grow following the merger, it would be because urban access will improve more rapidly than rural access, not because rural access will deteriorate.

Indeed, there is no real suggestion that the merger will impede rural access relative to a world in which T-Mobile and Sprint do not merge. 

Indeed, in the absence of a merger, Sprint would be less able to utilize its own spectrum in rural areas than would the merged T-Mobile/Sprint, because utilization of that spectrum would require substantial investment in new infrastructure and additional, different spectrum. And much of that infrastructure and spectrum is already owned by T-Mobile.

It is likely that the combined T-Mobile/Sprint will make that investment, given the cost savings that are expected to be realized through the merger. So, while it might be true that urban customers will benefit more from the merger, rural customers will also benefit. It is impossible to know, of course, by exactly how much each group will benefit. But, prima facie, the prospect of improvement in rural access seems a strong argument in favor of the merger from a public interest standpoint.

The merger is also likely to reduce another digital divide: that between wealthier and poorer consumers in more urban areas. The proportion of U.S. households with access to the Internet has for several years been rising faster among those with lower incomes than those with higher incomes, thereby narrowing this divide. Since 2011, access by households earning $25,000 or less has risen from 52% to 62%, while access among the U.S. population as a whole has risen only from 72% to 78%. In part, this has likely resulted from increased mobile access (a greater proportion of Americans now access the Internet from mobile devices than from laptops), which in turn is the result of widely available, low-cost smartphones and the declining cost of mobile data.

Concluding remarks

By enabling the creation of a true, third national mobile (phone and data) network, the merger will almost certainly drive competition and innovation that will lead to better services at lower prices, thereby expanding access for all and, if current trends hold, especially those on lower incomes. Beyond its effect on the “digital divide” per se, the merger is likely to have broadly positive effects on access more generally.

There’s always a reason to block a merger:

  • If the merged firm would be big, it will be called “a merger for monopoly”;
  • If the firms aren’t that big, the objection will be “coordinated effects”; and
  • If one of the firms is small, the merger will “eliminate a maverick.”

It’s a version of Ronald Coase’s complaint about antitrust, as related by William Landes:

Ronald said he had gotten tired of antitrust because when the prices went up the judges said it was monopoly, when the prices went down, they said it was predatory pricing, and when they stayed the same, they said it was tacit collusion.

Of all the reasons to block a merger, the maverick notion is the weakest, and it’s well past time to ditch it.

The Horizontal Merger Guidelines define a “maverick” as “a firm that plays a disruptive role in the market to the benefit of customers.” According to the Guidelines, this includes firms:

  1. With a new technology or business model that threatens to disrupt market conditions;
  2. With an incentive to take the lead in price cutting or other competitive conduct or to resist increases in industry prices;
  3. That resist otherwise prevailing industry norms to cooperate on price setting or other terms of competition; and/or
  4. With an ability and incentive to expand production rapidly using available capacity to “discipline prices.”

There appears to be no formal model of maverick behavior that does not rely on some a priori assumption that the firm is a maverick.

For example, John Kwoka’s 1989 model assumes the maverick firm has different beliefs about how competing firms would react if the maverick varies its output or price. Louis Kaplow and Carl Shapiro developed a simple model in which the firm with the smallest market share may play the role of a maverick. They note, however, that this raises a question: in a model in which every firm faces the same cost and demand conditions, why would there be any variation in market shares? The common solution, according to Kaplow and Shapiro, is cost asymmetries among firms. If that is the case, then “maverick” activity is merely a function of cost, rather than some uniquely maverick-like behavior.

The idea of the maverick firm requires that the firm play a critical role in the market. The maverick must be the firm that outflanks coordinated action or acts as a bulwark against unilateral action. By this loosey-goosey definition, a single firm can make the difference between the success or failure of anticompetitive behavior by its competitors. Thus, the ability and incentive to expand production rapidly is a necessary condition for a firm to be considered a maverick. For example, Kaplow and Shapiro explain:

Of particular note is the temptation of one relatively small firm to decline to participate in the collusive arrangement or secretly to cut prices to serve, say, 4% rather than 2% of the market. As long as price cuts by a small firm are less likely to be accurately observed or inferred by the other firms than are price cuts by larger firms, the presence of small firms that are capable of expanding significantly is especially disruptive to effective collusion.

A “maverick” firm’s ability to “discipline prices” depends crucially on its ability to expand output in the face of increased demand for its products. Similarly, the other non-maverick firms can be “disciplined” by the maverick only in the face of a credible threat of (1) a noticeable drop in market share that (2) leads to lower profits.

The government’s complaint in AT&T/T-Mobile’s 2011 proposed merger alleges:

Relying on its disruptive pricing plans, its improved high-speed HSPA+ network, and a variety of other initiatives, T-Mobile aimed to grow its nationwide share to 17 percent within the next several years, and to substantially increase its presence in the enterprise and government market. AT&T’s acquisition of T-Mobile would eliminate the important price, quality, product variety, and innovation competition that an independent T-Mobile brings to the marketplace.

At the time of the proposed merger, T-Mobile accounted for 11% of U.S. wireless subscribers. At the end of 2016, its market share had hit 17%. About half of the increase can be attributed to its 2012 merger with MetroPCS. Over the same period, Verizon’s market share increased from 33% to 35% and AT&T’s market share remained stable at 32%. It appears that T-Mobile’s so-called maverick behavior did more to disrupt the market shares of smaller competitors Sprint and Leap (which was acquired by AT&T). Thus, it is not clear, ex post, that T-Mobile posed any threat to AT&T’s or Verizon’s market shares.

Geoffrey Manne raised some questions about the government’s maverick theory that also highlight a fundamental problem with the willy-nilly way in which firms are given the maverick label:

. . . it’s just not enough that a firm may be offering products at a lower price—there is nothing “maverick-y” about a firm that offers a different, less valuable product at a lower price. I have seen no evidence to suggest that T-Mobile offered the kind of pricing constraint on AT&T that would be required to make it out to be a maverick.

While T-Mobile had a reputation for lower mobile prices, in 2011, the firm was lagging behind Verizon, Sprint, and AT&T in the rollout of 4G technology. In other words, T-Mobile was offering an inferior product at a lower price. That’s not a maverick, that’s product differentiation with hedonic pricing.

More recently, in his opposition to the proposed T-Mobile/Sprint merger, Gene Kimmelman from Public Knowledge asserts that both firms are mavericks and their combination would cause their maverick magic to disappear:

Sprint, also, can be seen as a maverick. It has offered “unlimited” plans and simplified its rate plans, for instance, driving the rest of the industry forward to more consumer-friendly options. As Sprint CEO Marcelo Claure stated, “Sprint and T-Mobile have similar DNA and have eliminated confusing rate plans, converging into one rate plan: Unlimited.” Whether both or just one of the companies can be seen as a “maverick” today, in either case the newly combined company would simply have the same structural incentives as the larger carriers both Sprint and T-Mobile today work so hard to differentiate themselves from.

Kimmelman provides no mechanism by which the magic would go missing, but instead offers a version of an adversity-builds-character argument:

Allowing T-Mobile to grow to approximately the same size as AT&T, rather than forcing it to fight for customers, will eliminate the combined company’s need to disrupt the market and create an incentive to maintain the existing market structure.

For 30 years, the notion of the maverick firm has been a concept in search of a model. If the concept cannot be modeled decades after being introduced, maybe the maverick can’t be modeled.

What’s left are ad hoc assertions mixed with speculative projections in hopes that some sympathetic judge can be swayed. However, some judges seem to be more skeptical than sympathetic, as in H&R Block/TaxACT:

The parties have spilled substantial ink debating TaxACT’s maverick status. The arguments over whether TaxACT is or is not a “maverick” — or whether perhaps it once was a maverick but has not been a maverick recently — have not been particularly helpful to the Court’s analysis. The government even put forward as supposed evidence a TaxACT promotional press release in which the company described itself as a “maverick.” This type of evidence amounts to little more than a game of semantic gotcha. Here, the record is clear that while TaxACT has been an aggressive and innovative competitor in the market, as defendants admit, TaxACT is not unique in this role. Other competitors, including HRB and Intuit, have also been aggressive and innovative in forcing companies in the DDIY market to respond to new product offerings to the benefit of consumers.

It’s time to send the maverick out of town and into the sunset.

 

The once-mighty Blockbuster video chain is now down to a single store, in Bend, Oregon. It appears to be the only video rental store in Bend, aside from those offering “adult” features. Does that make Blockbuster a monopoly?

It seems almost silly to ask if the last firm in a dying industry is a monopolist. But, it’s just as silly to ask if the first firm in an emerging industry is a monopolist. They’re silly questions because they focus on the monopoly itself, rather than the alternative — what if the firm, and therefore the industry, did not exist at all?

A recent post on CEPR’s Vox blog points out something very obvious, but often forgotten: “The deadweight loss from a monopolist’s not producing at all can be much greater than from charging too high a price.”

The figure below is from the post, by Michael Kremer, Christopher Snyder, and Albert Chen. With monopoly pricing (and no price discrimination), consumer surplus is given by CS, profit by Π, and deadweight loss by H.

The authors point out that if fixed costs (or entry costs) are so high that the firm does not enter the market, the deadweight loss is equal to CS + H.
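A quick linear-demand example (my own illustration of the point, not the authors’ model) makes the magnitudes concrete:

```latex
\text{Demand } P = a - bQ,\ \text{constant marginal cost } c,\ \text{competitive output } Q_c = \frac{a-c}{b}:
\\[4pt]
Q_m = \frac{a-c}{2b},\quad P_m = \frac{a+c}{2},\quad
CS = \frac{(a-c)^2}{8b},\quad \Pi = \frac{(a-c)^2}{4b},\quad H = \frac{(a-c)^2}{8b}
\\[4pt]
\text{Loss from monopoly pricing: } H. \qquad
\text{Loss from no entry } (F \gtrsim \Pi)\text{: } CS + H = 2H
```

If a fixed entry cost just above Π keeps the firm out, society loses CS + H, twice the deadweight loss from monopoly pricing in this example (the forgone Π is roughly offset by the entry cost that is never incurred).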

Too often, competition authorities fall for the Nirvana Fallacy, a tendency to compare messy, real-world economic circumstances today to idealized potential alternatives and to justify policies on the basis of the discrepancy between the real world and some alternative perfect (or near-perfect) world.

In 2005, Blockbuster dropped its bid to acquire competing Hollywood Entertainment Corporation, the then-second-largest video rental chain. Blockbuster said it expected the Federal Trade Commission would reject the deal on antitrust grounds. The merged companies would have made up more than 50 percent of the home video rental market.

Five years later Blockbuster, Hollywood, and third-place Movie Gallery had all filed for bankruptcy.

Blockbuster’s then-CEO, John Antioco, has been ridiculed for passing up an opportunity to buy Netflix for $50 million in 2000. But, Blockbuster knew its retail world was changing and had thought a consolidation might help it survive that change.

But, just as Antioco can be chided for undervaluing Netflix, so too can the FTC. The regulators were so focused on the Blockbuster-Hollywood market share that they undervalued the competitive pressure Netflix and other services were bringing. With hindsight, it seems obvious that Blockbuster’s post-merger market share would not have conveyed any significant power over price. What’s not known is whether the merger would have put off the bankruptcy of the three largest video rental retailers.

Also, what’s not known is the extent to which consumers are better or worse off with the exit of Blockbuster, Hollywood, and Movie Gallery.

Nevertheless, the video rental business highlights a key point in an earlier TOTM post: A great deal of competition comes from the flanks, rather than head-on. Head-on competition from rental kiosks, such as Redbox, nibbled at the sales and margins of Blockbuster, Hollywood, and Movie Gallery. But, the real killer of the bricks-and-mortar stores came from a wide range of streaming services.

The lesson for regulators is that competition is nearly always and everywhere present, even if it’s standing on the sidelines.

Will the merger between T-Mobile and Sprint make consumers better or worse off? A central question in the review of this merger—as it is in all merger reviews—is the likely effects that the transaction will have on consumers. In this post, we look at one study that opponents of the merger have been using to support their claim that the merger will harm consumers.

Along with my earlier posts on data problems and public policy (1, 2, 3, 4, 5), this provides an opportunity to explore why seemingly compelling studies can be used to muddy the discussion and fool observers into seeing something that isn’t there.

This merger—between the third and fourth largest mobile wireless providers in the United States—has been characterized as a “4-to-3” merger, on the grounds that it will reduce the number of large, ostensibly national carriers from four to three. This, in turn, has led to concerns that further concentration in the wireless telecommunications industry will harm consumers. Specifically, some opponents of the merger claim that “it’s going to be hard for someone to make a persuasive case that reducing four firms to three is actually going to improve competition for the benefit of American consumers.”

A number of previous mergers around the world can be, or have been, characterized as 4-to-3 mergers in the wireless telecommunications industry. Several econometric studies have attempted to evaluate the welfare effects of 4-to-3 mergers in other countries, as well as the effects of market concentration in the wireless industry more generally. These studies have been used by both proponents and opponents of the proposed merger of T-Mobile and Sprint to support their respective contentions that the merger will benefit or harm consumer welfare.

One particular study has risen to prominence among opponents of 4-to-3 mergers in telecom in general and the T-Mobile/Sprint merger in specific. This is worrying because the study has several fundamental flaws. 

This study, by Finnish consultancy Rewheel, has been cited by, among others, Phillip Berenbroick of Public Knowledge, who, in Senate testimony, asserted that “Rewheel found that consumers in markets with three facilities-based providers paid twice as much per gigabyte as consumers in four firm markets.”

The Rewheel report upon which Mr. Berenbroick relied is, however, marred by a number of significant flaws that undermine its usefulness.

The Rewheel report

Rewheel’s report purports to analyze the state of 4G pricing across 41 countries that are either members of the EU or the OECD or both. The report’s conclusions are based mainly on two measures:

  1. Estimates of the maximum number of gigabytes available under each plan for a specific hypothetical monthly price, ranging from €5 to €80 a month. In other words, for each plan, Rewheel asks, “How many 4G gigabytes would X euros buy?” Rewheel then ranks countries by the median amount of gigabytes available at each hypothetical price for all the plans surveyed in each country.
  2. Estimates of what Rewheel describes as “fully allocated gigabyte prices.” This is the monthly retail price (including VAT) divided by the number of gigabytes included in each plan. Rewheel then ranks countries by the median price per gigabyte across all the plans surveyed in each country.
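To make the mechanics concrete, here is a minimal sketch of how such median rankings could be computed from a table of plans (hypothetical data and column names; as far as I know, Rewheel’s underlying dataset and code are not public):

```python
import pandas as pd

# Hypothetical plan-level data: one row per surveyed plan
plans = pd.DataFrame({
    "country":     ["A", "A", "A", "B", "B", "B"],
    "monthly_eur": [10.0, 30.0, 60.0, 15.0, 35.0, 55.0],  # retail price incl. VAT
    "gigabytes":   [2.0, 20.0, 250.0, 3.0, 15.0, 100.0],  # 250 proxies "unlimited"
})

# Measure 1 (one plausible reading): among plans costing no more than a
# hypothetical budget, the median gigabytes available in each country
budget = 30.0
gb_at_budget = (plans[plans["monthly_eur"] <= budget]
                .groupby("country")["gigabytes"].median())

# Measure 2: "fully allocated" price per gigabyte for every plan,
# then the country median across all plans, regardless of plan size
plans["eur_per_gb"] = plans["monthly_eur"] / plans["gigabytes"]
median_eur_per_gb = plans.groupby("country")["eur_per_gb"].median()

print(gb_at_budget)
print(median_eur_per_gb)
```

Note that both medians give a niche low-usage plan the same weight as a flagship plan that millions of subscribers actually buy, a problem taken up next.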

Rewheel’s convoluted calculations

Rewheel’s use of the country median across all plans is problematic. In particular, it gives all plans equal weight, regardless of consumers’ use of each plan. For example, a plan targeted at a consumer with a “high” level of usage is included alongside a plan targeted at a consumer with a “low” level of usage. Even though a “high” user would not purchase a “low” plan (which would be relatively expensive for a “high” user), all plans are included, thereby skewing the median estimates upward.

But even if that approach made sense as a way of measuring consumers’ willingness to pay, in execution Rewheel’s analysis contains the following key defects:

  • The Rewheel report is essentially limited to quantity effects alone (i.e., how many gigabytes are available under each plan for a given hypothetical price) or price effects alone (i.e., the price per included gigabyte for each plan). These measures can mislead the analysis by missing, among other things, innovation and quality effects.
  • Rewheel’s analysis is not based on an impartial assessment of relevant price data. Rather, it is based on hypothetical measures. Such comparisons say nothing about the plans actually chosen by consumers or the actual prices paid by consumers in those countries, rendering Rewheel’s comparisons virtually meaningless. As Affeldt & Nitsche (2014) note in their assessment of the effects of concentration in mobile telecom markets:

Such approaches are taken by Rewheel (2013) and also the Austrian regulator rtr (when tracking prices over time, see rtr (2014)). Such studies face the following problems: They may pick tariffs that are relatively meaningless in the country. They will have to assume one or more consumption baskets (voice minutes, data volume etc.) in order to compare tariffs. This may drive results. Apart from these difficulties such comparisons require very careful tracking of tariffs and their changes. Even if one assumes studying a sample of tariffs is potentially meaningful, a comparison across countries (or over time) would still require taking into account key differences across countries (or over time) like differences in demand, costs, network quality etc.

  • The Rewheel report bases its comparison on dissimilar service levels by not taking into account, for instance, relevant features like comparable network capacity, service security, and, perhaps most important, overall quality of service.

Rewheel’s unsupported conclusions

Rewheel uses its analysis to come to some strong conclusions, such as the conclusion on the first page of its report declaring that the median gigabyte price in countries with three carriers is twice as high as in countries with four carriers.

The figure below is a revised version of the figure on the first page of Rewheel’s report. The yellow blocks (gray dots) show the range of prices in countries with three carriers; the blue blocks (pink dots) show the range of prices in countries with four carriers. The darker blocks show the overlap of the two. The figure makes clear that there is substantial overlap in pricing among three and four carrier countries. Thus, it is not obvious that three carrier countries have significantly higher prices (as measured by Rewheel) than four carrier countries.

[Figure: overlapping ranges of Rewheel’s median price measures for three-carrier and four-carrier countries]

A simple “eyeballing” of the data can lead to incorrect conclusions; statistical analysis can provide some more certainty (or, at least, some measure of uncertainty). Yet, Rewheel provides no statistical analysis of its calculations, such as measures of statistical significance. However, information on page 5 of the Rewheel report can be used to perform some rudimentary statistical analysis.

I took the information from the columns for hypothetical monthly prices of €30 a month and €50 a month and converted the data into a price per gigabyte to generate the dependent variable. Following Rewheel’s assumption, “unlimited” is converted to 250 gigabytes per month. Greece was dropped from the analysis because Rewheel indicates that no data is available at either hypothetical price level.

My rudimentary statistical analysis includes the following independent variables:

  • Number of carriers (or mobile network operators, MNOs) reported by Rewheel in each country, ranging from three to five. Israel is the only country with five MNOs.
  • A dummy variable for EU28 countries. Rewheel performs separate analysis for EU28 countries, suggesting they think this is an important distinction.
  • GDP per capita for each country, adjusted for purchasing power parity. Several articles in the literature suggest higher GDP countries would be expected to have higher wireless prices.
  • Population density, measured by persons per square kilometer. Several articles in the literature argue that countries with lower population density would have higher costs of providing wireless service which would, in turn, be reflected in higher prices.
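A sketch of the regression follows (the file and column names are hypothetical; the underlying numbers must be transcribed from page 5 of the Rewheel report and merged with the covariates above):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical CSV transcribed from Rewheel's page 5 plus country covariates:
# country, price_per_gb_30, price_per_gb_50, mnos, eu28, gdp_pc_ppp, pop_density
df = pd.read_csv("rewheel_page5.csv")

# "Unlimited" was converted to 250 GB and Greece dropped, as described above
for dep in ["price_per_gb_30", "price_per_gb_50"]:
    model = smf.ols(f"{dep} ~ mnos + eu28 + gdp_pc_ppp + pop_density", data=df).fit()
    print(model.summary())  # the coefficient and p-value on mnos are the question
```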

The tables below confirm what an eyeballing of the figure suggests: Rewheel’s data show that the number of MNOs in a country has no statistically significant relationship with price per gigabyte, at either the €30 a month level or the €50 a month level.

[Table: regression results for price per gigabyte at €30 and €50 a month]

While the signs on the MNO coefficient are negative (i.e., more carriers in a country is associated with lower prices), they are not statistically significantly different from zero at any of the traditional levels of statistical significance.

Also, the regressions suffer from relatively low measures of goodness-of-fit. The independent variables in the regression explain approximately five percent of the variation in the price per gigabyte. This is likely because of the cockamamie way Rewheel measures price, but is also due to the known problems with performing cross-sectional analysis of wireless pricing, as noted by Csorba & Pápai (2015):

Many regulatory policies are based on a comparison of prices between European countries, but these simple cross-sectional analyses can lead to misleading conclusions because of at least two reasons. First, the price difference between countries of n and (n + 1) active mobile operators can be due to other factors, and the analyst can never be sure of having solved the omitted variable bias problem. Second and more importantly, the effect of an additional operator estimated from a cross-sectional comparison cannot be equated with the effect of an actual entry that might have a long-lasting effect on a single market.

The Rewheel report cannot be relied upon in assessing consumer benefits or harm associated with the T-Mobile/Sprint merger, or any other merger

Rewheel apparently has a rich dataset of wireless pricing plans. Nevertheless, the analyses presented in its report are fundamentally flawed. Moreover, Rewheel’s conclusions regarding three vs. four carrier countries are not only baseless, but clearly unsupported by closer inspection of the information presented in its report. The Rewheel report cannot be relied upon to inform regulatory oversight of the T-Mobile/Sprint merger or any other. This study isn’t unique; it should serve as a caution to be wary of studies that merely eyeball information.

A recent working paper by Hashmat Khan and Matthew Strathearn attempts to empirically link anticompetitive collusion to the boom and bust cycles of the economy.

The level of collusion is higher during a boom relative to a recession as collusion occurs more frequently when demand is increasing (entering into a collusive arrangement is more profitable and deviating from an existing cartel is less profitable). The model predicts that the number of discovered cartels and hence antitrust filings should be procyclical because the level of collusion is procyclical.

The first sentence—a hypothesis that collusion is more likely during a “boom” than in recession—seems reasonable. At the same time, a case can be made that collusion would be more likely during recession. For example, a reduced risk of entry from competitors would reduce the cost of collusion.

The second sentence, however, seems a stretch, mainly because it doesn’t recognize the time delays among the collusive activity, the date the collusion is discovered by authorities, and the date the case is filed.

Perhaps more importantly, it doesn’t acknowledge that many collusive arrangements span months, if not years. That span of time could include times of “boom” and times of recession. Thus, it can be argued that the date of the filing has little (or nothing) to do with the span over which the collusive activity occurred.

I did a very lazy man’s test of my criticisms. I looked at six of the filings cited by Khan and Strathearn for the year 2011, a “boom” year with a high number of horizontal price-fixing cases filed.

[Table: six price-fixing cases filed in 2011, with the spans of collusive activity and filing dates]

My first suspicion was correct. In these six cases, an average of more than three years passed between the date of the last collusive activity and the date the case was filed. Thus, whether the economy is in boom or bust when the case is filed provides no useful information regarding the state of the economy when the collusion occurred.

Nevertheless, my lazy man’s small sample test provides some interesting—and I hope useful—information regarding Khan and Strathearn’s conclusions.

  1. From July 2001 through September 2009, 24 of the 99 months were in recession. In other words, during this period, there was a 24 percent chance the economy was in recession in any given month.
  2. Five of the six collusive arrangements began when the economy was in recovery. Only one began during a recession. This may seem to support their conclusion that collusive activity is more likely during a recovery. However, even if the arrangements began randomly, there would be a 55 percent chance that five or more began during a recovery (see the sketch after this list). So, you can’t read too much into the observation that most of the collusive agreements began during a “boom.”
  3. In two of the cases, the collusive activity occurred during a span of time that had no recession. The chances of this happening randomly are less than 1 in 20,000, supporting their conclusion regarding collusive activity and the business cycle.
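The 55 percent figure in point 2 is a simple binomial calculation, which can be verified in a few lines (the 1-in-20,000 figure in point 3 depends on the specific span of each conspiracy, so it isn’t reproduced here):

```python
from scipy.stats import binom

p_recovery = 75 / 99  # 75 of the 99 months from July 2001 to September 2009
n_cases = 6           # six collusive arrangements examined

# Probability that five or more of six random start dates land in recovery months
print(binom.sf(4, n_cases, p_recovery))  # P(X >= 5), roughly 0.55
```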

Khan and Strathearn fall short in linking collusive activity to the business cycle but do a good job of linking antitrust enforcement activities to the business cycle. The information they use from the DOJ website is sufficient to determine when the collusive activity occurred—but it’ll take more vigorous “scrubbing” (their word) of the site to get the relevant data.

The bigger question, however, is the relevance of this research. Naturally, one could argue this line of research indicates that competition authorities should be extra vigilant during a booming economy. Yet, Adam Smith famously noted, “People of the same trade seldom meet together, even for merriment and diversion, but the conversation ends in a conspiracy against the public, or in some contrivance to raise prices.” This suggests that collusive activity—or the temptation to engage in such activity—is always and everywhere present, regardless of the business cycle.

 

“Our City has become a cesspool,” according to Portland police union president Daryl Turner. He was describing efforts to address the city’s large and growing homelessness crisis.

Portland Mayor Ted Wheeler defended the city’s approach, noting that every major city, “all the way up and down the west coast, in the Midwest, on the East Coast, and frankly, in virtually every large city in the world” has a problem with homelessness. Nevertheless, according to the Seattle Times, Portland is ranked among the 10 worst major cities in the U.S. for homelessness. Wheeler acknowledged, “the problem is getting worse.”

This week, the city’s Budget Office released a “performance report” for some of the city’s bureaus. One of the more eye-popping statistics is the number of homeless camps the city has cleaned up over the years.

[Figure: homeless camp cleanups by fiscal year, from the Portland Budget Office performance report]

Keep in mind, Multnomah County reports there are 4,177 homeless residents in the entire county. But the city reports clearing more than 3,100 camps in one year. Clearly, the number of homeless in the city is much larger than reflected in the annual homeless counts.

The report makes a special note that, “As the number of clean‐ups has increased and program operations have stabilized, the total cost per clean‐up has decreased substantially as well.” Sounds like economies of scale.

Turns out, the Budget Office’s simple graphic gives enough information to estimate the economies of scale in homeless camp cleanups. Yes, it’s kinda crappy data. (Could it really be the case that in two years in a row, the city cleaned up exactly the same number of camps at exactly the same cost?) Anyway, data is data.

First, we plot the total annual costs for cleanups. Of course it’s an awesome fit (R-squared of 0.97), but that’s what happens when you have three observations and two independent variables.

[Figure: total annual cleanup costs with fitted total cost curve]

Now that we have an estimate of the total cost function, we can plot the marginal cost curve (blue) and average cost curve (orange).

[Figure: marginal cost (blue) and average cost (orange) curves for camp cleanups]

That looks like a textbook example of economies of scale: decreasing average cost. It also looks like a textbook example of natural monopoly: marginal cost lower than average cost over the relevant range of output.

What strikes me as curious is how low the implied marginal cost of a homeless camp cleanup is, as shown in the table below.

FY        Camps    Total Cost    Average Cost    Marginal Cost
2014-15     139      $171,109          $1,231           $3,178
2015-16     139      $171,109          $1,231           $3,178
2016-17     571      $578,994          $1,014             $774
2017-18   3,122    $1,576,610            $505             $142
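The report doesn’t state the functional form behind these curves, but the reported figures can be roughly reconstructed by fitting a quadratic total cost curve, consistent with the “two independent variables” description above. A sketch under that assumption (the derived marginal costs are sensitive to the functional form chosen):

```python
import numpy as np

# Camps cleaned and total cost by fiscal year, from the Budget Office report
camps = np.array([139, 139, 571, 3122])
total_cost = np.array([171_109, 171_109, 578_994, 1_576_610])

# Fit TC = b0 + b1*Q + b2*Q^2, one guess at the two-regressor specification
coeffs = np.polyfit(camps, total_cost, deg=2)
tc_hat = np.polyval(coeffs, camps)
r2 = 1 - ((total_cost - tc_hat) ** 2).sum() / ((total_cost - total_cost.mean()) ** 2).sum()

avg_cost = total_cost / camps                      # AC = TC / Q
marg_cost = np.polyval(np.polyder(coeffs), camps)  # MC = dTC/dQ from the fit

print(round(r2, 2), avg_cost.round(0), marg_cost.round(0))
```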

It is somewhat shocking that the marginal cost of an additional camp cleanup is only $142. The hourly wages for the cleanup crew alone would be way more than $142. Something seems fishy with the numbers the city is reporting.

My guess: The city is shifting some of the cleanup costs to other agencies, such as Multnomah County and/or the Oregon Department of Transportation. I also suspect the city is not fully accounting for the costs of the cleanups. And, I am almost certain the city is significantly under reporting how many homeless are living on Portland streets.

A recent NBER working paper by Gutiérrez & Philippon has attracted attention from observers who see oligopoly everywhere and activists who want governments to more actively “manage” competition. The analysis in the paper is fundamentally flawed and should not be relied upon by policymakers, regulators, or anyone else.

As noted in my earlier post, Gutiérrez & Philippon attempt to craft a causal linkage between differences in U.S. and EU antitrust enforcement and product market regulation to differences in market concentration and corporate profits. Their paper’s abstract leads with a bold assertion:

Until the 1990’s, US markets were more competitive than European markets. Today, European markets have lower concentration, lower excess profits, and lower regulatory barriers to entry.

This post focuses on Gutiérrez & Philippon’s claim that EU markets have lower “excess profits.” This is perhaps the most outrageous claim in the paper. If anyone bothers to read the full paper, they’ll see that the claim that EU firms have lower excess profits is simply not supported by the paper itself. Aside from a passing mention of someone else’s work in a footnote, the only mention of “excess profits” is in the paper’s headline-grabbing abstract.

What’s even more outrageous is the authors don’t define (or even describe) what they mean by excess profits.

These two factors alone should be enough to toss aside the paper’s assertion about “excess” profits. But, there’s more.

Gutiérrez & Philippon define profit to be gross operating surplus and mixed income (known as “GOPS” in the OECD’s STAN Industrial Analysis dataset). GOPS is not the same thing as gross margin or gross profit as used in business and finance (for example, GOPS subtracts wages, but gross margin does not). The EU defines GOPS as (emphasis added):

Operating surplus is the surplus (or deficit) on production activities before account has been taken of the interest, rents or charges paid or received for the use of assets. Mixed income is the remuneration for the work carried out by the owner (or by members of his family) of an unincorporated enterprise. This is referred to as ‘mixed income’ since it cannot be distinguished from the entrepreneurial profit of the owner.
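In standard national-accounts terms, the decomposition is (my summary of the usual income-approach identity, not language from the paper):

```latex
\text{GOPS} \;=\; \text{gross value added} \;-\; \text{compensation of employees}
\;-\; (\text{taxes} - \text{subsidies on production})
```

where gross value added is gross output minus intermediate inputs; the paper’s Figure 1 then divides GOPS by gross output.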

Here’s Figure 1 from Gutiérrez & Philippon plotting GOPS as a share of gross output.

[Figure 1 from Gutiérrez & Philippon: gross operating surplus as a share of gross output, U.S. vs. EU]

Look at the huge jump in gross operating surplus for U.S. firms!

Now, look at the scale of the y-axis. Not such a big jump after all.

Over 23 years, from 1992 to 2015, the gross operating surplus rate for U.S. firms grew by 2.5 percentage points. In the EU, the rate increased by about one percentage point.

Using the STAN dataset, I plotted the gross operating surplus rate for each EU country (blue dots) and the U.S. (red dots), along with a time trend. Three takeaways:

  1. There’s not much of a difference between the U.S. and the EU average—they both hover around a gross operating surplus rate of about 19.5 percent;
  2. There’s a huge variation in gross operating surplus rate across EU countries; and
  3. Yes, gross operating surplus is trending slightly upward in the U.S. and slightly downward for the EU average, but there doesn’t appear to be a huge difference in the slope of the trendlines. In fact, the slopes of the trendlines are not statistically significantly different from zero and are not statistically significantly different from each other.

[Figure: gross operating surplus rate, EU countries (blue) and U.S. (red), with time trends]
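To make that test concrete, here is a minimal sketch in Python. It assumes a hypothetical CSV extract of the STAN data; the file name and the columns “country”, “year”, “gops” (gross operating surplus), and “prod” (gross output) are illustrative stand-ins, not the actual STAN field names.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical extract of the OECD STAN data: one row per
    # country-year, with gross operating surplus and gross output.
    stan = pd.read_csv("stan_extract.csv")
    stan["surplus_rate"] = stan["gops"] / stan["prod"]  # GOPS as a share of gross output
    stan["is_us"] = (stan["country"] == "USA").astype(int)

    # Separate intercepts and time trends for the U.S. and the EU;
    # the interaction term is the difference between the two slopes.
    model = smf.ols("surplus_rate ~ year * is_us", data=stan).fit()
    print(model.summary())

    # "year" tests the EU trend against zero; "year:is_us" tests
    # whether the U.S. trend differs from the EU trend. The linear
    # combination below tests the U.S. trend against zero.
    print(model.t_test("year + year:is_us = 0"))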

The use of a gross measure of profit raises some serious questions. For example, the Stigler Center’s James Traina finds that, after accounting for selling, general, and administrative expenses (SG&A), mark-ups for publicly traded firms in the U.S. have not meaningfully increased since 1980.

The figure below plots the net operating surplus rate (NOPS, which equals GOPS minus consumption of fixed capital). Note that NOPS is not the same thing as a business’s net income.
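For reference, the NOPS rate would be computed from the same hypothetical STAN extract used in the sketch above, with “cfc” standing in for consumption of fixed capital (again an assumed column name):

    import pandas as pd

    # Same hypothetical extract as above; "cfc" is consumption of
    # fixed capital.
    stan = pd.read_csv("stan_extract.csv")
    stan["nops_rate"] = (stan["gops"] - stan["cfc"]) / stan["prod"]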

Same three takeaways:

  1. There’s not much of a difference between the U.S. and the EU average: both hover around a net operating surplus rate of a little more than seven percent;
  2. There’s huge variation in the net operating surplus rate across EU countries; and
  3. The slopes of the trendlines for net operating surplus in the U.S. and the EU are not statistically significantly different from zero, nor from each other.

[Figure: net operating surplus rate, EU countries (blue) and U.S. (red), with time trends]

It’s very possible that U.S. firms are achieving higher and growing “excess” profits relative to EU firms. It’s also very possible they’re not. Despite the bold assertions of Gutiérrez & Philippon, the evidence presented in their paper tells us nothing one way or the other.


A recent NBER working paper by Gutiérrez & Philippon attempts to link differences in U.S. and EU antitrust enforcement and product market regulation to differences in market concentration and corporate profits. The paper’s abstract begins with a bold assertion:

Until the 1990’s, US markets were more competitive than European markets. Today, European markets have lower concentration, lower excess profits, and lower regulatory barriers to entry.

The authors are not clear about what they mean by “lower”; however, it seems they mean lower today relative to the 1990s.

This blog post focuses on the first claim: “Today, European markets have lower concentration …”

At the risk of being pedantic, Gutiérrez & Philippon’s measures of market concentration for which both U.S. and EU data are reported cover the period from 1999 to 2012. Thus, “the 1990s” refers to 1999, and “today” refers to 2012, or six years ago.

The table below is based on Figure 26 in Gutiérrez & Philippon. In 2012, there appears to be no significant difference in market concentration between the U.S. and the EU, using either the 8-firm concentration ratio (CR8) or the Herfindahl-Hirschman Index (HHI). Based on this information, one cannot conclude broadly that EU sectors have lower concentration than U.S. sectors.

    2012    U.S.          EU
    CR8     26% (+5%)     27% (-7%)
    HHI     640 (+150)    600 (-190)

Gutiérrez & Philippon focus on the change in market concentration to draw their conclusions. However, the levels of the concentration measures are strikingly low. In all but one of the industries (telecommunications) shown in Figure 27 of their paper, the 8-firm concentration ratios for the U.S. and the EU are below 40 percent. Similarly, the HHI measures reported in the paper are at levels most observers would presume to be competitive (for reference, the U.S. merger guidelines treat markets with an HHI below 1,500 as unconcentrated). In addition, in 7 of the 12 sectors surveyed, the U.S. 8-firm concentration ratio is lower than the EU’s.
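For readers unfamiliar with these measures, here is a minimal sketch of how each is computed from a list of firm market shares; the shares below are made up purely for illustration.

    # CR8: combined market share of the eight largest firms.
    # HHI: sum of squared market shares (shares expressed in percent).
    def cr8(shares):
        return sum(sorted(shares, reverse=True)[:8])

    def hhi(shares):
        return sum(s ** 2 for s in shares)

    # Twenty firms with 5 percent each -- made-up numbers roughly at
    # the levels in the paper: a CR8 of 40 and an HHI of 500, well
    # below the 1,500 threshold treated as unconcentrated.
    shares = [5.0] * 20
    print(cr8(shares))  # 40.0
    print(hhi(shares))  # 500.0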

The numbers in parentheses in the table above show the change in each measure of concentration since 1999. The changes suggest that U.S. markets have become more concentrated and EU markets have become less concentrated. But how significant are the changes in concentration?

A simple regression of the relationship between CR8 and a time trend finds that in the EU, CR8 decreased by an average of 0.5 percentage point a year, while the U.S. CR8 increased by less than 0.4 percentage point a year from 1999 to 2012. Tucked in an appendix to Gutiérrez & Philippon, Figure 30 shows that the U.S. CR8 decreased by about 2.5 percentage points from 2012 to 2014.

A closer examination of Gutiérrez & Philippon’s 8-firm concentration ratio for the EU shows that much of the decline in EU market concentration occurred over the 1999-2002 period. After that, the change in CR8 for the EU is not statistically significantly different from zero.

A regression of the relationship between HHI and a time trend finds that in the EU, HHI decreased by an average of 12.5 points a year, while the U.S. HHI increased by less than 16.4 points a year from 1999 to 2012.

As with CR8, a closer examination of Gutiérrez & Philippon’s HHI for the EU shows that much of the decline in EU market concentration occurred over the 1999-2002 period. After that, the change in HHI for the EU is not statistically significantly different from zero.
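Both the full-period trend and the subsample check are straightforward to reproduce. Here is a minimal sketch in Python, assuming the EU CR8 series has been digitized into a hypothetical CSV (“eu_cr8.csv” with columns “year” and “cr8”); the same code applies to the HHI series.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical file: the EU CR8 series digitized from the paper.
    eu = pd.read_csv("eu_cr8.csv")

    # Full-period time trend (1999-2012): the coefficient on "year"
    # is the average annual change in CR8.
    full = smf.ols("cr8 ~ year", data=eu).fit()
    print(full.params["year"])

    # Post-2002 subsample: a large p-value on "year" means the later
    # decline is statistically indistinguishable from zero.
    post = eu[eu["year"] >= 2002]
    print(smf.ols("cr8 ~ year", data=post).fit().summary())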

Readers should be cautious in relying on Gutiérrez & Philippon’s data to conclude that the U.S. is “drifting” toward greater market concentration while the EU is “drifting” toward lower market concentration. Indeed, the limited data presented in the paper point toward a convergence in market concentration between the two regions.



Over the past few weeks, Truth on the Market has had several posts related to harm reduction policies, with a focus on tobacco, e-cigarettes, and other vapor products.

Harm reduction policies are used to manage a wide range of behaviors, including recreational drug use and sexual activity. Needle-exchange programs reduce the spread of infectious diseases among users of heroin and other injected drugs. Opioid replacement therapy replaces illegal opioids, such as heroin, with a longer-acting but less euphoric opioid. Safer-sex education and condom distribution in schools are designed to reduce teenage pregnancy and the spread of sexually transmitted infections. None of these harm reduction policies stops the risky behavior, nor do they eliminate the potential for harm. Nevertheless, the policies are intended to reduce the expected harm.

Carrie Wade, Director of Harm Reduction Policy and Senior Fellow at the R Street Institute, draws a parallel between opiate harm reduction strategies and potential policies related to tobacco harm reduction. She notes that with successful one-year quit rates hovering around 10 percent, harm reduction strategies offer ways to transition more smokers off the most dangerous nicotine delivery device: the combustible cigarette.

Most of the harm from smoking is caused by the inhalation of toxicants released through the combustion of tobacco. Use of non-combustible nicotine delivery systems, such as e-cigarettes and smokeless tobacco, is generally considered to be significantly less harmful than smoking cigarettes. The UK government agency Public Health England has concluded that e-cigarettes are around 95 percent less harmful than combustible cigarettes.

In the New England Journal of Medicine, Fairchild et al. (2018) identify a continuum of potential policies regarding the regulation of vapor products, such as e-cigarettes, shown in the figure below. They note that the most restrictive policies would effectively eliminate e-cigarettes as a viable alternative to smoking, while the most permissive may promote e-cigarette usage and potentially encourage young people—who would not do so otherwise—to take up e-cigarettes. In between these extremes are policies that may discourage young people from initiating use of e-cigarettes, while encouraging current smokers to switch to less harmful vapor products.

[Figure: continuum of e-cigarette policy options, from Fairchild et al. (2018)]

International Center for Law & Economics chief economist Eric Fruits notes in his blog post that more than 20 countries have introduced taxes on e-cigarettes and other vapor products. In the United States, several states and local jurisdictions have enacted e-cigarette taxes. His post is based on a recently released ICLE white paper entitled Vapor products, harm reduction, and taxation: Principles, evidence and a research agenda.

Under a harm reduction principle, Fruits argues that e-cigarettes and other vapor products should face no or low taxes relative to conventional cigarettes, in order to guide consumers toward a safer alternative to smoking.

In contrast to harm reduction principles, the precautionary principle and principles of tax equity point toward taxing vapor products at rates similar to those on conventional cigarettes.

On the one hand, some policymakers claim that the objective of taxing nicotine products is to reduce nicotine consumption. On the other hand, Dan Mitchell, co-founder of the Center for Freedom and Prosperity, points out that some politicians are concerned that they will lose tax revenue if a substantial number of smokers switch to options such as vaping.

Often missed in the policy discussion is the effect of fiscal policies on innovation and on the development and commercialization of harm-reducing products. Also often missed are the consequences for current consumers of nicotine products, including smokers seeking to quit using harmful conventional cigarettes.

Policy decisions regarding taxation of vapor products should take into account both long-term fiscal effects and broader economic and welfare effects. These effects might (or might not) suggest very different tax policies from those that have been enacted or are under consideration. These considerations, however, are frustrated by unreliable and wildly divergent empirical estimates of how consumer demand responds to changing prices and/or rising taxes.

Along the lines of uncertain—if not surprising—impacts, Fritz Laux, professor of economics at Northeastern State University, provides an explanation of why smoke-free air laws have not been found to adversely affect revenues or employment in the restaurant and hospitality industries.

He argues that social norms regarding smoking in restaurants have changed to the point that many smokers themselves support bans on smoking in restaurants. In this way, he hypothesizes, smoke-free air laws do not impose a significant constraint on consumer behavior or business activity. We might likewise infer, by extension, that policies which do not prohibit vaping in public spaces (leaving such decisions to the discretion of business owners and managers) could encourage switching by people who otherwise would have to exit buildings in order to vape or smoke—without adversely affecting businesses.

Principles of harm reduction recognize that every policy proposal has uncertain outcomes as well as potential spillovers and unforeseen consequences. With such high risks and costs associated with cigarettes and other combustible products, taxes and regulations must be developed in an environment of uncertainty, with an eye toward a net reduction in harm rather than an unattainable goal of zero harm or an overt pursuit of tax revenue.