
Will the merger between T-Mobile and Sprint make consumers better or worse off? A central question in the review of this merger—as it is in all merger reviews—is the likely effects that the transaction will have on consumers. In this post, we look at one study that opponents of the merger have been using to support their claim that the merger will harm consumers.

Along with my earlier posts on data problems and public policy (1, 2, 3, 4, 5), this provides an opportunity to explore why seemingly compelling studies can be used to muddy the discussion and fool observers into seeing something that isn’t there.

This merger—between the third and fourth largest mobile wireless providers in the United States—has been characterized as a “4-to-3” merger, on the grounds that it will reduce the number of large, ostensibly national carriers from four to three. This, in turn, has led to concerns that further concentration in the wireless telecommunications industry will harm consumers. Specifically, some opponents of the merger claim that “it’s going to be hard for someone to make a persuasive case that reducing four firms to three is actually going to improve competition for the benefit of American consumers.”

A number of previous mergers in the wireless telecommunications industry around the world can be, or have been, characterized as 4-to-3 mergers. Several econometric studies have attempted to evaluate the welfare effects of 4-to-3 mergers in other countries, as well as the effects of market concentration in the wireless industry more generally. These studies have been used by both proponents and opponents of the proposed merger of T-Mobile and Sprint to support their respective contentions that the merger will benefit or harm consumer welfare.

One particular study has risen to prominence among opponents of 4-to-3 mergers in telecom generally and the T-Mobile/Sprint merger in particular. This is worrying because the study has several fundamental flaws.

This study, by Finnish consultancy Rewheel, has been cited by, among others, Phillip Berenbroick of Public Knowledge, who, in Senate testimony, asserted that “Rewheel found that consumers in markets with three facilities-based providers paid twice as much per gigabyte as consumers in four firm markets.”

The Rewheel report upon which Mr. Berenbroick relied is, however, marred by a number of significant flaws that undermine its usefulness.

The Rewheel report

Rewheel’s report purports to analyze the state of 4G pricing across 41 countries that are members of the EU, the OECD, or both. The report’s conclusions are based mainly on two measures:

  1. Estimates of the maximum number of gigabytes available under each plan for a specific hypothetical monthly price, ranging from €5 to €80 a month. In other words, for each plan, Rewheel asks, “How many 4G gigabytes would X euros buy?” Rewheel then ranks countries by the median number of gigabytes available at each hypothetical price across all the plans surveyed in each country.
  2. Estimates of what Rewheel describes as “fully allocated gigabyte prices.” This is the monthly retail price (including VAT) divided by the number of gigabytes included in each plan. Rewheel then ranks countries by the median price per gigabyte across all the plans surveyed in each country.
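To make these two measures concrete, here is a toy sketch of how they might be computed for one country’s surveyed plans. The plan data below are hypothetical, and the handling of the first measure is one plausible reading of Rewheel’s description rather than its actual methodology:

```python
from statistics import median

# Hypothetical surveyed plans for one country: (monthly retail price in EUR incl. VAT, GB included)
plans = [(9.99, 2), (19.99, 10), (29.99, 30), (39.99, 60), (59.99, 250)]

def median_gb_at_budget(plans, budget_eur):
    """Measure 1: median gigabytes across the plans priced within the hypothetical monthly budget."""
    affordable = [gb for price, gb in plans if price <= budget_eur]
    return median(affordable) if affordable else 0

def median_price_per_gb(plans):
    """Measure 2: median 'fully allocated' price per gigabyte across all surveyed plans."""
    return median(price / gb for price, gb in plans)

print(median_gb_at_budget(plans, 30))        # how many 4G gigabytes would EUR 30 buy?
print(round(median_price_per_gb(plans), 2))  # EUR per included gigabyte
```

Note that both medians weight every surveyed plan equally, regardless of how many consumers actually buy each plan, which is the weighting problem discussed next.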

Rewheel’s convoluted calculations

Rewheel’s use of the country median across all plans is problematic. In particular, it gives all plans equal weight, regardless of consumers’ use of each plan. For example, a plan targeted at a consumer with a “high” level of usage is included alongside a plan targeted at a consumer with a “low” level of usage. Even though a “high” user would not purchase a “low” plan (which would be relatively expensive for a “high” user), all plans are included, thereby skewing the median estimates upward.

But even if that approach made sense as a way of measuring consumers’ willingness to pay, in execution Rewheel’s analysis contains the following key defects:

  • The Rewheel report is essentially limited to quantity effects alone (i.e., how many gigabytes are available under each plan for a given hypothetical price) or price effects alone (i.e., the price per included gigabyte for each plan). These measures can mislead the analysis by missing, among other things, innovation and quality effects.
  • Rewheel’s analysis is not based on an impartial assessment of relevant price data. Rather, it is based on hypothetical measures. Such comparisons say nothing about the plans actually chosen by consumers or the actual prices paid by consumers in those countries, rendering Rewheel’s comparisons virtually meaningless. As Affeldt & Nitsche (2014) note in their assessment of the effects of concentration in mobile telecom markets:

Such approaches are taken by Rewheel (2013) and also the Austrian regulator rtr (when tracking prices over time, see rtr (2014)). Such studies face the following problems: They may pick tariffs that are relatively meaningless in the country. They will have to assume one or more consumption baskets (voice minutes, data volume etc.) in order to compare tariffs. This may drive results. Apart from these difficulties such comparisons require very careful tracking of tariffs and their changes. Even if one assumes studying a sample of tariffs is potentially meaningful, a comparison across countries (or over time) would still require taking into account key differences across countries (or over time) like differences in demand, costs, network quality etc.

  • The Rewheel report bases its comparison on dissimilar service levels by not taking into account, for instance, relevant features like comparable network capacity, service security, and, perhaps most important, overall quality of service.

Rewheel’s unsupported conclusions

Rewheel uses its analysis to come to some strong conclusions, such as the declaration on the first page of its report that the median gigabyte price in countries with three carriers is twice as high as in countries with four carriers.

The figure below is a revised version of the figure on the first page of Rewheel’s report. The yellow blocks (gray dots) show the range of prices in countries with three carriers; the blue blocks (pink dots) show the range of prices in countries with four carriers. The darker blocks show the overlap of the two. The figure makes clear that there is substantial overlap in pricing between three-carrier and four-carrier countries. Thus, it is not obvious that three-carrier countries have significantly higher prices (as measured by Rewheel) than four-carrier countries.

Figure: Price ranges in three-carrier vs. four-carrier countries (revised from the first page of Rewheel’s report)

A simple “eyeballing” of the data can lead to incorrect conclusions; statistical analysis can provide more certainty (or, at least, some measure of the uncertainty). Yet Rewheel provides no statistical analysis of its calculations, such as measures of statistical significance. However, information on page 5 of the Rewheel report can be used to perform some rudimentary statistical analysis.

I took the information from the columns for hypothetical monthly prices of €30 and €50, and converted the data into a price per gigabyte to generate the dependent variable. Following Rewheel’s assumption, “unlimited” is converted to 250 gigabytes per month. Greece was dropped from the analysis because Rewheel indicates that no data is available at either hypothetical price level.

My rudimentary statistical analysis includes the following independent variables:

  • Number of carriers (or mobile network operators, MNOs) reported by Rewheel in each country, ranging from three to five. Israel is the only country with five MNOs.
  • A dummy variable for EU28 countries. Rewheel performs a separate analysis for EU28 countries, suggesting it considers this an important distinction.
  • GDP per capita for each country, adjusted for purchasing power parity. Several articles in the literature suggest that higher-GDP countries would be expected to have higher wireless prices.
  • Population density, measured in persons per square kilometer. Several articles in the literature argue that countries with lower population density would have higher costs of providing wireless service, which would, in turn, be reflected in higher prices.
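For readers who want to replicate the exercise, here is a minimal sketch of the regression just described. The rows below are synthetic placeholders (the real inputs come from page 5 of the Rewheel report, converted as described above); only the specification mirrors the analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 40  # roughly the number of countries in the sample after dropping Greece

# Synthetic placeholder data; replace with the series built from the Rewheel report.
df = pd.DataFrame({
    "price_per_gb": rng.lognormal(mean=0.5, sigma=0.6, size=n),   # EUR per GB at the EUR 30/month level
    "mnos": rng.choice([3, 4, 5], size=n, p=[0.45, 0.50, 0.05]),  # number of carriers
    "eu28": rng.integers(0, 2, size=n),                           # EU28 dummy
    "gdp_pc_ppp": rng.normal(40_000, 10_000, size=n),             # GDP per capita, PPP-adjusted
    "pop_density": rng.normal(120, 60, size=n).clip(min=3),       # persons per square kilometer
})

model = smf.ols("price_per_gb ~ mnos + eu28 + gdp_pc_ppp + pop_density", data=df).fit()
print(model.summary())  # the sign and p-value on the mnos coefficient are the quantities of interest
```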

The tables below confirm what an eyeballing of the figure suggests: Rewheel’s data show that the number of MNOs in a country has no statistically significant relationship with price per gigabyte, at either the €30-a-month level or the €50-a-month level.

Table: Regression results for price per gigabyte at the €30-a-month and €50-a-month levels

While the signs on the MNO coefficient are negative (i.e., more carriers in a country is associated with lower prices), they are not statistically significantly different from zero at any of the traditional levels of statistical significance.

Also, the regressions suffer from relatively low measures of goodness-of-fit. The independent variables in the regression explain approximately five percent of the variation in the price per gigabyte. This is likely because of the cockamamie way Rewheel measures price, but is also due to the known problems with performing cross-sectional analysis of wireless pricing, as noted by Csorba & Pápai (2015):

Many regulatory policies are based on a comparison of prices between European countries, but these simple cross-sectional analyses can lead to misleading conclusions because of at least two reasons. First, the price difference between countries of n and (n + 1) active mobile operators can be due to other factors, and the analyst can never be sure of having solved the omitted variable bias problem. Second and more importantly, the effect of an additional operator estimated from a cross-sectional comparison cannot be equated with the effect of an actual entry that might have a long-lasting effect on a single market.

The Rewheel report cannot be relied upon in assessing consumer benefits or harm associated with the T-Mobile/Sprint merger, or any other merger

Rewheel apparently has a rich dataset of wireless pricing plans. Nevertheless, the analyses presented in its report are fundamentally flawed. Moreover, Rewheel’s conclusions regarding three- vs. four-carrier countries are not only baseless, but clearly unsupported by closer inspection of the information presented in its report. The Rewheel report cannot be relied upon to inform regulatory oversight of the T-Mobile/Sprint merger or any other. This study is not unique, and it should serve as a caution to be wary of studies that merely eyeball information.

Near the end of her new proposal to break up Facebook, Google, Amazon, and Apple, Senator Warren asks, “So what would the Internet look like after all these reforms?”

It’s a good question, because, as she herself notes, “Twenty-five years ago, Facebook, Google, and Amazon didn’t exist. Now they are among the most valuable and well-known companies in the world.”

To Warren, our most dynamic and innovative companies constitute a problem that needs solving.

She described the details of that solution in a blog post:

First, [my administration would restore competition to the tech sector] by passing legislation that requires large tech platforms to be designated as “Platform Utilities” and broken apart from any participant on that platform.

* * *

For smaller companies…, their platform utilities would be required to meet the same standard of fair, reasonable, and nondiscriminatory dealing with users, but would not be required to structurally separate….

* * *
Second, my administration would appoint regulators committed to reversing illegal and anti-competitive tech mergers….
I will appoint regulators who are committed to… unwind[ing] anti-competitive mergers, including:

– Amazon: Whole Foods; Zappos;
– Facebook: WhatsApp; Instagram;
– Google: Waze; Nest; DoubleClick

Elizabeth Warren’s brave new world

Let’s consider for a moment what this brave new world will look like — not the nirvana imagined by regulators and legislators who believe that decimating a company’s business model will deter only the “bad” aspects of the model while preserving the “good,” as if by magic, but the inevitable reality of antitrust populism.  

Utilities? Are you kidding? For an overview of what the future of tech would look like under Warren’s “Platform Utility” policy, take a look at your water, electricity, and sewage service. Have you noticed any improvement (or reduction in cost) in those services over the past 10 or 15 years? How about the roads? Amtrak? Platform businesses operating under a similar regulatory regime would also similarly stagnate. Enforcing platform “neutrality” necessarily requires meddling in the most minute of business decisions, inevitably creating unintended and costly consequences along the way.

Network companies, like all businesses, differentiate themselves by offering unique bundles of services to customers. By definition, this means vertically integrating with some product markets and not others. Why are digital assistants like Siri bundled into mobile operating systems? Why aren’t the vast majority of third-party apps also bundled into the OS? If you want utilities regulators instead of Google or Apple engineers and designers making these decisions on the margin, then Warren’s “Platform Utility” policy is the way to go.

Grocery Stores. To take one specific case cited by Warren, how much innovation was there in the grocery store industry before Amazon bought Whole Foods? Since the acquisition, large grocery retailers, like Walmart and Kroger, have increased their investment in online services to better compete with the e-commerce champion. Many industry analysts expect grocery stores to use computer vision technology and artificial intelligence to improve the efficiency of check-out in the near future.

Smartphones. Imagine how forced neutrality would play out in the context of iPhones. If Apple can’t sell its own apps, it also can’t pre-install its own apps. A brand new iPhone with no apps — and even more importantly, no App Store — would be, well, just a phone, out of the box. How would users even access a site or app store from which to download independent apps? Would Apple be allowed to pre-install someone else’s apps? That’s discriminatory, too. Maybe it will be forced to offer a menu of all available apps in all categories (like the famously useless browser ballot screen demanded by the European Commission in its Microsoft antitrust case)? It’s hard to see how that benefits consumers — or even app developers.

Source: Free Software Magazine

Internet Search. Or take search. Calls for “search neutrality” have been bandied about for years. But most proponents of search neutrality fail to recognize that all Google’s search results entail bias in favor of its own offerings. As Geoff Manne and Josh Wright noted in 2011 at the height of the search neutrality debate:

[S]earch engines offer up results in the form not only of typical text results, but also maps, travel information, product pages, books, social media and more. To the extent that alleged bias turns on a search engine favoring its own maps, for example, over another firm’s, the allegation fails to appreciate that text results and maps are variants of the same thing, and efforts to restrain a search engine from offering its own maps is no different than preventing it from offering its own search results.

Never mind that forced non-discrimination likely means Google offering only the antiquated “ten blue links” search results page it started with in 1998 instead of the far more useful “rich” results it offers today; logically, it would also mean Google somehow offering the set of links produced by any and all other search engines’ algorithms, in lieu of its own. If you think Google will continue to invest in and maintain the wealth of services it offers today on the strength of the profits derived from those search results, well, Elizabeth Warren is probably already your favorite politician.

Source: Web Design Museum  

And regulatory oversight of algorithmic content won’t just result in an impoverished digital experience; it will inevitably lead to an authoritarian one, as well:

Any agency granted a mandate to undertake such algorithmic oversight, and override or reconfigure the product of online services, thereby controls the content consumers may access…. This sort of control is deeply problematic… [because it saddles users] with a pervasive set of speech controls promulgated by the government. The history of such state censorship is one which has demonstrated strong harms to both social welfare and rule of law, and should not be emulated.

Digital Assistants. Consider also the veritable cage match among the tech giants to offer “digital assistants” and “smart home” devices with ever-more features at ever-lower prices. Today the allegedly non-existent competition among these companies is played out most visibly in this multi-featured market, comprising advanced devices tightly integrated with artificial intelligence, voice recognition, advanced algorithms, and a host of services. Under Warren’s nondiscrimination principle this market disappears. Each device can offer only a connectivity platform (if such a service is even permitted to be bundled with a physical device…) — and nothing more.

But such a world entails not only the end of an entire, promising avenue of consumer-benefiting innovation, but also the end of a promising avenue of consumer-benefiting competition. It beggars belief that anyone thinks consumers would benefit by forcing technology companies into their own silos, ensuring that the most powerful sources of competition for each other are confined to their own fiefdoms by order of law.

Breaking business models

Beyond the product-feature dimension, Sen. Warren’s proposal would be devastating for innovative business models. Why is Amazon Prime Video bundled with free shipping? Because the marginal cost of distribution for video is close to zero and bundling it with Amazon Prime increases the value proposition for customers. Why is almost every Google service free to users? Because Google’s business model is supported by ads, not monthly subscription fees. Each of the tech giants has carefully constructed an ecosystem in which every component reinforces the others. Sen. Warren’s plan would not only break up the companies, it would prohibit their business models — the ones that both created and continue to sustain these products. Such an outcome would manifestly harm consumers.

Both of Warren’s policy “solutions” are misguided and will lead to higher prices and less innovation. Her cause for alarm is built on a multitude of mistaken assumptions, but let’s address just a few (Warren’s claims are quoted at the start of each bullet):

  • “Nearly half of all e-commerce goes through Amazon.” Yes, but it has only 5% of total retail in the United States. As my colleague Kristian Stout says, “the Internet is not a market; it’s a distribution channel.”
  • “Amazon has used its immense market power to force smaller competitors like Diapers.com to sell at a discounted rate.” The real story, as the founders of Diapers.com freely admitted, is that they sold diapers as what they hoped would be a loss leader, intending to build out sales of other products once they had a base of loyal customers:

And so we started with selling the loss leader product to basically build a relationship with mom. And once they had the passion for the brand and they were shopping with us on a weekly or a monthly basis that they’d start to fall in love with that brand. We were losing money on every box of diapers that we sold. We weren’t able to buy direct from the manufacturers.

Like all entrepreneurs, Diapers.com’s founders took a calculated risk that didn’t pay off as hoped. Amazon subsequently acquired the company (after it had declined a similar buyout offer from Walmart). (Antitrust laws protect consumers, not inefficient competitors). And no, this was not a case of predatory pricing. After many years of trying to make the business profitable as a subsidiary, Amazon shut it down in 2017.

  • “In the 1990s, Microsoft — the tech giant of its time — was trying to parlay its dominance in computer operating systems into dominance in the new area of web browsing. The federal government sued Microsoft for violating anti-monopoly laws and eventually reached a settlement. The government’s antitrust case against Microsoft helped clear a path for Internet companies like Google and Facebook to emerge.” The government’s settlement with Microsoft is not the reason Google and Facebook were able to emerge. Neither company entered the browser market at launch. Instead, they leapfrogged the browser entirely and created new platforms for the web (only later did Google create Chrome).

    Furthermore, if the Microsoft case is responsible for “clearing a path” for Google, is it not also responsible for clearing a path for Google’s alleged depredations? If the answer is that antitrust enforcement should be consistently more aggressive in order to rein in Google, too, when it gets out of line, then how can we be sure that that same more-aggressive enforcement standard wouldn’t have curtailed the extent of the Microsoft ecosystem in which it was profitable for Google to become Google? Warren implicitly assumes that only the enforcement decision in Microsoft was relevant to Google’s rise. But Microsoft doesn’t exist in a vacuum. If Microsoft cleared a path for Google, so did every decision not to intervene, which, all combined, created the legal, business, and economic environment in which Google operates.

Warren characterizes Big Tech as a weight on the American economy. In fact, nothing could be further from the truth. These superstar companies are the drivers of productivity growth, all ranking at or near the top in spending on research and development. And while data may not be the new oil, extracting value from it may require similar levels of capital expenditure. Last year, Big Tech spent as much on capex as the world’s largest oil companies, or more:

Source: WSJ

Warren also faults Big Tech for a decline in startups, saying,

The number of tech startups has slumped, there are fewer high-growth young firms typical of the tech industry, and first financing rounds for tech startups have declined 22% since 2012.

But this trend predates the existence of the companies she criticizes, as a chart from Quartz shows.

The exact causes of the decline in business dynamism are still uncertain, but recent research points to a much more mundane explanation: demographics. Labor force growth has been declining, which has led to an increase in average firm age and to fewer workers starting their own businesses.

Furthermore, it’s not at all clear whether this is actually a decline in business dynamism, or merely a change in business model. We would expect to see the same pattern, for example, if would-be startup founders were designing their software for acquisition and further development within larger, better-funded enterprises.

Will Rinehart recently looked at the literature to determine whether there is indeed a “kill zone” for startups around Big Tech incumbents. One paper finds that “an increase in fixed costs explains most of the decline in the aggregate entrepreneurship rate.” Another shows an inverse correlation across 50 countries between GDP and entrepreneurship rates. Robert Lucas predicted these trends back in 1978, pointing out that productivity increases would lead to wage increases, pushing marginal entrepreneurs out of startups and into big companies.

It’s notable that many in the venture capital community would rather not have Sen. Warren’s “help.”

Arguably, it is also simply getting harder to innovate. As economists Nick Bloom, Chad Jones, John Van Reenen and Michael Webb argue,

just to sustain constant growth in GDP per person, the U.S. must double the amount of research effort searching for new ideas every 13 years to offset the increased difficulty of finding new ideas.

If this assessment is correct, it may well be that coming up with productive and profitable innovations is simply becoming more expensive, and thus, at the margin, each dollar of venture capital can fund less of it. Ironically, this also implies that larger firms, which can better afford the additional resources required to sustain exponential growth, are a crucial part of the solution, not the problem.

Warren believes that Big Tech is the cause of our social ills. But Americans have more trust in Amazon, Facebook, and Google than in the political institutions that would break them up. It would be wise for her to reflect on why that might be the case. By punishing our most valuable companies for past successes, Warren would chill competition and decrease returns to innovation.

Finally, in what can only be described as tragic irony, the most prominent political figure who shares Warren’s feelings on Big Tech is President Trump. Confirming the horseshoe theory of politics, far-left populism and far-right populism seem less distinguishable by the day. As our colleague Gus Hurwitz put it, with this proposal Warren is explicitly endorsing the unitary executive theory and implicitly endorsing Trump’s authority to direct his DOJ to “investigate specific cases and reach specific outcomes.” Which cases will he want to have investigated and what outcomes will he be seeking? More good questions that Senator Warren should be asking. The notion that competition, consumer welfare, and growth are likely to increase in such an environment is farcical.

Longtime TOTM blogger Paul Rubin has a new book, now available for preorder on Amazon.

The book’s description reads:

In spite of its numerous obvious failures, many presidential candidates and voters are in favor of a socialist system for the United States. Socialism is consistent with our primitive evolved preferences, but not with a modern complex economy. One reason for the desire for socialism is the misinterpretation of capitalism.   

The standard definition of free market capitalism is that it’s a system based on unbridled competition. But this oversimplification is incredibly misleading—capitalism exists because human beings have organically developed an elaborate system based on trust and collaboration that allows consumers, producers, distributors, financiers, and the rest of the players in the capitalist system to thrive.

Paul Rubin, the world’s leading expert on cooperative capitalism, explains simply and powerfully how we should think about markets, economics, and business—making this book an indispensable tool for understanding and communicating the vast benefits the free market bestows upon societies and individuals. 

On March 14, the Federal Circuit will hear oral arguments in the case of BTG International v. Amneal Pharmaceuticals that could dramatically influence the future of duplicative patent litigation in the pharmaceutical industry.  The court will determine whether the America Invents Act (AIA) bars patent challengers that succeed in invalidating patents in inter partes review (IPR) proceedings from repeating their winning arguments in district court.  Courts and litigants had previously assumed that the AIA’s estoppel provision only prevented unsuccessful challengers from reusing failed arguments.  However, in an amicus brief filed in the case last month, the U.S. Patent and Trademark Office (USPTO) argued that, although it seems counterintuitive, under the AIA, even parties that succeed in getting patents invalidated in IPR cannot reuse their arguments.

If the Federal Circuit agrees with the USPTO, patent challengers could be strongly deterred from bringing IPR proceedings because it would mean they couldn’t reuse any arguments in district court.  This deterrent effect would be especially strong for generic drug makers, who must prevail in district court in order to get approval for their Abbreviated New Drug Application from the FDA. 

Critics of the USPTO’s position assert that it will frustrate the AIA’s purpose of facilitating generic competition.  However, if the Federal Circuit adopts the position, it would also reduce the amount of duplicative litigation that plagues the pharmaceutical industry and threatens new drug innovation.  According to a 2017 analysis of over 6,500 IPR challenges filed between 2012 and 2017, approximately 80% of IPR challenges were filed during an ongoing district court case challenging the patent.   This duplicative litigation can increase costs for both challengers and patent holders; the median cost for an IPR proceeding that results in a final decision is $500,000 and the median cost for just filing an IPR petition is $100,000.  Moreover, because of duplicative litigation, pharmaceutical patent holders face persistent uncertainty about the validity of their patents. Uncertain patent rights will lead to less innovation because drug companies will not spend the billions of dollars it typically costs to bring a new drug to market when they cannot be certain if the patents for that drug can withstand IPR proceedings that are clearly stacked against them.   And if IPR causes drug innovation to decline, a significant body of research predicts that patients’ health outcomes will suffer as a result.

In addition, deterring IPR challenges would help to reestablish balance between drug patent owners and patent challengers.  As I’ve previously discussed here and here, the pro-challenger bias in IPR proceedings has led to significant deviation in patent invalidation rates under the two pathways; compared to district court challenges, patents are twice as likely to be found invalid in IPR challenges. The challenger is more likely to prevail in IPR proceedings because the Patent Trial and Appeal Board (PTAB) applies a lower standard of proof for invalidity in IPR proceedings than do federal courts. Furthermore, if the challenger prevails in the IPR proceedings, the PTAB’s decision to invalidate a patent can often “undo” a prior district court decision in favor of the patent holder.  Further, although both district court judgments and PTAB decisions are appealable to the Federal Circuit, the court applies a more deferential standard of review to PTAB decisions, increasing the likelihood that they will be upheld compared to the district court decision. 

However, the USPTO acknowledges that its position is counterintuitive because it means that a court could not consider invalidity arguments that the PTAB found persuasive.  It is unclear whether the Federal Circuit will refuse to adopt this counterintuitive position or whether Congress will amend the AIA to limit estoppel to failed invalidity claims.  As a result, a better and more permanent way to eliminate duplicative litigation would be for Congress to enact the Hatch-Waxman Integrity Act of 2019 (HWIA).  The HWIA was introduced by Senator Thom Tillis in the Senate and Congressman Bill Flores in the House, and proposed in the last Congress by Senator Orrin Hatch.  The HWIA eliminates the ability of drug patent challengers to file duplicative claims in both federal court and IPR proceedings.  Instead, they must choose between district court litigation (which saves considerable costs by allowing generics to rely on the brand company’s safety and efficacy studies for FDA approval) and IPR proceedings (which are faster and provide certain pro-challenger provisions).

Thus, the HWIA would reduce duplicative litigation that increases costs and uncertainty for drug patent owners.   This will ensure that patent owners achieve clarity on the validity of their patents, which will spur new drug innovation and ensure that consumers continue to have access to life-improving drugs.

In my fifteen years as a law professor, I’ve become convinced that there’s a hole in the law school curriculum.  When it comes to regulation, we focus intently on the process of regulating and the interpretation of rules (see, e.g., typical administrative law and “leg/reg” courses), but we rarely teach students what, as a matter of substance, distinguishes a good regulation from a bad one.  That’s unfortunate, because lawyers often take the lead in crafting regulatory approaches. 

In the fall of 2017, I published a book seeking to fill this hole.  That book, How to Regulate: A Guide for Policymakers, is the inspiration for a symposium that will occur this Friday (Feb. 8) at the University of Missouri Law School.

The symposium, entitled Protecting the Public While Fostering Innovation and Entrepreneurship: First Principles for Optimal Regulation, will bring together policymakers and regulatory scholars who will go back to basics. Participants will consider two primary questions:

(1) How, as a substantive matter, should regulation be structured in particular areas? (Specifically, what regulatory approaches would be most likely to forbid the bad while chilling as little of the good as possible and while keeping administrative costs in check? In other words, what rules would minimize the sum of error and decision costs?), and

(2) What procedures would be most likely to generate such optimal rules?


The symposium webpage includes the schedule for the day (along with a button to Livestream the event), but here’s a quick overview.

I’ll set the stage by discussing the challenge policymakers face in trying to accomplish three goals simultaneously: ban bad instances of behavior, refrain from chilling good ones, and keep rules simple enough to be administrable.

We’ll then hear from a panel of experts about the principles that would best balance those competing concerns in their areas of expertise. Specifically:

  • Jerry Ellig (George Washington University; former chief economist of the FCC) will discuss telecommunications policy;
  • TOTM’s own Gus Hurwitz (Nebraska Law) will consider regulation of Internet platforms; and
  • Erika Lietzan (Mizzou Law) will examine the regulation of therapeutic drugs and medical devices.

Hopefully, we can identify some common threads among the substantive principles that should guide effective regulation in these disparate areas.

Before we turn to consider regulatory procedures, we will hear from our keynote speaker, Commissioner Hester Peirce of the SEC. As The Economist recently reported, Commissioner Peirce has been making waves with her speeches, many of which have gone back to basics and asked why the government is intervening and whether it’s doing so in an optimal fashion.

Following Commissioner Peirce’s address, we will hear from the following panelists about how regulatory procedures should be structured in order to generate substantively optimal rules:

  • Bridget Dooling (George Washington University; former official in the White House Office of Information and Regulatory Affairs);
  • Ken Davis (former Deputy Attorney General of Virginia and member of the Federalist Society’s Regulatory Transparency Project);
  • James Broughel (Senior Fellow at the Mercatus Center; expert on state-level regulatory review procedures); and
  • Justin Smith (former counsel to Missouri governor; led the effort to streamline the Missouri regulatory code).

As you can see, this Friday is going to be a great day at Mizzou Law. If you’re close enough to join us in person, please come. Otherwise, please join us via Livestream.

In the opening seconds of what was surely one of the worst oral arguments in a high-profile case that I have ever heard, Pantelis Michalopoulos, arguing for petitioners against the FCC’s 2018 Restoring Internet Freedom Order (RIFO), expertly captured both why the side he was representing should lose and the overall absurdity of the entire net neutrality debate: “This order is a stab in the heart of the Communications Act. It would literally write ‘telecommunications’ out of the law. It would end the communications agency’s oversight over the main communications service of our time.”

The main communications service of our time is the Internet. The Communications and Telecommunications Acts were written before the advent of the modern Internet, for an era when the telephone was the main communications service of our time. The reality is that technological evolution has written “telecommunications” out of these Acts – the “telecommunications services” they were written to regulate are no longer the important communications services of the day.

The basic question of the net neutrality debate is whether we expect Congress to weigh in on how regulators should respond when an industry undergoes fundamental change, or whether we should instead allow those regulators to redefine the scope of their own authority. In the RIFO case, petitioners (and, more generally, net neutrality proponents) argue that agencies should get to define their own authority. Those on the other side of the issue (including me) argue that it is up to Congress to provide agencies with guidance in response to changing circumstances – and worry that allowing independent and executive branch agencies broad authority to act without Congressional direction is a recipe for unfettered, unchecked, and fundamentally abusive concentrations of power in the hands of the executive branch.

These arguments were central to the DC Circuit’s evaluation of the prior FCC net neutrality order – the Open Internet Order. But rather than consider the core issue of the case, the four hours of oral arguments this past Friday were instead a relitigation of long-ago addressed ephemeral distinctions, padded out with irrelevance and esoterica, and argued with a passion available only to those who believe in faerie tales and monsters under their bed. Perhaps some reveled in hearing counsel for both sides clumsily fumble through strained explanations of the difference between standalone telecommunications services and information services that are by definition integrated with them, or awkward discussions about how ISPs may implement hypothetical prioritization technologies that have not even been developed. These well-worn arguments successfully demonstrated, once again, how many angels can dance upon the head of a single pin – only never before have so many angels been so irrelevant.

This time around, petitioners challenging the order were able to scare up some intervenors to make novel arguments on their behalf. Most notably, they were able to scare up a group of public safety officials to argue that the FCC had failed to consider arguments that the RIFO would jeopardize public safety services that rely on communications networks. I keep using the word “scare” because these arguments are based upon incoherent fears peddled by net neutrality advocates in order to find unsophisticated parties to sign on to their policy adventures. The public safety fears are about as legitimate as concerns that the Easter Bunny might one day win the Preakness – and merited as much response from the FCC as a petition from the Racehorse Association of America demanding the FCC regulate rabbits.

In the end, I have no idea how the DC Circuit is going to come down in this case. Public Safety concerns – like declarations of national emergencies – are often given undue and unwise weight. And there is a legitimately puzzling, if fundamentally academic, argument about a provision of the Communications Act (47 USC 257(c)), one that Congress repealed after the Order was adopted and that was a noteworthy part of the notice the FCC gave when the Order was proposed, that could lead the Court to remand the Order back to the Commission.

In the end, however, this case is unlikely to address the fundamental question of whether the FCC has any business regulating Internet access services. If the FCC loses, we’ll be back here in another year or two; if the FCC wins, we’ll be back here the next time a Democrat is in the White House. And the real tragedy is that every minute the FCC spends on the interminable net neutrality non-debate is a minute not spent on issues like closing the rural digital divide or promoting competitive entry into markets by next generation services.

So much wasted time. So many billable hours. So many angels dancing on the head of a pin. If only they were the better angels of our nature.


Postscript: If I sound angry about the endless fights over net neutrality, it’s because I am. I live in one of the highest-cost, lowest-connectivity states in the country. A state where much of the territory is covered by small rural carriers for whom the cost of just following these debates can mean delaying the replacement of an old switch, upgrading a circuit to fiber, or wiring a street. A state in which if prioritization were to be deployed it would be so that emergency services would be able to work over older infrastructure or so that someone in a rural community could remotely attend classes at the University or consult with a primary care physician (because forget high speed Internet – we have counties without doctors in them). A state in which if paid prioritization were to be developed it would be to help raise capital to build out service to communities that have never had high-speed Internet access.

So yes: the fact that we might be in for another year of rule making followed by more litigation because some firefighters signed up for the wrong wireless service plan and then were duped into believing a technological, economic, and political absurdity about net neutrality ensuring they get free Internet access does make me angry. Worse, unlike the hypothetical harms net neutrality advocates are worried about, the endless discussion of net neutrality causes real, actual, concrete harm to the people net neutrality advocates like to pat themselves on the back as advocating for. We should all be angry about this, and demanding that Congress put this debate out of our misery.

The US Senate Subcommittee on Antitrust, Competition Policy, and Consumer Rights recently held hearings to see what, if anything, the U.S. might learn from the approaches of other countries regarding antitrust and consumer protection. US lawmakers would do well, however, to be wary of examples from other jurisdictions that are rooted in different legal and cultural traditions. Shortly before the hearing, for example, the Australian Competition and Consumer Commission (ACCC) announced that it was exploring broad new regulations, predicated on theoretical harms, that would threaten both consumer welfare and individuals’ rights to free expression in ways completely at odds with American norms.

The ACCC seeks vast discretion to shape the way that online platforms operate — a regulatory venture that threatens to undermine the value which companies provide to consumers. Even more troubling are its plans to regulate free expression on the Internet, which if implemented in the US, would contravene Americans’ First Amendment guarantees to free speech.

The ACCC’s errors are fundamental, starting with the contradictory assertion that:

Australian law does not prohibit a business from possessing significant market power or using its efficiencies or skills to “out compete” its rivals. But when their dominant position is at risk of creating competitive or consumer harm, governments should stay ahead of the game and act to protect consumers and businesses through regulation.

The ACCC thus recognizes that businesses may work to beat out their rivals and thereby gain market share. However, this is immediately followed by the caveat that the state may prevent such activity when such market gains are merely “at risk” of coming at the expense of consumers or business rivals. In other words, the ACCC does not need to show that harm has been done, merely that it might take place — even if the products and services being provided otherwise benefit the public.

The ACCC report then uses this fundamental error as the basis for recommending content regulation of digital platforms like Facebook and Google (who have apparently been identified by Australia’s clairvoyant PreCrime Antitrust unit as being guilty of future violations). It argues that the lack of transparency and oversight in the algorithms these companies employ could result in a range of possible social and economic damages, despite the fact that consumers continue to rely on these products. These potential issues include prioritization of the content and products of the host company, under-serving of ads within their products, and creation of “filter bubbles” that conceal content from particular users thereby limiting their full range of choice.

The focus of these concerns is the kind and quality of information that users are receiving as a result of the “media market” that results from the “ranking and display of news and journalistic content.” As a remedy for its hypothesized concerns, the ACCC has proposed a new regulatory authority tasked with overseeing the operation of the platforms’ algorithms. The ACCC claims this would ensure that search and newsfeed results are balanced and of high quality. This policy would undermine consumer welfare in pursuit of remedying speculative harms.

Rather than the search results or news feeds being determined by the interaction between the algorithm and the user, the results would instead be altered to comply with criteria established by the ACCC. Yet this would substantially undermine the value of these services. The competitive differentiation between, say, Google and Bing lies in their unique, proprietary search algorithms. The ACCC’s intervention would necessarily remove some of this differentiation between online providers, notionally to improve the “quality” of results. But such second-guessing by regulators would quickly undermine the actual quality, and utility, of these services to users.

A second, but more troubling prospect is the threat of censorship that emerges from this kind of regime. Any agency granted a mandate to undertake such algorithmic oversight, and override or reconfigure the product of online services, thereby controls the content consumers may access. Such regulatory power thus affects not only what users can read, but what media outlets might be able to say in order to successfully offer curated content. This sort of control is deeply problematic since users are no longer merely faced with a potential “filter bubble” based on their own preferences interacting with a single provider, but with a pervasive set of speech controls promulgated by the government. The history of such state censorship is one which has demonstrated strong harms to both social welfare and rule of law, and should not be emulated.

Undoubtedly antitrust and consumer protection laws should be continually reviewed and revised. However, if we wish to uphold the principles upon which the US was founded and continue to protect consumer welfare, the US should avoid following the path Australia proposes to take.

A recent working paper by Hashmat Khan and Matthew Strathearn attempts to empirically link anticompetitive collusion to the boom and bust cycles of the economy.

The level of collusion is higher during a boom relative to a recession as collusion occurs more frequently when demand is increasing (entering into a collusive arrangement is more profitable and deviating from an existing cartel is less profitable). The model predicts that the number of discovered cartels and hence antitrust filings should be procyclical because the level of collusion is procyclical.

The first sentence—a hypothesis that collusion is more likely during a “boom” than in recession—seems reasonable. At the same time, a case can be made that collusion would be more likely during recession. For example, a reduced risk of entry from competitors would reduce the cost of collusion.

The second sentence, however, seems a stretch, mainly because it doesn’t recognize the time delay between the collusive activity, the date the collusion is discovered by authorities, and the date the case is filed.

Perhaps more importantly, it doesn’t acknowledge that many collusive arrangements span months, if not years. That span of time could include times of “boom” and times of recession. Thus, it can be argued that the date of the filing has little (or nothing) to do with the span over which the collusive activity occurred.

I did a very lazy man’s test of my criticisms. I looked at six of the filings cited by Khan and Strathearn for the year 2011, a “boom” year with a high number of horizontal price-fixing cases filed.

Table: The six horizontal price-fixing cases filed in 2011 examined here

My first suspicion was correct. In these six cases, an average of more than three years passed between the date of the last collusive activity and the date the case was filed. Thus, whether the economy is in a boom or a bust when the case is filed provides no useful information regarding the state of the economy when the collusion occurred.

Nevertheless, my lazy man’s small sample test provides some interesting—and I hope useful—information regarding Khan and Strathearn’s conclusions.

  1. From July 2001 through September 2009, 24 of the 99 months were in recession. In other words, during this period, there was a 24 percent chance the economy was in recession in any given month.
  2. Five of the six collusive arrangements began when the economy was in recovery. Only one began during a recession. This may seem to support their conclusion that collusive activity is more likely during a recovery. However, even if the arrangements began randomly, there would be a 55 percent chance that five or more began during a recovery (a quick check of this figure follows the list). So, you can’t read too much into the observation that most of the collusive agreements began during a “boom.”
  3. In two of the cases, the collusive activity occurred during a span of time that had no recession. The chances of this happening randomly are less than 1 in 20,000, supporting their conclusion regarding collusive activity and the business cycle.
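The 55 percent figure in point 2 is a simple binomial calculation. Here is a quick check, assuming (per point 1) that each of the six arrangements independently had a 75-in-99 chance of beginning in a non-recession month:

```python
from math import comb

p_recovery = 75 / 99   # chance a randomly chosen month in the July 2001-September 2009 window was not in recession
n = 6                  # number of collusive arrangements examined

# P(five or more of the six began during a recovery) if the start dates were random
prob = sum(comb(n, k) * p_recovery**k * (1 - p_recovery)**(n - k) for k in range(5, n + 1))
print(round(prob, 2))  # roughly 0.55
```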

Khan and Strathearn fall short in linking collusive activity to the business cycle but do a good job of linking antitrust enforcement activities to the business cycle. The information they use from the DOJ website is sufficient to determine when the collusive activity occurred—but it’ll take more vigorous “scrubbing” (their word) of the site to get the relevant data.

The bigger question, however, is the relevance of this research. Naturally, one could argue this line of research indicates that competition authorities should be extra vigilant during a booming economy. Yet, Adam Smith famously noted, “People of the same trade seldom meet together, even for merriment and diversion, but the conversation ends in a conspiracy against the public, or in some contrivance to raise prices.” This suggests that collusive activity—or the temptation to engage in such activity—is always and everywhere present, regardless of the business cycle.

 

Last week, Senator Orrin Hatch, Senator Thom Tillis, and Representative Bill Flores introduced the Hatch-Waxman Integrity Act of 2018 (HWIA) in both the Senate and the House of Representatives.  If enacted, the HWIA would help to ensure that the unbalanced inter partes review (IPR) process does not stifle innovation in the drug industry and jeopardize patients’ access to life-improving drugs.

Created under the 2011 America Invents Act, IPR is a new administrative pathway for challenging patents. It was, in large part, created to fix the problem of patent trolls in the IT industry; the trolls allegedly used questionable or “low quality” patents to extort profits from innovating companies.  IPR created an expedited pathway to challenge patents of dubious quality, thus making it easier for IT companies to invalidate low quality patents.

However, IPR is available for patents in any industry, not just the IT industry.  In the market for drugs, IPR offers an alternative to the litigation pathway that Congress created over three decades ago in the Hatch-Waxman Act. Although IPR seemingly fixed a problem that threatened innovation in the IT industry, it created a new problem that directly threatened innovation in the drug industry. I’ve previously published an article explaining why IPR jeopardizes drug innovation and consumers’ access to life-improving drugs. With Hatch-Waxman, Congress sought to achieve a delicate balance between stimulating innovation from brand drug companies, who hold patents, and facilitating market entry from generic drug companies, who challenge the patents.  However, IPR disrupts this balance as critical differences between IPR proceedings and Hatch-Waxman litigation clearly tilt the balance in the patent challengers’ favor. In fact, IPR has produced noticeably anti-patent results; patents are twice as likely to be found invalid in IPR challenges as they are in Hatch-Waxman litigation.

The Patent Trial and Appeal Board (PTAB) applies a lower standard of proof for invalidity in IPR proceedings than do federal courts in Hatch-Waxman proceedings. In federal court, patents are presumed valid and challengers must prove each patent claim invalid by “clear and convincing evidence.” In IPR proceedings, no such presumption of validity applies and challengers must only prove patent claims invalid by the “preponderance of the evidence.”

Moreover, whereas patent challengers in district court must establish sufficient Article III standing, IPR proceedings do not have a standing requirement.  This has given rise to “reverse patent trolling,” in which entities that are not litigation targets, or even participants in the same industry, threaten to file an IPR petition challenging the validity of a patent unless the patent holder agrees to specific pre-filing settlement demands.  The lack of a standing requirement has also led to the  exploitation of the IPR process by entities that would never be granted standing in traditional patent litigation—hedge funds betting against a company by filing an IPR challenge in hopes of crashing the stock and profiting from the bet.

Finally, patent owners are often forced into duplicative litigation in both IPR proceedings and federal court litigation, leading to persistent uncertainty about the validity of their patents.  Many patent challengers that are unsuccessful in invalidating a patent in district court may pursue subsequent IPR proceedings challenging the same patent, essentially giving patent challengers “two bites at the apple.”  And if the challenger prevails in the IPR proceedings (which is easier to do given the lower standard of proof), the PTAB’s decision to invalidate a patent can often “undo” a prior district court decision.  Further, although both district court judgments and PTAB decisions are appealable to the Federal Circuit, the court applies a more deferential standard of review to PTAB decisions, increasing the likelihood that they will be upheld compared to the district court decision.

The pro-challenger bias in IPR creates significant uncertainty for patent rights in the drug industry.  As an example, just last week patent claims for drugs generating $6.5 billion for drug company Sanofi were invalidated in an IPR proceeding.  Uncertain patent rights will lead to less innovation because drug companies will not spend the billions of dollars it typically costs to bring a new drug to market when they cannot be certain if the patents for that drug can withstand IPR proceedings that are clearly stacked against them.   And, if IPR causes drug innovation to decline, a significant body of research predicts that patients’ health outcomes will suffer as a result.

The HWIA, which applies only to the drug industry, is designed to return the balance established by Hatch-Waxman between branded drug innovators and generic drug challengers. It eliminates challengers’ ability to file duplicative claims both in federal court and through the IPR process. Instead, they must choose between Hatch-Waxman litigation (which saves considerable costs by allowing generics to rely on the brand company’s safety and efficacy studies for FDA approval) and IPR (which is faster and provides certain pro-challenger provisions). In addition to eliminating generic challengers’ “second bite at the apple,” the HWIA would also eliminate the ability of hedge funds and similar entities to file IPR claims while shorting the stock.

Thus, if enacted, the HWIA would create incentives that reestablish Hatch-Waxman litigation as the standard pathway for generic challenges to brand patents.  Yet, it would preserve IPR proceedings as an option when speed of resolution is a primary concern.  Ultimately, it will restore balance to the drug industry to safeguard competition, innovation, and patients’ access to life-improving drugs.

“Our City has become a cesspool,” according to Portland police union president Daryl Turner. He was describing efforts to address the city’s large and growing homelessness crisis.

Portland Mayor Ted Wheeler defended the city’s approach, noting that every major city, “all the way up and down the west coast, in the Midwest, on the East Coast, and frankly, in virtually every large city in the world” has a problem with homelessness. Nevertheless, according to the Seattle Times, Portland is ranked among the 10 worst major cities in the U.S. for homelessness. Wheeler acknowledged, “the problem is getting worse.”

This week, the city’s Budget Office released a “performance report” for some of the city’s bureaus. One of the more eye-popping statistics is the number of homeless camps the city has cleaned up over the years.

[Figure: Number of homeless camp clean-ups by the city of Portland, by fiscal year]

Keep in mind, Multnomah County reports there are 4,177 homeless residents in the entire county. But the city reports clearing more than 3,100 camps in one year. Clearly, the number of homeless in the city is much larger than reflected in the annual homeless counts.

The report makes a special note that, “As the number of clean‐ups has increased and program operations have stabilized, the total cost per clean‐up has decreased substantially as well.” Sounds like economies of scale.

Turns out, the Budget Office’s simple graphic gives enough information to estimate the economies of scale in homeless camp cleanups. Yes, it’s kinda crappy data. (Could it really be the case that, two years in a row, the city cleaned up exactly the same number of camps at exactly the same cost?) Anyway, data is data.

First we plot the total annual costs for cleanups. Of course it’s an awesome fit (R-squared of 0.97), but that’s what happens when you have three observations and two independent variables.

[Figure: Total annual cost of homeless camp clean-ups, with fitted total cost curve (R-squared = 0.97)]

Now that we have an estimate of the total cost function, we can plot the marginal cost curve (blue) and average cost curve (orange).

[Figure: Marginal cost (blue) and average cost (orange) of homeless camp clean-ups]

That looks like a textbook example of economies of scale: decreasing average cost. It also looks like a textbook example of natural monopoly: marginal cost lower than average cost over the relevant range of output.
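
For anyone who wants to replicate the back-of-the-envelope exercise behind these charts, here is a minimal sketch in Python. It assumes a quadratic total cost specification, TC(q) = b0 + b1·q + b2·q², fitted to the four reported fiscal-year observations; the Budget Office graphic doesn’t state a functional form, so the coefficients and the implied marginal and average cost curves below are illustrative rather than an exact reproduction of the charts above.

```python
# Minimal sketch, assuming a quadratic total cost function fitted to the
# Budget Office figures. With only three distinct observations, the fit is
# essentially exact, so don't read too much into the estimated curve.
import numpy as np
import matplotlib.pyplot as plt

camps = np.array([139, 139, 571, 3122], dtype=float)  # clean-ups per year
total_cost = np.array([171_109, 171_109, 578_994, 1_576_610], dtype=float)

# Fit TC(q) = b0 + b1*q + b2*q^2 by least squares
b2, b1, b0 = np.polyfit(camps, total_cost, deg=2)

# Marginal cost is the derivative of TC; average cost is TC divided by q
q = np.linspace(139, 3122, 200)
tc_hat = b0 + b1 * q + b2 * q**2
mc = b1 + 2 * b2 * q
ac = tc_hat / q

plt.plot(q, mc, label="Marginal cost")
plt.plot(q, ac, label="Average cost")
plt.xlabel("Homeless camp clean-ups per year")
plt.ylabel("Dollars per clean-up")
plt.legend()
plt.show()
```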

What strikes me as curious is how low the implied marginal cost of a homeless camp cleanup is, as shown in the table below.

FY       Camps   Total cost   Average cost   Marginal cost
2014-15    139     $171,109         $1,231          $3,178
2015-16    139     $171,109         $1,231          $3,178
2016-17    571     $578,994         $1,014            $774
2017-18  3,122   $1,576,610           $505            $142
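
As a quick check on the table, the average cost column follows directly from the reported figures (average cost equals total cost divided by camps cleaned); the marginal cost column, by contrast, comes from the fitted cost function and can’t be recovered from the raw numbers alone. A short sketch:

```python
# Reproduce the average cost column: total cost divided by camps cleaned
camps = [139, 139, 571, 3122]
total_cost = [171_109, 171_109, 578_994, 1_576_610]
for q, tc in zip(camps, total_cost):
    print(f"{q:>5} camps: average cost = ${tc / q:,.0f} per clean-up")
```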

It is somewhat shocking that the marginal cost of an additional camp cleanup is only $142. The hourly wages for the cleanup crew alone would be way more than $142. Something seems fishy with the numbers the city is reporting.

My guess: The city is shifting some of the cleanup costs to other agencies, such as Multnomah County and/or the Oregon Department of Transportation. I also suspect the city is not fully accounting for the costs of the cleanups. And I am almost certain the city is significantly underreporting how many homeless people are living on Portland streets.

This post was co-authored with Chelsea Boyd

The Food and Drug Administration has spoken, and its words have, once again, ruffled many feathers. Coinciding with the deadline for companies to lay out their plans to prevent youth access to e-cigarettes, the agency has announced new regulatory strategies that are sure to make it more difficult not only for young people to access e-cigarettes, but also for the adults who benefit from vaping.

More surprising than the FDA’s paradoxical strategy of preventing teen smoking by banning not combustible cigarettes, but their distant cousins, e-cigarettes, is that the biggest support for establishing barriers to accessing e-cigarettes seems to come from the tobacco industry itself.

Going above and beyond the FDA’s proposals, both Altria and JUUL are self-restricting flavor sales, creating more — not fewer — barriers to purchasing their products. And both companies now publicly support a 21-to-purchase mandate. Unfortunately, these barriers extend beyond restricting underage access and will no doubt affect adult smokers seeking access to reduced-risk products.

To say there are no benefits to self-regulation by e-cigarette companies would be misguided. Perhaps the biggest benefit is to increase the credibility of these companies in an industry where it has historically been lacking. Proposals to decrease underage use of their product show that these companies are committed to improving the lives of smokers. Going above and beyond the FDA’s regulations also allows them to demonstrate that they take underage use seriously.

Yet regulation, whether imposed by the government or as part of a business plan, comes at a price. This is particularly true in the field of public health. In other health areas, the FDA is beginning to recognize that it needs to balance regulatory prudence with the risks of delaying innovation. For example, by decreasing red tape in medical product development, the FDA aims to help people access novel treatments for conditions that are notoriously difficult to treat. Unfortunately, this mindset has not expanded to smoking.

Good policy, whether imposed by government or voluntarily adopted by private actors, should not help one group while harming another. Perhaps the question that should be asked, then, is not whether these new FDA regulations and self-imposed restrictions will decrease underage use of e-cigarettes, but whether they decrease underage use enough to offset the harm caused by creating barriers to access for adult smokers.

The FDA’s new point-of-sale policy restricts sales of flavored products (not including tobacco flavors or menthol/mint flavors) to either specialty, age-restricted, in-person locations or to online retailers with heightened age-verification systems. JUUL, Reynolds and Altria have also included parts of this strategy in their proposed self-regulations, sometimes going even further by limiting sales of flavored products to their company websites.

To many people, these measures may not seem like a significant barrier to purchasing e-cigarettes, but in fact, online retail is a luxury that many cannot access. Heightened online age-verification processes are likely to require most of the following: a credit or debit card, a Social Security number, a government-issued ID, a cellphone to complete two-factor authorization, and a physical address that matches the user’s billing address. According to a 2017 Federal Deposit Insurance Corp. survey, one in four U.S. households is unbanked or underbanked, which is an indicator of not having a debit or credit card. That factor alone excludes a quarter of the population, including many adults, from purchasing online. It’s important to note that the demographic characteristics of people who lack the items required to make online purchases are also the characteristics most associated with smoking.

Additionally, it’s likely that these new point-of-sale restrictions won’t have much of an effect at all on the target demographic — those who are underage. According to a 2017 Centers for Disease Control and Prevention study, of the 9 percent of high school students who currently use electronic nicotine delivery systems (ENDS), only 13 percent reported purchasing the device for themselves from a store. This suggests that 87 percent of underage users won’t be deterred by prohibitive measures to move sales to specialty stores or online. Moreover, Reynolds estimates that only 20 percent of its VUSE sales happen online, indicating that more than three-quarters of users — consisting mainly of adults — purchase products in brick-and-mortar retail locations.

Existing enforcement techniques, if properly applied at the point of sale, could have a bigger impact on youth access. Interestingly, a recent analysis by Baker White of FDA inspection reports suggests that the agency’s existing approaches to prevent youth access may be lacking — meaning that there is much room for improvement. Overall, selling to minors is extremely low-risk for stores. On average, a store receives a fine for violating the minimum age of sale only once every 36.7 years of operation, the financial risk works out to about 2 cents per day, and a no sales order (the most severe consequence) is issued only once every 2,825 years of operation. Furthermore, for every $279 the FDA receives in fines, it spends over $11,800. With odds like those, it’s no wonder some stores are willing to sell to minors: Their risk is minimal.

Eliminating access to flavored products is the other arm of the FDA’s restrictions. Many people have suggested that flavors are designed to appeal to youth, yet fewer talk about the proportion of adults who use flavored e-cigarettes. In reality, flavors are an important factor for adults who switch from combustible cigarettes to e-cigarettes. A 2018 survey of 20,676 U.S. adults who frequently use e-cigarettes showed that “since 2013, fruit-flavored e-liquids have replaced tobacco-flavored e-liquids as the most popular flavors with which participants had initiated e-cigarette use.” By relegating flavored products to specialty retailers and online sales, the FDA has forced adult smokers who may switch from combustible cigarettes to e-cigarettes to go out of their way to initiate use.

It remains to be seen if new regulations, either self- or FDA-imposed, will decrease underage use. However, we already know who is most at risk for negative outcomes from these new regulations: people who are geographically disadvantaged (for instance, people who live far away from adult-only retailers), people who might not have credit to go through an online retailer, and people who rely on new flavors as an incentive to stay away from combustible cigarettes. It’s not surprising or ironic that these are also the people who are most at risk for using combustible cigarettes in the first place.

Given the likelihood that the new way of doing business will have minimal positive effects on youth use but negative effects on adult access, we must question what the benefits of these policies are. Fortunately, we know the answer already: The FDA gets political capital and regulatory clout; industry gets credibility; governments get more excise tax revenue from cigarette sales. And smokers get left behind.

A recent NBER working paper by Gutiérrez & Philippon has attracted attention from observers who see oligopoly everywhere and activists who want governments to more actively “manage” competition. The analysis in the paper is fundamentally flawed and should not be relied upon by policymakers, regulators, or anyone else.

As noted in my earlier post, Gutiérrez & Philippon attempt to craft a causal link from differences in U.S. and EU antitrust enforcement and product market regulation to differences in market concentration and corporate profits. Their paper’s abstract leads with a bold assertion:

Until the 1990’s, US markets were more competitive than European markets. Today, European markets have lower concentration, lower excess profits, and lower regulatory barriers to entry.

This post focuses on Gutiérrez & Philippon’s claim that EU markets have lower “excess profits.” This is perhaps the most outrageous claim in the paper. Anyone who bothers to read the full paper will see that the claim that EU firms have lower excess profits is simply not supported by the paper itself. Aside from a passing mention of someone else’s work in a footnote, the only mention of “excess profits” is in the paper’s headline-grabbing abstract.

What’s even more outrageous is that the authors never define (or even describe) what they mean by excess profits.

These two factors alone should be enough to toss aside the paper’s assertion about “excess” profits. But, there’s more.

Gutiérrez & Philippon define profit to be gross operating surplus and mixed income (known as “GOPS” in the OECD’s STAN Industrial Analysis dataset). GOPS is not the same thing as gross margin or gross profit as used in business and finance (for example GOPS subtracts wages, but gross margin does not). The EU defines GOPS as (emphasis added):

Operating surplus is the surplus (or deficit) on production activities before account has been taken of the interest, rents or charges paid or received for the use of assets. Mixed income is the remuneration for the work carried out by the owner (or by members of his family) of an unincorporated enterprise. This is referred to as ‘mixed income’ since it cannot be distinguished from the entrepreneurial profit of the owner.

Here’s Figure 1 from Gutiérrez & Philippon plotting GOPS as a share of gross output.

[Figure 1 from Gutiérrez & Philippon: gross operating surplus as a share of gross output, U.S. and EU]

Look at the huge jump in gross operating surplus for U.S. firms!

Now, look at the scale of the y-axis. Not such a big jump after all.

Over 23 years, from 1992 to 2015, the gross operating surplus rate for U.S. firms grew by 2.5 percentage points. In the EU, the rate increased by about one percentage point.

Using the STAN dataset, I plotted the gross operating surplus rate for each EU country (blue dots) and the U.S. (red dots), along with a time trend. Three takeaways:

  1. There’s not much of a difference between the U.S. and the EU average—they both hover around a gross operating surplus rate of about 19.5 percent;
  2. There’s a huge variation in gross operating surplus rates across EU countries; and
  3. Yes, gross operating surplus is trending slightly upward in the U.S. and slightly downward for the EU average, but there doesn’t appear to be a huge difference in the slopes of the trendlines. In fact, the slopes are not statistically significantly different from zero, nor are they statistically significantly different from each other.

[Figure: Gross operating surplus rate by year, EU countries (blue dots) and U.S. (red dots), with time trends]
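
The comparison can be approximated in a few lines of Python. This is a rough sketch rather than the paper’s (or my) actual code: it assumes a long-format extract of the STAN data in a file called stan_total_economy.csv with country, year, variable, and value columns. “GOPS” is the variable code named above; the “PROD” code for gross output, the file name, the “USA” country label, and the abbreviated EU country list are assumptions for illustration.

```python
# Rough sketch: compare gross operating surplus rates and their time trends
# for the U.S. and EU countries using a hypothetical STAN extract.
import pandas as pd
import statsmodels.formula.api as smf

stan = pd.read_csv("stan_total_economy.csv")  # hypothetical long-format export

# One row per country-year, with the GOPS and PROD series side by side
wide = (stan.pivot_table(index=["country", "year"],
                         columns="variable", values="value")
            .reset_index())
wide["gops_rate"] = wide["GOPS"] / wide["PROD"]

EU_COUNTRIES = ["AUT", "BEL", "DEU", "ESP", "FRA", "ITA", "NLD"]  # abbreviated list
us = wide[wide["country"] == "USA"].copy()
eu = wide[wide["country"].isin(EU_COUNTRIES)].copy()

# Simple time trends for each group; the claim in the text is that neither
# slope is statistically distinguishable from zero
us_trend = smf.ols("gops_rate ~ year", data=us).fit()
eu_trend = smf.ols("gops_rate ~ year", data=eu).fit()
print(us_trend.params["year"], us_trend.pvalues["year"])
print(eu_trend.params["year"], eu_trend.pvalues["year"])

# ...and that the slopes are not statistically different from each other:
# pool the data and look at the year-by-U.S. interaction term
pooled = pd.concat([us.assign(is_us=1), eu.assign(is_us=0)])
gap = smf.ols("gops_rate ~ year * is_us", data=pooled).fit()
print(gap.params["year:is_us"], gap.pvalues["year:is_us"])
```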

The paper’s reliance on gross measures of profit raises some serious questions. For example, the Stigler Center’s James Traina finds that, after accounting for selling, general, and administrative expenses (SG&A), mark-ups for publicly traded firms in the U.S. have not meaningfully increased since 1980.

The figure below plots the net operating surplus rate (NOPS equals GOPS minus consumption of fixed capital)—not the same thing as a business’s net income.

Same three takeaways:

  1. There’s not much of a difference between the U.S. and the EU average—they both hover around a net operating surplus rate of a little more than seven percent;
  2. There’s a huge variation in net operating surplus rates across EU countries; and
  3. The slopes of the trendlines for net operating surplus in the U.S. and the EU are not statistically significantly different from zero, nor are they statistically significantly different from each other.

[Figure: Net operating surplus rate by year, EU countries and U.S., with time trends]
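
In code, continuing from the "wide" DataFrame in the sketch above, the net rate simply nets out consumption of fixed capital before dividing by gross output; “CFC” is an assumed column label for that series, and the same trend regressions can then be rerun on the new column.

```python
# Net operating surplus rate: (GOPS - consumption of fixed capital) / gross
# output. "CFC" is an assumed column label for consumption of fixed capital.
wide["nops_rate"] = (wide["GOPS"] - wide["CFC"]) / wide["PROD"]
```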

It’s very possible that U.S. firms are achieving higher and growing “excess” profits relative to EU firms. It’s also very possible they’re not. Despite the bold assertions of Gutiérrez & Philippon, the evidence presented in their paper tells us nothing one way or the other.