
Large portions of the country are expected to face a growing threat of widespread electricity blackouts in the coming years. For example, the Western Electricity Coordinating Council—the regional entity charged with overseeing the Western Interconnection grid that covers most of the Western United States and Canada—estimates that the subregion consisting of Colorado, Utah, Nevada, and portions of southern Wyoming, Idaho, and Oregon will, by 2032, see 650 hours (more than 27 days in total) over the course of the year when available resources may not be sufficient to accommodate peak demand.

Supply and demand provide the simplest explanation for the region's rising risk of power outages. Demand is expected to continue to rise, while stable supplies are diminishing. Over the next 10 years, electricity demand across the entire Western Interconnection is expected to grow by 11.4%, while scheduled resource retirements are projected to contribute to growing resource-adequacy risk in every subregion of the grid.

The largest decreases in resources are from coal, natural gas, and hydropower. Anticipated additions of highly variable solar and wind resources, as well as battery storage, will not be sufficient to offset the decline from conventional resources. The Wall Street Journal reports that, while 21,000 MW of wind, solar, and battery-storage capacity are anticipated to be added to the grid by 2030, that’s only about half as much as expected fossil-fuel retirements.

In addition to the risk associated with insufficient power generation, many parts of the U.S. are facing another problem: insufficient transmission capacity. The New York Times reports that more than 8,100 energy projects were waiting for permission to connect to electric grids at year-end 2021. That was an increase from the prior year, when 5,600 projects were queued up.

One of the many reasons for the backlog, the Times reports, is the difficulty in determining who will pay for upgrades elsewhere in the system to support the new interconnections. These costs can be huge and unpredictable. Some upgrades that penciled out as profitable when first proposed may become uneconomic in the years it takes to earn regulatory approval, and end up being dropped. According to the Times:

That creates a new problem: When a proposed energy project drops out of the queue, the grid operator often has to redo studies for other pending projects and shift costs to other developers, which can trigger more cancellations and delays.

It also creates perverse incentives, experts said. Some developers will submit multiple proposals for wind and solar farms at different locations without intending to build them all. Instead, they hope that one of their proposals will land in the queue behind another developer who has to pay for major network upgrades. The rise of this sort of speculative bidding has further jammed up the queue.

“Imagine if we paid for highways this way,” said Rob Gramlich, president of the consulting group Grid Strategies. “If a highway is fully congested, the next car that gets on has to pay for a whole lane expansion. When that driver sees the bill, they drop off. Or, if they do pay for it themselves, everyone else gets to use that infrastructure. It doesn’t make any sense.”

This is not a new problem, nor is it a problem that is unique to the electrical grid. In fact, the Federal Communications Commission (FCC) has been wrestling with this issue for years regarding utility-pole attachments.

Look up at your local electricity pole and you'll see a bunch of stuff hanging off it. The cable company may be using it to provide cable service and broadband, and the telephone company may be using it, too. These companies pay the pole owner to attach their hardware. But sometimes, the poles are at capacity and cannot accommodate new attachments. This raises the question of who should pay for the new, bigger pole: the pole owner, or the company whose attachment is driving the need for a new pole?

It’s not a simple question to answer.

In comments to the FCC, the International Center for Law & Economics (ICLE) notes:

The last-attacher-pays model may encourage both hold-up and hold-out problems that can obscure the economic reasons a pole owner would otherwise have to replace a pole before the end of its useful life. For example, a pole owner may anticipate, after a recent new attachment, that several other companies are also interested in attaching. In this scenario, it may be in the owner’s interest to replace the existing pole with a larger one to accommodate the expected demand. The last-attacher-pays arrangement, however, would diminish the owner’s incentive to do so. The owner could instead simply wait for a new attacher to pay the full cost of replacement, thereby creating a hold-up problem that has been documented in the record. This same dynamic also would create an incentive for some prospective attachers to hold-out before requesting an attachment, in expectation that some other prospective attacher would bear the costs.

This seems to be very similar to the problems facing electricity-transmission markets. In our comments to the FCC, we conclude:

A rule that unilaterally imposes a replacement cost onto an attacher is expedient from an administrative perspective but does not provide an economically optimal outcome. It likely misallocates resources, contributes to hold-outs and holdups, and is likely slowing the deployment of broadband to the regions most in need of expanded deployment. Similarly, depending on the condition of the pole, shifting all or most costs onto the pole owner would not necessarily provide an economically optimal outcome. At the same time, a complex cost-allocation scheme may be more economically efficient, but also may introduce administrative complexity and disputes that could slow broadband deployment. To balance these competing considerations, we recommend the FCC adopt straightforward rules regarding both the allocation of pole-replacement costs and the rates charged to attachers, and that these rules avoid shifting all the costs onto one or another party.

To ensure rapid deployment of new energy and transmission resources, federal, state, and local governments should turn to the lessons the FCC is learning in its pole-attachment rulemaking to develop a system that efficiently and fairly allocates the costs of expanding transmission connections to the electrical grid.

If you wander into an undergraduate economics class on the right day at the right time, you might catch the lecturer talking about Giffen goods: the rare case where demand curves can slope upward. The Irish potato famine is often used as an example. As the story goes, potatoes were a huge part of the Irish diet and consumed a large part of Irish family budgets. A failure of the potato crop reduced the supply of potatoes and potato prices soared. Because families had to spend so much on potatoes, they couldn’t afford much else, so spending on potatoes increased despite rising prices.

It’s a great story of injustice with a nugget of economics: Demand curves can slope upward!

Follow the students around for a few days, and they’ll be looking for Giffen goods everywhere. Surely, packaged ramen and boxed macaroni and cheese are Giffen goods. So are white bread and rice. Maybe even low-end apartments.

While it's a fun concept to consider, the potato famine story is likely apocryphal. In truth, it's nearly impossible to find a Giffen good in the real world. My version of Greg Mankiw's massive "Principles of Economics" textbook devotes five paragraphs to Giffen goods, but the concept has little real-world relevance, which is perhaps why it gets only five paragraphs.

Wander into another economics class, and you might catch the lecturer talking about monopsony—that is, a market in which a small number of buyers control the price of inputs such as labor. I say “might” because—like Giffen goods—monopsony is an interesting concept to consider, but very hard to find a clear example of in the real world. Mankiw’s textbook devotes only four paragraphs to monopsony, explaining that the book “does not present a formal model of monopsony because, in the world, monopsonies are rare.”

Even so, monopsony is a hot topic these days. It seems that monopsonies are everywhere. Walmart and Amazon are monopsonist employers. So are poultry, pork, and beef companies. Local hospitals monopsonize the market for nurses and physicians. The National Collegiate Athletic Association is a monopsony employer of college athletes. Ultimate Fighting Championship has a monopsony over mixed-martial-arts fighters.

In 1994, David Card and Alan Krueger's earthshaking study found a minimum-wage increase had no measurable effect on fast-food employment. They investigated monopsony power as one explanation but concluded that a monopsony model was not supported by their findings. They note:

[W]e find that prices of fast-food meals increased in New Jersey relative to Pennsylvania, suggesting that much of the burden of the minimum-wage rise was passed on to consumers. Within New Jersey, however, we find no evidence that prices increased more in stores that were most affected by the minimum-wage rise. Taken as a whole, these findings are difficult to explain with the standard competitive model or with models in which employers face supply constraints (e.g., monopsony or equilibrium search models). [Emphasis added]

Even so, the monopsony hunt was on and it intensified during President Barack Obama’s administration. During his term, the U.S. Justice Department (DOJ) brought suit against several major Silicon Valley employers for anticompetitively entering into agreements not to “poach” programmers and engineers from each other. The administration also brought suit against a hospital association for an agreement to set uniform billing rates for certain nurses. Both cases settled but the Silicon Valley allegations led to a private class-action lawsuit.

In 2016, Obama’s Council of Economic Advisers published an issue brief on labor-market monopsony. The brief concluded that “evidence suggest[s] that firms may have wage-setting power in a broad range of settings.”

Around the same time, the Obama administration announced that it intended to “criminally investigate naked no-poaching or wage-fixing agreements that are unrelated or unnecessary to a larger legitimate collaboration between the employers.” The DOJ argued that no-poach agreements that allocate employees between companies are per se unlawful restraints of trade that violate Section 1 of the Sherman Act.

If one believes that monopsony power is stifling workers' wages and benefits, then this would be a good first step to build up a body of evidence and precedent. Go after the low-hanging fruit of a conspiracy that is a per se violation of the Sherman Act, secure some wins, and then start probing the more challenging cases.

After several matters that resulted in settlements, the DOJ brought its first criminal wage-fixing case in late 2020. In United States v. Jindal, the government charged two employees of a Texas health-care staffing company with colluding with another staffing company to decrease pay rates for physical therapists and physical-therapist assistants.

The defense in Jindal conceded that price-fixing was per se illegal under the Sherman Act but argued that prices and wages are two different concepts. Therefore, the defense claimed that, even if it was engaged in wage-fixing, the conduct would not be per se illegal. That was a stretch, and the district court judge was having none of it, ruling: "The antitrust laws fully apply to the labor markets, and price-fixing agreements among buyers … are prohibited by the Sherman Act."

Nevertheless, the jury in Jindal found the defendants not guilty of wage-fixing in violation of the Sherman Act, and also not guilty of a related conspiracy charge.

The DOJ also brought criminal no-poach cases against three other health-care companies and their employees: United States v. Surgical Care Affiliates LLC; United States v. Hee; and United States v. DaVita Inc. Each of the indictments alleged no-poach agreements in which defendants conspired with competitors not to recruit each other’s employees. Hee also included wage-fixing allegations.

Before trial, the defense in DaVita filed a motion to dismiss, arguing that no-poach agreements did not amount to illegal market-allocation agreements. Instead, the defense claimed that no-poach agreements were something less restrictive. Rather than a flat-out refusal to hire competitors' employees, they were more akin to agreeing not to seek out competitors' employees. As with Jindal, this was too much of a stretch for the judge, who ruled that no-poach agreements could be illegal market-allocation agreements.

A day after the Jindal verdict, the jury in DaVita acquitted the kidney-dialysis provider and its former CEO of charges that they conspired with competitors to suppress competition for employees through no-poach agreements.

The DaVita jurors appeared to be hung up on the definition of “meaningful competition” in the relevant market. The defense presented information showing that, despite any agreements, employees frequently changed jobs among the companies. Thus, it was argued that any agreement did not amount to an allocation of the market for employees.

The prosecution called several corporate executives who testified that the non-solicitation agreements merely required DaVita employees to tell their bosses they were looking for another job before they could be considered for positions at the three alleged co-conspirator companies. Some witnesses indicated that, by informing their bosses, they were able to obtain promotions and/or increased compensation. This was supported by expert testimony concluding that DaVita salaries changed during the alleged conspiracy period at a rate higher than the health-care industry as a whole. This finding is at odds with a theory that the non-solicitation agreement was designed to stabilize or suppress compensation.

The Jindal and DaVita cases highlight some of the enormous challenges in mounting a labor-monopsonization case. Even if agencies can “win” or get concessions on defining the relevant markets, they still face challenges in establishing that no-poach agreements amount to a “meaningful” restraint of trade. DaVita suggests that a showing of job turnover and/or increased compensation during an alleged conspiracy period may be sufficient to convince a jury that a no-poach agreement may not be anticompetitive and—under certain circumstances—may even be pro-competitive.

For now, the hunt for a monopsony labor market continues, along with the hunt for the ever-elusive Giffen good.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the law, economics, and policy of the antitrust lawsuits against Google. The entire series of posts is available here.]

U.S. antitrust regulators have a history of narrowly defining relevant markets—often to the point of absurdity—in order to create market power out of thin air. The Federal Trade Commission (FTC) famously declared that Whole Foods and Wild Oats operated in the “premium natural and organic supermarkets market”—a narrowly defined market designed to exclude other supermarkets carrying premium natural and organic foods, such as Walmart and Kroger. Similarly, for the Staples-Office Depot merger, the FTC

narrowly defined the relevant market as “office superstore” chains, which excluded general merchandisers such as Walmart, K-Mart and Target, who at the time accounted for 80% of office supply sales.

Texas Attorney General Ken Paxton’s complaint against Google’s advertising business, joined by the attorneys general of nine other states, continues this tradition of narrowing market definition to shoehorn market dominance where it may not exist.

For example, one recent paper critical of Google’s advertising business narrows the relevant market first from media advertising to digital advertising, then to the “open” supply of display ads and, finally, even further to the intermediation of the open supply of display ads. Once the market has been sufficiently narrowed, the authors conclude Google’s market share is “perhaps sufficient to confer market power.”

While whittling down market definitions may achieve the authors’ purpose of providing a roadmap to prosecute Google, one byproduct is a mishmash of market definitions that generates as many as 16 relevant markets for digital display and video advertising, in many of which Google doesn’t have anything approaching market power (and in some of which, in fact, Facebook, and not Google, is the most dominant player).

The Texas complaint engages in similar relevant-market gerrymandering. It claims that, within digital advertising, there exist several relevant markets and that Google monopolizes four of them:

  1. Publisher ad servers, which manage the inventory of a publisher's ad space (e.g., on a newspaper's website or blog);
  2. Display ad exchanges, the “marketplace” in which auctions directly match publishers’ selling of ad space with advertisers’ buying of ad space;
  3. Display ad networks, which are similar to exchanges, except a network acts as an intermediary that collects ad inventory from publishers and sells it to advertisers; and
  4. Display ad-buying tools, which include demand-side platforms that collect bids for ad placement with publishers.

The complaint alleges, “For online publishers and advertisers alike, the different online advertising formats are not interchangeable.” But this glosses over a bigger challenge for the attorneys general: Is online advertising a separate relevant market from offline advertising?

Digital advertising, of which display advertising is a small part, is only one of many channels through which companies market their products. About half of today’s advertising spending in the United States goes to digital channels, up from about 10% a decade ago. Approximately 30% of ad spending goes to television, with the remainder going to radio, newspapers, magazines, billboards and other “offline” forms of media.

Physical newspapers now account for less than 10% of total advertising spending. Traditionally, newspapers obtained substantial advertising revenues from classified ads. As internet usage increased, newspaper classifieds have been replaced by less costly and more effective internet classifieds—such as those offered by Craigslist—or targeted ads on Google Maps or Facebook.

The price of advertising has fallen steadily over the past decade, while output has risen. Spending on digital advertising in the United States grew from $26 billion in 2010 to nearly $130 billion in 2019, an average increase of 20% a year. Over the same period, the producer price index (PPI) for internet advertising sales declined by nearly 40%. Rising spending in the face of falling prices indicates the number of ads bought and sold increased by approximately 27% a year.
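To make the arithmetic behind that 27% figure explicit, here is a minimal sketch in Python, assuming revenue decomposes as price times quantity; the spending figures and the 40% PPI decline are taken from the paragraph above.

```python
# Implied annual quantity growth from spending growth and price decline.
# Assumes revenue = price x quantity; figures are from the text above.
spending_2010, spending_2019 = 26e9, 130e9  # U.S. digital ad spending ($)
years = 9
ppi_change = -0.40                          # ~40% decline in the internet-ad PPI

spend_growth = (spending_2019 / spending_2010) ** (1 / years) - 1  # ~19.6%/yr
price_growth = (1 + ppi_change) ** (1 / years) - 1                 # ~-5.5%/yr
qty_growth = (1 + spend_growth) / (1 + price_growth) - 1           # ~26.6%/yr

print(f"spending: {spend_growth:.1%}/yr, price: {price_growth:.1%}/yr, "
      f"quantity: {qty_growth:.1%}/yr")
```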

Since 2000, advertising spending has been falling as a share of gross domestic product, with online advertising growing as a share of that. The combination of increasing quantity, decreasing cost, and increasing total revenues is consistent with a growing and increasingly competitive market, rather than one of rising concentration and reduced competition.

There is little or no empirical data evaluating the extent to which online and offline advertising constitute distinct markets or the extent to which digital display is a distinct submarket of online advertising. As a result, analysis of adtech competition has relied on identifying several technical and technological factors—as well as the say-so of participants in the business—that the analysts assert distinguish online from offline and establish digital display (versus digital search) as a distinct submarket. This approach has been used and accepted, especially in cases in which pricing data has not been available.

But the pricing information that is available raises questions about the extent to which online advertising is a distinct market from offline advertising. For example, Avi Goldfarb and Catherine Tucker find that, when local regulations prohibit offline direct advertising, search advertising is more expensive, indicating that search and offline advertising are substitutes. In other research, they report that online display advertising circumvents, in part, local bans on offline billboard advertising for alcoholic beverages. In both studies, Goldfarb and Tucker conclude their results suggest online and offline advertising are substitutes. They also conclude this substitution suggests that online and offline markets should be considered together in the context of antitrust.

While this information is not sufficient to define a broader relevant market, it raises questions about relying solely on technical or technological distinctions and the say-so of market participants.

In the United States, plaintiffs do not get to define the relevant market. That is up to the judge or the jury. Plaintiffs have the burden to convince the court that a proposed narrow market definition is the correct one. With strong evidence that online and offline ads are substitutes, the court should not blindly accept the gerrymandered market definitions posited by the attorneys general.

Rolled by Rewheel, Redux

Eric Fruits —  15 December 2020

The Finnish consultancy Rewheel periodically issues reports using mobile wireless pricing information to make claims about which countries’ markets are competitive and which are not. For example, Rewheel claims Canada and Greece have the “least competitive monthly prices” while the United Kingdom and Finland have the most competitive.

Rewheel often claims that the number of carriers operating in a country is the key determinant of wireless pricing. 

Their pricing studies attract a great deal of attention. For example, in February 2019 testimony before the U.S. House Energy and Commerce Committee, Phillip Berenbroick of Public Knowledge asserted: “Rewheel found that consumers in markets with three facilities-based providers paid twice as much per gigabyte as consumers in four firm markets.” So, what’s wrong with Rewheel? An earlier post highlights some of the flaws in Rewheel’s methodology. But there’s more.

Rewheel creates fictional market baskets of mobile plans for each provider in a country. Country-by-country comparisons are made by evaluating the lowest-priced basket for each country and the basket with the median price.

Rewheel’s market baskets are hypothetical packages that say nothing about which plans are actually chosen by consumers or what the actual prices paid by those consumers were. This is not a new criticism. In 2014, Pauline Affeldt and Rainer Nitsche called these measures “meaningless”:

Such approaches are taken by Rewheel (2013) and also the Austrian regulator rtr … Such studies face the following problems: They may pick tariffs that are relatively meaningless in the country. They will have to assume one or more consumption baskets (voice minutes, data volume etc.) in order to compare tariffs. This may drive results. Apart from these difficulties such comparisons require very careful tracking of tariffs and their changes. Even if one assumes studying a sample of tariffs is potentially meaningful, a comparison across countries (or over time) would still require taking into account key differences across countries (or over time) like differences in demand, costs, network quality etc.

For example, reporting that the average price of a certain T-Mobile USA smartphone, tablet and home Internet plan is $125 is about as useless as knowing that the average price of a Kroger shopping cart containing a six-pack of Budweiser, a dozen eggs, and a pound of oranges is $10. Is Safeway less “competitive” if the price of the same cart of goods is $12? What could you say about pricing at a store that doesn’t sell Budweiser (e.g., Trader Joe’s)?

Rewheel solves that last problem by doing something bonkers. If a carrier doesn’t offer a plan in one of Rewheel’s baskets, they “assign” the HIGHEST monthly price in the world. 

For example, Rewheel notes that Vodafone India does not offer a fixed wireless broadband plan with at least 1,000GB of data and download speeds of 100 Mbps or faster. So, Rewheel “assigns” Vodafone India the highest price in its dataset. That price belongs to a plan that’s sold in the United Kingdom. It simply makes no sense. 

To return to the supermarket analogy, it would be akin to saying that, if a Trader Joe’s in the United States doesn’t sell six-packs of Budweiser, we should assume the price of Budweiser at Trader Joe’s is equal to the world’s most expensive six-pack of the beer. In reality, Trader Joe’s is known for having relatively low prices. But using the Rewheel approach, the store would be assessed to have some of the highest prices.

Because of Rewheel's "assignment" of the highest monthly prices to many plans, it's irrelevant whether their analysis is based on a country's median price or lowest price. The median is skewed upward, and the lowest actual price may be missing from the dataset entirely. The sketch below illustrates the problem.
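A stylized example, with made-up prices, shows how the assignment pulls the median upward:

```python
# Hypothetical basket prices for one country (EUR/month). Two carriers
# don't offer a qualifying plan, so a Rewheel-style "assignment" gives
# them the most expensive comparable plan in the world.
import statistics

offered = [25, 30, 35, 40]   # plans actually sold in the country
world_max = 120              # priciest qualifying plan anywhere

with_assignments = offered + [world_max, world_max]

print(statistics.median(offered))           # 32.5 -- actual median
print(statistics.median(with_assignments))  # 37.5 -- median after assignment
```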

Rewheel publishes these reports to support its argument that mobile prices are lower in markets with four carriers than in those with three carriers. But even if we accept Rewheel's price data as reliable (which they are not), their own data show no relationship between the number of carriers and average price.

Notice the huge overlap of observations among markets with three and four carriers. 

Rewheel’s latest report provides a redacted dataset, reporting only data usage and weighted average price for each provider. So, we have to work with what we have. 

A simple regression analysis shows there is no statistically significant difference in the intercept or the slopes for markets with three, four or five carriers (the default is three carriers in the regression). Based on the data Rewheel provides to the public, the number of carriers in a country has no relationship to wireless prices.
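For readers who want to replicate the idea, here is a sketch of the kind of specification described above. It is a reconstruction, not the original analysis, and it runs on synthetic placeholder data standing in for Rewheel's redacted dataset.

```python
# Regress weighted average price on data usage, letting both the
# intercept and the slope shift with the number of carriers.
# Three-carrier markets are the omitted (default) category.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 60
df = pd.DataFrame({
    "usage": rng.uniform(1, 50, n),        # avg data usage (GB/month)
    "carriers": rng.choice([3, 4, 5], n),  # carriers in the country
})
# Synthetic prices constructed with no carrier-count effect at all.
df["price"] = 20 + 0.4 * df["usage"] + rng.normal(0, 5, n)

model = smf.ols("price ~ usage * C(carriers)", data=df).fit()
print(model.summary())
# If the C(carriers) intercept shifts and the usage:C(carriers)
# interactions are statistically indistinguishable from zero, carrier
# count shows no relationship to price.
```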

Rewheel seems to have a rich dataset of pricing information that could be useful to inform policy. It’s a shame that their topline summaries seem designed to support a predetermined conclusion.

Germán Gutiérrez and Thomas Philippon have released a major rewrite of their paper comparing the U.S. and EU competitive environments. 

Although the NBER website provides an enticing title — “How European Markets Became Free: A Study of Institutional Drift” — the paper itself has a much more yawn-inducing title: “How EU Markets Became More Competitive Than US Markets: A Study of Institutional Drift.”

Having already critiqued the original paper at length (here and here), I wouldn’t normally take much interest in the do-over. However, in a recent episode of Tyler Cowen’s podcast, Jason Furman gave a shout out to Philippon’s work on increasing concentration. So, I thought it might be worth a review.

As with the original, the paper begins with a conclusion: The EU appears to be more competitive than the U.S. The authors then concoct a theory to explain their conclusion. The theory’s a bit janky, but it goes something like this:

  • Because of lobbying pressure and regulatory capture, an individual country will enforce competition policy at a suboptimal level.
  • Because of competing interests among different countries, a “supra-national” body will be more independent and better able to foster pro-competitive policies and to engage in more vigorous enforcement of competition policy.
  • The EU's supra-national body and its Directorate-General for Competition are more independent than the U.S. Department of Justice and Federal Trade Commission.
  • Therefore, their model explains why the EU is more competitive than the U.S. Q.E.D.

If you’re looking for what this has to do with “institutional drift,” don’t bother. The term only shows up in the title.

The original paper provided evidence from 12 separate "markets" that the authors say demonstrated their conclusion about EU vs. U.S. competitiveness. These weren't really "markets" in the competition-policy sense; they were just broad industry categories, such as health, information, trade, and professional services (actually "other business sector services").

As pointed out in one of my earlier critiques, in all but one of these industries, the 8-firm concentration ratios for the U.S. and the EU are below 40 percent, and the HHI measures reported in the original paper are at levels that most observers would presume to be competitive.

Sending their original markets to drift in the appendices, Gutiérrez and Philippon’s revised paper focuses its attention on two markets — telecommunications and airlines — to highlight their claims that EU markets are more competitive than the U.S. First, telecoms:

To be more concrete, consider the Telecom industry and the entry of the French Telecom company Free Mobile. Until 2011, the French mobile industry was an oligopoly with three large historical incumbents and weak competition. … Free obtained its 4G license in 2011 and entered the market with a plan of unlimited talk, messaging and data for €20. Within six months, the incumbents Orange, SFR and Bouygues had reacted by launching their own discount brands and by offering €20 contracts as well. … The relative price decline was 40%: France went from being 15% more expensive than the US [in 2011] to being 25% cheaper in about two years [in 2013].

While this is an interesting story about how entry can increase competition, the story of a single firm entering a market in a single country is hardly evidence that the EU as a whole is more competitive than the U.S.

What Gutiérrez and Philippon don’t report is that from 2013 to 2019, prices declined by 12% in the U.S. and only 8% in France. In the EU as a whole, prices decreased by only 5% over the years 2013-2019.

Gutiérrez and Philippon’s passenger airline story is even weaker. Because airline prices don’t fit their narrative, they argue that increasing airline profits are evidence that the U.S. is less competitive than the EU. 

The picture above is from Figure 5 of their paper (“Air Transportation Profits and Concentration, EU vs US”). They claim that the “rise in US concentration and profits aligns closely with a controversial merger wave,” with the vertical line in the figure marking the Delta-Northwest merger.

Sure, profitability among U.S. firms increased. But, before the “merger wave,” profits were negative. Perhaps predatory pricing is pro-competitive after all.

Where Gutiérrez and Philippon really fumble is with airline pricing. Since the merger wave that pulled the U.S. airline industry out of insolvency, ticket prices (as measured by the Consumer Price Index), have decreased by 6%. In France, prices increased by 4% and in the EU, prices increased by 30%. 

The paper relies more heavily on eyeballing graphs than statistical analysis, but something about Table 2 caught my attention — the R-squared statistics. First, they're all over the place. But, look at column (1): A perfect 1.00 R-squared. Could it be that Gutiérrez and Philippon's statistical model has (almost) as many parameters as observations?

Notice that all the regressions with an R-squared of 0.9 or higher include country fixed effects. The two regressions with R-squareds of 0.95 and 0.96 also include country-industry fixed effects. It's very possible that the regression results are driven entirely by idiosyncratic differences among countries and industries.

Gutiérrez and Philippon provide no interpretation for their results in Table 2, but it seems to work like this, using column (1): A 10% increase in the 4-firm concentration ratio (which is different from a 10 percentage point increase), would be associated with a 1.8% increase in prices four years later. So, an increase in CR4 from 20% to 22% (or an increase from 60% to 66%) would be associated with a 1.8% increase in prices over four years, or about 0.4% a year. On the one hand, I just don’t buy it. On the other hand, the effect is so small that it seems economically insignificant. 
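Here is the arithmetic under that reading, assuming column (1) reports a log-log elasticity of 0.18; the elasticity is implied by the 1.8%-per-10% interpretation above, not a number quoted from the paper.

```python
# Price response implied by an elasticity of 0.18 in a log-log model.
elasticity = 0.18
cr4_increase = 0.10  # a 10% relative increase in CR4 (e.g., 20% -> 22%)

price_change = (1 + cr4_increase) ** elasticity - 1
print(f"{price_change:.2%} over four years")  # ~1.73%
print(f"{price_change / 4:.2%} per year")     # ~0.43%
```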

I’m sure Gutiérrez and Philippon have put a lot of time into this paper and its revision. But there’s an old saying that the best thing about banging your head against the wall is that it feels so good when it stops. Perhaps, it’s time to stop with this paper and let it “drift” into obscurity.

Last month, the EU General Court annulled the EU Commission's decision to block the proposed acquisition of Telefónica UK by Hutchison 3G UK.

In what could be seen as a rebuke of the Directorate-General for Competition (DG COMP), the court clarified the proof required to block a merger, which could have a significant effect on future merger enforcement:

In the context of an analysis of a significant impediment to effective competition the existence of which is inferred from a body of evidence and indicia, and which is based on several theories of harm, the Commission is required to produce sufficient evidence to demonstrate with a strong probability the existence of significant impediments following the concentration. Thus, the standard of proof applicable in the present case is therefore stricter than that under which a significant impediment to effective competition is “more likely than not,” on the basis of a “balance of probabilities,” as the Commission maintains. By contrast, it is less strict than a standard of proof based on “being beyond all reasonable doubt.”

Over the relevant time period, there were four retail mobile network operators in the United Kingdom: (1) EE Ltd, (2) O2, (3) Hutchison 3G UK Ltd (“Three”), and (4) Vodafone. The merger would have combined O2 and Three, which would account for 30-40% of the retail market. 

The Commission argued that Three’s growth in market share over time and its classification as a “maverick” demonstrated that Three was an “important competitive force” that would be eliminated with the merger. The court was not convinced: 

The mere growth in gross add shares over several consecutive years of the smallest mobile network operator in an oligopolistic market, namely Three, which has in the past been classified as a “maverick” by the Commission (Case COMP/M.5650 — T-Mobile/Orange) and in the Statement of Objections in the present case, does not in itself constitute sufficient evidence of that operator’s power on the market or of the elimination of the important competitive constraints that the parties to the concentration exert upon each other.

While the Commission classified Three as a maverick, it also claimed that maverick status was not necessary to be an important competitive force. Nevertheless, the Commission pointed to Three's history of maverick-y behavior, such as launching its "One Plan," offering free international roaming, and providing 4G at no additional cost. The court, however, noted that those initiatives were "historical in nature," and provided no evidence of future conduct:

The Commission’s reasoning in that regard seems to imply that an undertaking which has historically played a disruptive role will necessarily play the same role in the future and cannot reposition itself on the market by adopting a different pricing policy.

The EU General Court appears to express the same frustration with mavericks as the court in H&R Block/TaxACT: "The arguments over whether TaxACT is or is not a 'maverick' — or whether perhaps it once was a maverick but has not been a maverick recently — have not been particularly helpful to the Court's analysis."

The General Court's recent decision raises the bar of proof required to block a merger. It also provides a "strong probability" that the days of maverick madness may soon be over.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Eric Fruits (Chief Economist, International Center for Law & Economics).]

While much of the world of competition policy has focused on mergers in the COVID-19 era, some observers see mergers as one way of saving distressed but valuable firms. Others have called for a merger moratorium out of fear that more mergers will lead to increased concentration and market power. In the meantime, there has been a growing push for increased nationalization of a wide range of businesses and industries.

In most cases, the call for a government takeover is not a reaction to the public health and economic crises associated with the coronavirus. Instead, COVID-19 is a convenient excuse to pursue long-sought policies.

Last year, well before the pandemic, New York mayor Bill de Blasio called for a government takeover of electrical grid operator ConEd because he was upset over blackouts during a heatwave. Earlier that year, he threatened to confiscate housing units from private landlords: "we will seize their buildings, and we will put them in the hands of a community nonprofit that will treat tenants with the respect they deserve."

With that sort of track record, it should come as no surprise the mayor proposed a government takeover of key industries to address COVID-19: "This is a case for a nationalization, literally a nationalization, of crucial factories and industries that could produce the medical supplies to prepare this country for what we need." Dana Brown, director of The Next System Project at The Democracy Collaborative, agrees: "We should nationalize what remains of the American vaccine industry now, thereby assuring that any coronavirus vaccines produced can be made as widely available and as inexpensive as possible."

Dan Sullivan in the American Prospect suggests the U.S. should nationalize all the airlines. Some have gone so far as to call for nationalization of the U.S. oil industry.

On the one hand, it’s clear that de Blasio and Brown have no confidence in the price system to efficiently allocate resources. Alternatively, they may have overconfidence in the political/bureaucratic system to efficiently, and “equitably,” distribute resources. On the other hand, as Daniel Takash points out in an earlier post, both pharmaceuticals and oil are relatively unpopular industries with many Americans, in which case the threat of a government takeover has a big dose of populist score settling:

Yet last year a Gallup poll found that of 25 major industries, the pharmaceutical industry was the most unpopular–trailing behind fossil fuels, lawyers, and even the federal government. 

In the early days of the pandemic, France’s finance minister Bruno Le Maire promised to protect “big French companies.” The minister identified a range of actions under consideration: “That can be done by recapitalization, that can be done by taking a stake, I can even use the term nationalization if necessary.” While he did not mention any specific companies, it’s been speculated Air France KLM may be a target.

The Italian government is expected to nationalize Alitalia soon. The airline has been in state administration since May 2017, and the Italian government will have 100% control of the airline by June. Last week, the German government took a 20% stake in Lufthansa, in what has been characterized as a “temporary partial nationalization.” In Canada, Prime Minister Justin Trudeau has been coy about speculation that the government might nationalize Air Canada. 

Obviously, these takeovers have "bailout" written all over them, and bailouts have their own anticompetitive consequences that can be worse than those associated with mergers. For example, Ryanair announced it will contest the aid package for Lufthansa. Ryanair chief executive Michael O'Leary claims the aid will allow Lufthansa to "engage in below-cost selling" and make it harder for Ryanair and its rival low-cost carrier EasyJet to compete.

There is also a bit of a "national champion" aspect to the takeovers. Each of the potential targets is (or was) considered its nation's flagship airline. World Bank economists Tanja Goodwin and Georgiana Pop highlight the risk of nationalization harming competition:

These [sic] should avoid rescuing firms that were already failing. …  But governments should also refrain from engaging in production or service delivery in industries that can be served by the private sector. The role of SOEs [state owned enterprises] should be assessed in order to ensure that bailout packages are not exclusively and unnecessarily favoring a dominant SOE.

To be sure, COVID-19-related mergers could raise the specter of increased market power post-pandemic. But, this risk must be balanced against the risks posed by a merger moratorium. These include the risk of widespread bankruptcies (that's another post) and/or the possibility of nationalization of firms and industries. Either option can reduce competition, harming consumers, employees, and suppliers.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Eric Fruits (Chief Economist, International Center for Law & Economics).]

Earlier this week, merger talks between Uber and food delivery service Grubhub surfaced. House Antitrust Subcommittee Chairman David N. Cicilline quickly reacted to the news:

Americans are struggling to put food on the table, and locally owned businesses are doing everything possible to keep serving people in our communities, even under great duress. Uber is a notoriously predatory company that has long denied its drivers a living wage. Its attempt to acquire Grubhub—which has a history of exploiting local restaurants through deceptive tactics and extortionate fees—marks a new low in pandemic profiteering. We cannot allow these corporations to monopolize food delivery, especially amid a crisis that is rendering American families and local restaurants more dependent than ever on these very services. This deal underscores the urgency for a merger moratorium, which I and several of my colleagues have been urging our caucus to support.

"Pandemic profiteering" rolls nicely off the tongue, and we're sure to see that phrase much more over the next year or so.

Grubhub shares jumped 29% on Tuesday, the day the merger talks came to light, as shown in the figure below. The Wall Street Journal reports the companies are considering a deal that would value Grubhub stock at around 1.9 Uber shares, or $60-65 a share, based on Thursday's price.

But is that “pandemic profiteering?”

After Amazon announced its intended acquisition of Whole Foods, the grocer’s stock price soared by 27%. Rep. Cicilline voiced some convoluted concerns about that merger, but said nothing about profiteering at the time. Different times, different messaging.

Rep. Cicilline and others have been calling for a merger moratorium during the pandemic; the congressman used the Uber/Grubhub announcement as Exhibit A in his indictment of merger activity.

A moratorium would make things much easier for regulators. No more fighting over relevant markets, no HHI calculations, no experts debating SSNIPs or GUPPIs, no worries over consumer welfare, no failing firm defenses. Just a clear, brightline “NO!”

Even before the pandemic, it was well known that the food-delivery industry was due for a shakeout. NPR reports that, even as the business is growing, none of the top food-delivery apps is turning a profit, with one analyst concluding consolidation was "inevitable." Thus, even if a moratorium slowed or stopped the Uber/Grubhub merger, at some point a merger in the industry will happen and the U.S. antitrust authorities will have to evaluate it.

First, we have to ask, “What’s the relevant market?” The government has a history of defining relevant markets so narrowly that just about any merger can be challenged. For example, for the scuttled Whole Foods/Wild Oats merger, the FTC famously narrowed the market to “premium natural and organic supermarkets.” Surely, similar mental gymnastics will be used for any merger involving food delivery services.

While food delivery has grown in popularity over the past few years, delivery represents less than 10% of U.S. food service sales. While Rep. Cicilline may be correct that families and local restaurants are “more dependent than ever” on food delivery, delivery is only a small fraction of a large market. Even a monopoly of food delivery service would not confer market power on the restaurant and food service industry.

No reasonable person would claim an Uber/Grubhub merger would increase market power in the restaurant and food service industry. But, it might confer market power in the food-delivery market. Much attention is paid to the "Big Four": DoorDash, Grubhub, Uber Eats, and Postmates. But, these platform delivery services are part of the larger food-service delivery market, of which platforms account for about half of the industry's revenues. Pizza accounts for the largest share of restaurant-to-consumer delivery.

This raises the big question of what is the relevant market: Is it the entire food delivery sector, or just the platform-to-consumer sector? 

Based on the information in the figure below, defining the market narrowly would place an Uber/Grubhub merger squarely in the “presumed to be likely to enhance market power” category.

  • 2016 HHI: <3,175
  • 2018 HHI: <1,474
  • 2020 HHI: <2,249 pre-merger; <4,153 post-merger

Alternatively, defining the market to encompass all food delivery would cut the platforms’ shares roughly in half and the merger would be unlikely to harm competition, based on HHI. Choosing the relevant market is, well, relevant.
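For readers unfamiliar with the index, here is a minimal sketch of the HHI arithmetic, using hypothetical shares rather than the Second Measure figures behind the numbers above:

```python
# HHI = sum of squared market shares (shares in percent, 0-100).
# Under the 2010 Horizontal Merger Guidelines, a post-merger HHI above
# 2,500 with an increase of more than 200 points is presumed likely to
# enhance market power.
def hhi(shares):
    return sum(s ** 2 for s in shares)

pre = [45, 25, 20, 10]    # hypothetical platform shares (%)
post = [45, 25 + 20, 10]  # the 25% and 20% firms merge

print(hhi(pre))           # 3150
print(hhi(post))          # 4150
print(2 * 25 * 20)        # merger delta = 2 * s1 * s2 = 1000
```

Note how defining the market more broadly, which cuts every share roughly in half, cuts the HHI by roughly three-quarters, since shares enter as squares.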

The Second Measure data suggest that concentration in the platform delivery sector decreased with the entry of Uber Eats, but subsequently increased with DoorDash's rising share, which included the acquisition of Caviar from Square.

(NB: There seems to be a significant mismatch in the delivery revenue data. Statista reports platform delivery revenues increased by about 40% from 2018 to 2020, but Second Measure indicates revenues have more than doubled.) 

Geoffrey Manne, in an earlier post, points out that "while national concentration does appear to be increasing in some sectors of the economy, it's not actually so clear that the same is true for local concentration — which is often the relevant antitrust market." That may be the case here.

The figure below is a sample of platform delivery shares by city. I added data from an earlier study of 2017 shares. In all but two metro areas, Uber and Grubhub’s combined market share declined from 2017 to 2020. In Boston, the combined shares did not change and in Los Angeles, the combined shares increased by 1%.

(NB: There are some serious problems with this data, notably that it leaves out the restaurant-to-consumer sector and assumes the entire platform-to-consumer sector is comprised of only the “Big Four.”)

Platform-to-consumer delivery is a complex two-sided market in which the platforms link, and compete for, restaurants, drivers, and consumers. Restaurants have a choice of using multiple platforms or entering into exclusive arrangements. Many drivers work for multiple platforms, and many consumers use multiple platforms.

Fundamentally, the rise of platform-to-consumer delivery is an evolution in vertical integration. Restaurants can choose to offer no delivery, use their own in-house delivery drivers, or use a third-party delivery service. Every platform faces competition from in-house delivery, placing a limit on its ability to raise prices to restaurants and consumers.

The choice of delivery is not an either-or decision. For example, many pizza restaurants that have their own delivery drivers also use platform delivery services. Their own drivers may serve a limited geographic area, but the platforms allow the restaurant to expand its geographic reach, thereby increasing its sales. Even so, the platforms face competition from in-house delivery.

Mergers or other forms of shakeout in the food-delivery industry are inevitable. Mergers will raise important questions about relevant product and geographic markets, as well as competition in two-sided markets. While there is a real risk of harm to restaurants, drivers, and consumers, there is also a real possibility of welfare-enhancing efficiencies. These questions will never be addressed with an across-the-board merger moratorium.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Eric Fruits (Chief Economist, International Center for Law & Economics).]

In an earlier TOTM post, we argued that, as the economy emerges from the COVID-19 crisis, perhaps the best policy would be to allow properly motivated firms and households to balance for themselves the benefits, costs, and risks of transitioning to "business as usual."

Sometimes, however, well-meaning government policies disrupt the balance and realign motivations.

Our post contrasted firms that determined they could remain open by undertaking mitigation efforts with those that determined they could not safely remain open. One of the latter was Portland-based ChefStable, which operates more than 20 restaurants and bars. Kurt Huffman, the owner of ChefStable, shut down all the company's properties one day before the Oregon governor issued her "Stay home, stay safe" order.

An unintended consequence

In a recent Wall Street Journal op-ed, Mr. Huffman reports his business was able to shift to carryout and delivery, which ended up being more successful than anticipated. So successful, in fact, that he needed to bring back some of the laid-off employees. That's when he ran into one of the stimulus package's unintended (but not unanticipated) consequences: federal pandemic unemployment payments stacked on top of existing state-level benefits:

We started making the calls last week, just as our furloughed employees began receiving weekly Federal Pandemic Unemployment Compensation checks of $600 under the Cares Act. When we asked our employees to come back, almost all said, “No thanks.” If they return to work, they’ll have to take a pay cut.

***

But as of this week, that same employee receives $1,016 a week, or $376 more than he made as a full time employee. Why on earth would he want to come back to work?
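The quoted figures imply some simple back-of-the-envelope arithmetic, a sketch using only the numbers in the op-ed:

```python
# Weekly figures from the quoted op-ed.
total_benefit = 1016     # what the furloughed employee now receives
federal_premium = 600    # CARES Act Federal Pandemic Unemployment Compensation
premium_over_wage = 376  # amount above his full-time weekly pay

state_benefit = total_benefit - federal_premium     # $416 from state UI
full_time_wage = total_benefit - premium_over_wage  # $640 full-time pay
print(state_benefit, full_time_wage)
```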

Mr. Huffman’s not alone. NPR reports on a Kentucky coffee shop owner who faces the same difficulty keeping her employees at work:

“The very people we hired have now asked us to be laid off,” Marietta wrote in a blog post. “Not because they did not like their jobs or because they did not want to work, but because it would cost them literally hundreds of dollars per week to be employed.”

With the federal government now offering $600 a week on top of the state’s unemployment benefits, she recognized her former employees could make more money staying home than they did on the job.

Or, a fully intended consequence

The NPR piece indicates the Trump administration opted for the relatively straightforward (if not simplistic) unemployment payments as a way to get the money to unemployed workers as quickly as possible.

On the other hand, maybe the unemployment premium was not an unintended consequence. Perhaps there was some intention.

If the purpose of the stay-at-home orders is to “flatten the curve” and slow the spread of the coronavirus, then it can be argued the purpose of the stimulus spending is to mitigate some of the economic costs. 

If this is the case, it can also be argued that the unemployment premium paid by the federal government was designed to encourage people to stay at home and delay returning to work. In fact, it may be more effective than a bunch of loophole-laden employment regulations that would require an army of enforcers.

Mr. Huffman seems confident his employees will be ready to return to work in August, when the premium runs out. John Cochrane, however, is not so confident, writing on his blog, “Hint to Mr. Huffman: I would not bet too much that this deadline is not extended.”

With the administration’s state-by-state phased re-opening of the economy, the unemployment premium payments could be tweaked so only residents in states in Phase 1 or 2 would be eligible to receive the premium payments.

Of course, this tweak would unleash its own unintended consequences. In particular, it would encourage some states to slow-walk the re-opening of their economies as a way to extract more federal money for their residents. My wild guess: The slow-walking states will be the same states that have been most affected by the state and local tax deductibility provisions in the Tax Cuts and Jobs Act.

As with all government policies, the unemployment provisions in the COVID-19 stimulus raise the age-old question: If a policy generates unintended consequences that are not unanticipated, can those consequences really be unintended?

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Eric Fruits (Chief Economist, International Center for Law & Economics).]

The Wall Street Journal reports congressional leaders have agreed to impose limits on stock buybacks and dividend payments for companies receiving aid under the COVID-19 disaster relief package. 

Rather than a flat-out ban, the draft legislation forbids any company taking federal emergency loans or loan guarantees from repurchasing its own stock or paying shareholder dividends. The ban lasts for the term of the loans, plus one year after the aid has ended.

In theory, under a strict set of conditions, there is no difference between dividends and buybacks. Both approaches distribute cash from the corporation to shareholders. In practice, there are big differences between dividends and share repurchases.

  • Dividends are publicly visible actions and require authorization by the board of directors. Shareholders have expectations of regular, stable dividends. Buybacks generally lack such transparency. Firms have flexibility in choosing the timing and the amount of repurchases, subject to the details of their repurchase programs.
  • Cash dividends have no effect on the number of shares outstanding. In contrast, share repurchases reduce the number of shares outstanding, which increases earnings per share, all other things being equal (see the sketch after this list).
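A toy example, with hypothetical numbers, illustrates both points: the cash-distribution equivalence in theory and the EPS effect in practice.

```python
# Distribute $50M to shareholders via dividend vs. buyback.
earnings = 100e6  # annual earnings ($)
shares = 100e6    # shares outstanding
price = 20.0      # share price ($)
payout = 50e6     # cash distributed ($)

# Dividend: share count unchanged.
dividend_per_share = payout / shares    # $0.50
eps_dividend = earnings / shares        # $1.00

# Buyback: the cash retires shares at the market price.
shares_after = shares - payout / price  # 97.5M shares remain
eps_buyback = earnings / shares_after   # ~$1.026, all else equal

print(dividend_per_share, eps_dividend, eps_buyback)
```

Either way, $50 million leaves the corporation for shareholders; the buyback simply concentrates future earnings over fewer shares.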

Over the past 15 years, buybacks have outpaced dividend payouts. The figure above, from Seeking Alpha, shows that, while dividends have grown relatively smoothly over time, the aggregate value of buybacks is volatile and varies with the business cycle. In general, firms increase their repurchases relative to dividends when the economy booms and reduce them when the economy slows or shrinks.

This observation is consistent with a theory that buybacks are associated with periods of greater-than-expected financial performance. On the other hand, dividends are associated with expectations of long-term profitability. Dividends can decrease, but only when profits are expected to be “permanently” lower. 

The figure above shows that, during the Great Recession, dividends declined by about 10%, while share repurchases plummeted by approximately 85%. The flexibility afforded by buybacks provided stability in dividends.

There is some logic to dividend and buyback limits imposed by the COVID-19 disaster relief package. If a firm has enough cash on hand to pay dividends or repurchase shares, then it doesn’t need cash assistance from the federal government. Similarly, if a firm is so desperate for cash that it needs a federal loan or loan guarantee, then it doesn’t have enough cash to provide a payout to shareholders. Surely managers understand this and sophisticated shareholders should too.

Because of this understanding, the dividend and buyback limits may be a non-binding constraint. It’s not a “good look” for a corporation to accept millions of dollars in federal aid, only to turn around and hand out those taxpayer dollars to the company’s shareholders. That’s a sure way to get an unflattering profile in the New York Times and an invitation to attend an uncomfortable hearing at the U.S. Capitol. Even if a distressed firm could repurchase its shares, it’s unlikely that it would.

The logic behind the plus-one-year ban on dividends and buybacks is less clear. The relief package is meant to get the U.S. economy back to normal as fast as possible. That means if a firm repays its financial assistance early, the company’s shareholders should be rewarded with a cash payout rather than waiting a year for some arbitrary clock to run out.

The ban on dividends and buybacks may lead to an unintended consequence: increased merger and acquisition activity. Vox reports that an email to Goldman Sachs’ investment banking division says the firm expects to see an increase in hostile takeovers and shareholder activism as the prices of public companies fall. Cash-rich firms that are subject to the ban and cannot get that cash to their existing shareholders may be especially attractive takeover targets.

Desperate times call for desperate measures, and these are desperate times. Buyback backlash has been brewing for some time, and the COVID-19 relief package presents a perfect opportunity to ban buybacks. With the pressures businesses are under right now, it’s unlikely there’ll be many buybacks over the next few months. The concern should be over the unintended consequences facing firms once the economy recovers.

Goodhart and Bad Policy

Eric Fruits —  18 March 2020

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Eric Fruits (Chief Economist, International Center for Law & Economics).]

Wells Fargo faces billions of dollars in fines for creating millions of fraudulent savings, checking, credit, and insurance accounts for its customers without their consent. Last weekend, tens of thousands of travelers were likely exposed to coronavirus while waiting hours for screening at crowded airports. Consumers and businesses around the world pay higher energy prices as their governments impose costly programs to reduce carbon emissions.

These seemingly unrelated observations have something in common: They are all victims of some version of Goodhart’s Law.

Charles Goodhart was a central banker, so his original statement was a bit dense: “Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.”

The simple version of the law is: “When a measure becomes a target it ceases to be a good measure.”

Investor Charles Munger puts it more succinctly: “Show me the incentive and I’ll show you the outcome.”

The Wells Fargo scandal is a case study in Goodhart’s Law. It grew out of a corporate culture, pushed by CEO Dick Kovacevich, that emphasized “cross-selling” products to existing customers, as related in a Vanity Fair profile.

As Kovacevich told me in a 1998 profile of him I wrote for Fortune magazine, the key question facing banks was “How do you sell money?” His answer was that financial instruments—A.T.M. cards, checking accounts, credit cards, loans—were consumer products, no different from, say, screwdrivers sold by Home Depot. In Kovacevich’s lingo, bank branches were “stores,” and bankers were “salespeople” whose job was to “cross-sell,” which meant getting “customers”—not “clients,” but “customers”—to buy as many products as possible. “It was his business model,” says a former Norwest executive. “It was a religion. It very much was the culture.”

It was underpinned by the financial reality that customers who had, say, lines of credit and savings accounts with the bank were far more profitable than those who just had checking accounts. In 1997, prior to Norwest’s merger with Wells Fargo, Kovacevich launched an initiative called “Going for Gr-Eight,” which meant getting the customer to buy eight products from the bank. The reason for eight? “It rhymes with GREAT!” he said.

The concept makes sense: It’s easier to get sales from existing customers than to find new ones. Also, if revenues are rising, there’s less pressure to reduce costs.

Kovacevich came to Wells Fargo in the late 1990s by way of its merger with Norwest, where he was CEO. After the merger, he noticed that the Wells unit was dragging down the merged firm’s sales-per-customer numbers. So, Wells upped the pressure. 

One staffer reported that every morning, they’d have a conference call with their managers. Staff were supposed to explain how they’d make their sales goal for the day. If the goal wasn’t hit by the end of the day, staff had to explain why they missed it and how they planned to fix it. Bonuses were offered for hitting targets, and staffers were let go for missing them.

Wells Fargo had rules against “gaming” the system. Yes, it was called “gaming.” But the incentives were so strongly aligned in favor of gaming that the rules were ignored.

Wells Fargo’s internal investigation estimated that, between 2011 and 2015, its employees had opened more than 1.5 million deposit accounts and more than 565,000 credit-card accounts that may not have been authorized. Customers were charged fees on accounts they didn’t know they had, collection agencies pursued them over the unpaid fees, cars were repossessed, and homes went into foreclosure.

Goodhart’s Law hit Wells Fargo hard. Cross-selling was the bank’s target. Once management put pressure on employees to hit that target, cross-selling ceased to be a good measure; it corrupted the entire retail side of the business.

Last Friday, my son came home from his study abroad in Spain. He landed less than eight hours before the travel ban went into effect. He was lucky: he got out of the airport less than an hour after landing.

The next day was pandemonium. In addition to the travel ban, the U.S. imposed health screening on overseas arrivals. Over the weekend, travelers reported being forced into crowded terminals for up to eight hours to go through customs and receive screening. 

The screening process produced exactly the opposite of what health officials advise: avoiding close contact and large crowds. We still don’t know whether the screenings helped reduce the spread of the coronavirus or whether the forced crowding fostered it.

The government seemed to forget Goodhart’s Law. Public demand for enhanced screenings made screening the target. Screenings were implemented hastily, without any thought to the consequences of clustering potentially infected flyers with the uninfected. Someday, we may learn that the focus on screening came at the expense of slowing the spread.

More and more, we’re told that climate change presents an existential threat to our planet and that the main culprit is carbon emissions from economic activity. Toward that end, governments around the world are taking extraordinary measures to reduce those emissions.

In Oregon, the legislature has been trying for more than a decade to implement a cap-and-trade program to reduce carbon emissions in a state that accounts for less than one-tenth of one percent of global greenhouse gas emissions. Even if Oregon went to zero GHG emissions, the world would never know.

Legislators pushing cap-and-trade want the state to address climate change immediately. But, when the microphones are turned off, they admit their cap-and-trade program would do nothing to slow global climate change.

In yet another case of Goodhart’s Law, Oregon and other jurisdictions have made carbon emissions the target. As a consequence, if cap-and-trade were ever to become law in the state, businesses and consumers would pay hundreds or thousands of dollars a year more in energy prices, with zero effect on global temperatures. Those dollars could be better spent acknowledging the consequences of climate change and making investments to deal with those consequences.

The funny thing about Goodhart’s Law is that once you know about it, you see it everywhere. And it’s not just some quirky observation. It’s a failure that can have serious consequences for our health, our livelihoods, and our economy.

In antitrust lore, mavericks are magical creatures that bring order to a world on the verge of monopoly. Because they are so hard to find in the wild, some researchers have attempted to create them in the laboratory. While the alchemists couldn’t turn lead into gold, they did discover zinc. Similarly, although modern day researchers can’t turn students into mavericks, they have created a useful classroom exercise.

In a Cambridge University working paper, Donja Darai, Catherine Roux, and Frédéric Schneider develop a simple experiment to model merger activity in the face of price competition. Based on their observations, they conclude that (1) firms are more likely to make merger offers when prices are closer to marginal cost and (2) “maverick” firms, those that charge a lower price, are more likely to be on the receiving end of those offers. Based on these conclusions, they suggest “mergers may be used to eliminate mavericks from the market and thus substitute for failed attempts at collusion between firms.”

The experiment is a set of games broken up into “market” phases and “merger” phases (a code sketch of a single trading period follows the list below).

  • Each experiment has four subjects, with each subject representing a firm.
  • Each firm has marginal cost of zero and no capacity constraints.
  • Each experiment has nine phases: five “market” phases of 10 trading periods each and four “merger” phases.
  • During a trading period, firms simultaneously post their asking prices, ranging from 0 to 100 “currency units.” Subjects cannot communicate their prices to each other.
  • A computerized “buyer” purchases 300 units of the good at the lowest posted price. In the case of identical lowest prices, the sales are split equally among the firms with the lowest posted price.
  • At the end of the market phase, the firms enter a merger phase in which any firm can offer to merge with any other firm. Firms that receive a merger offer can accept or reject it. There are no price terms for the merger. Instead, the subject controlling the acquired firm receives an equal share of the acquiring firm’s profits in subsequent trading periods. Each firm can acquire only one other firm in each merger round.
  • The market-merger phases repeat, ending with a final market phase.
  • Subjects receive cash compensation related to the “profits” their firm earned over the course of the experiment.
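
To see these rules in action, here is a rough Python sketch of a single trading period, based on my reading of the setup above. It is my own illustration, not the authors’ code, and the sample prices are made up.

import random

UNITS = 300    # units the computerized buyer purchases each period
N_FIRMS = 4    # subjects per experiment

def market_period(prices):
    """Profit for each firm in one trading period.

    Marginal cost is zero, so profit is price times quantity sold.
    The buyer purchases UNITS at the lowest posted price; ties split
    the sales equally among the lowest-priced firms.
    """
    low = min(prices)
    winners = [i for i, p in enumerate(prices) if p == low]
    profits = [0.0] * len(prices)
    for i in winners:
        profits[i] = low * UNITS / len(winners)
    return profits

# Three firms post prices at random; the fourth is a stubborn low
# bidder, the paper's loose notion of a "maverick."
random.seed(1)
prices = [random.randint(20, 100) for _ in range(N_FIRMS - 1)] + [10]
print(prices, market_period(prices))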

Merger to monopoly is a dominant strategy: It is the clearest path to maximizing individual and joint profits, as the quick calculation below shows. In that way, it’s a pretty boring game. Bid low, merge toward monopoly, then bid 100 every turn after that. The only real “trick” is convincing the other players to merge.
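
A quick back-of-the-envelope calculation, again using my own illustrative reading of the payoff rules, makes the point:

UNITS = 300
MAX_PRICE = 100

# Undercutting: with four symmetric firms and zero marginal cost,
# each round of one-upmanship pushes the winning price toward zero,
# so even winning outright earns very little.
undercut_profit = 1 * UNITS               # sole winner at a price of 1

# Full merger: the monopolist posts the maximum price every period,
# and the four subjects share the profit equally.
monopoly_profit_each = MAX_PRICE * UNITS / 4

print(undercut_profit, monopoly_profit_each)   # 300 vs 7500.0

Even split four ways, one period of monopoly pricing is worth more than two dozen periods of winning the undercutting race.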

The authors attempt to make the paper more interesting by introducing the idea of the “maverick” bidder who bids low. They find that the lowest bidders are more likely to receive merger offers than the other subjects. They also find that these so-called mavericks are more reluctant to accept a merger offer. 

I noted in my earlier post that modeling the “maverick” seems to be a fool’s errand. If firms are assumed to face the same cost and demand conditions, why would any single firm play the role of the maverick? In the standard prisoner’s dilemma problem, every firm has the incentive to be the maverick. If everyone’s a maverick, then no one’s a maverick. On the other hand, if one firm has unique cost or demand conditions or is assumed to have some preference for “mavericky” behavior, then the maverick model is just an ad hoc model where the conclusions are baked into the assumptions.

Darai et al.’s experiment suffers from these same criticisms. They define the “maverick” as a low bidder who does not accept merger offers, but they don’t have a model for why mavericks behave the way they do. Some observations:

  • Another name for “low bidder” is “winner.” If the low bidders consistently win in the market phase, then they may believe they have some special skill or luck the other subjects don’t have. Why would a winner accept a merger bid from, and share his or her profits with, one or more “losers”?
  • Another name for “low bidder” could be “newbie.” The low bidder may be the subject who doesn’t understand that the dominant strategy is to merge to monopoly as fast as possible and charge the maximum price. The other players conclude the low bidder doesn’t know how to play the game. In other words, the merger might be viewed more as a hostile takeover to replace “bad” management. Because even bad managers won’t admit they’re bad, they make another bad decision and resist the merger.
  • About 80% of the time, the experiment ends with a monopoly, indicating that even the mavericks eventually merge. 

See what I just did? I created my own ad hoc theories of the maverick. In one theory, the maverick thinks he or she has some unique ability to pick the winning asking price. In the other, the maverick is making decisions counter to its own, and other players’, long-term self-interest.

Darai et al. have created a fun game. I played a truncated version of it with my undergraduate class earlier this week, and it generated a good discussion about pricing and coordination. But please don’t call it a model of the maverick.