
In antitrust lore, mavericks are magical creatures that bring order to a world on the verge of monopoly. Because they are so hard to find in the wild, some researchers have attempted to create them in the laboratory. While the alchemists couldn’t turn lead into gold, they did discover zinc. Similarly, although modern-day researchers can’t turn students into mavericks, they have created a useful classroom exercise.

In a Cambridge University working paper, Donja Darai, Catherine Roux, and Frédéric Schneider develop a simple experiment to model merger activity in the face of price competition. Based on their observations they conclude (1) firms are more likely to make merger offers when prices are closer to marginal cost and (2) “maverick firms” – firms who charge a lower price – are more likely to be on the receiving end of those merger offers. Based on these conclusions, they suggest “mergers may be used to eliminate mavericks from the market and thus substitute for failed attempts at collusion between firms.”

The experiment is a set of games broken up into “market” phases and “merger” phases.

  • Each experiment has four subjects, with each subject representing a firm.
  • Each firm has marginal cost of zero and no capacity constraints.
  • Each experiment has nine phases: five “market” phases of 10 trading periods each and four “merger” phases.
  • During a trading period, firms simultaneously post their asking prices, ranging from 0 to 100 “currency units.” Subjects cannot communicate their prices to each other.
  • A computerized “buyer” purchases 300 units of the good at the lowest posted price. In the case of identical lowest prices, the sales are split equally among the firms with the lowest posted price.
  • At the end of the market phase, the firms enter a merger phase in which any firm can offer to merge with any other firm. A firm receiving an offer can accept or reject it. There are no price terms for the merger. Instead, the subject controlling the acquired firm receives an equal share of the acquiring firm’s profits in subsequent trading periods. Each firm can acquire only one other firm in each merger round.
  • The market-merger phases repeat, ending with a final market phase.
  • Subjects receive cash compensation related to the “profits” their firm earned over the course of the experiment.
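The payoff rule in the market phase is simple enough to sketch in a few lines of Python. This is a hypothetical illustration of the rules listed above, not the authors’ code, and the function name is my own:

```python
def trading_period_profits(prices, demand=300):
    """Payoff rule for one trading period: a computerized buyer purchases
    `demand` units at the lowest posted price, split equally among all
    firms tied at that price. Marginal cost is zero, so revenue = profit."""
    low = min(prices)
    winners = [i for i, p in enumerate(prices) if p == low]
    share = demand / len(winners)
    return [low * share if i in winners else 0.0 for i in range(len(prices))]

# Four firms post prices; the two lowest bidders split the market.
print(trading_period_profits([40, 25, 25, 90]))  # [0.0, 3750.0, 3750.0, 0.0]
# A merged-to-monopoly "firm" posting the ceiling price earns the maximum.
print(trading_period_profits([100]))             # [30000.0]
```

Note how the payoffs make the dominant strategy visible: four firms posting 100 split 30,000 per period, while undercutting steals the whole market — until a merger to monopoly removes the temptation.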

Merger to monopoly is a dominant strategy: It is the clearest path to maximizing individual and joint profits. In that way it’s a pretty boring game. Bid low, merge toward monopoly, then bid 100 every turn after that. The only real “trick” is convincing the other players to merge.

The authors attempt to make the paper more interesting by introducing the idea of the “maverick” bidder who bids low. They find that the lowest bidders are more likely to receive merger offers than the other subjects. They also find that these so-called mavericks are more reluctant to accept a merger offer. 

I noted in my earlier post that modeling the “maverick” seems to be a fool’s errand. If firms are assumed to face the same cost and demand conditions, why would any single firm play the role of the maverick? In the standard prisoner’s dilemma problem, every firm has the incentive to be the maverick. If everyone’s a maverick, then no one’s a maverick. On the other hand, if one firm has unique cost or demand conditions or is assumed to have some preference for “mavericky” behavior, then the maverick model is just an ad hoc model where the conclusions are baked into the assumptions.

Darai, et al.’s experiment suffers from these same criticisms. They define the “maverick” as a low bidder who does not accept merger offers. But they don’t have a model for why mavericks behave the way they do. Some observations:

  • Another name for “low bidder” is “winner.” If the low bidders consistently win in the market phase, then they may believe that they have some special skill or luck that the other subjects don’t have. Why would a winner accept a merger bid from – and share his or her profits with – one or more “losers”?
  • Another name for “low bidder” could be “newbie.” The low bidder may be the subject who doesn’t understand that the dominant strategy is to merge to monopoly as fast as possible and charge the maximum price. The other players conclude the low bidder doesn’t know how to play the game. In other words, the merger might be viewed more as a hostile takeover to replace “bad” management. Because even bad managers won’t admit they’re bad, they make another bad decision and resist the merger.
  • About 80% of the time, the experiment ends with a monopoly, indicating that even the mavericks eventually merge. 

See what I just did? I created my own ad hoc theories of the maverick. In one theory, the maverick thinks he or she has some unique ability to pick the winning asking price. In the other, the maverick is making decisions counter to his or her own – and other players’ – long-term self-interest.

Darai, et al. have created a fun game. I played a truncated version of it with my undergraduate class earlier this week and it generated a good discussion about pricing and coordination. But, please don’t call it a model of the maverick.

On Monday evening, around 6:00 PM Eastern Standard Time, news leaked that the United States District Court for the Southern District of New York had decided to allow the T-Mobile/Sprint merger to go through, giving the companies a victory over a group of state attorneys general trying to block the deal.

Thomas Philippon, a professor of finance at NYU, used this opportunity to conduct a quick-and-dirty event study on Twitter:

Short thread on T-Mobile/Sprint merger. There were 2 theories:

(A) It’s a 4-to-3 merger that will lower competition and increase markups.

(B) The new merged entity will be able to take on the industry leaders AT&T and Verizon.

(A) and (B) make clear predictions. (A) predicts the merger is good news for AT&T and Verizon’s shareholders. (B) predicts the merger is bad news for AT&T and Verizon’s shareholders. The news leaked at 6pm that the judge would approve the merger. Sprint went up 60% as expected. Let’s test the theories. 

Here is Verizon’s after trading price: Up 2.5%.

Here is ATT after hours: Up 2%.

Conclusion 1: Theory B is bogus, and the merger is a transfer of at least 2%*$280B (AT&T) + 2.5%*$240B (Verizon) = $11.6 billion from the pockets of consumers to the pockets of shareholders. 

Conclusion 2: I and others have argued for a long time that theory B was bogus; this was anticipated. But lobbying is very effective indeed… 

Conclusion 3: US consumers already pay two or three times more than those of other rich countries for their cell phone plans. The gap will only increase.

And just a reminder: these firms invest 0% of the excess profits. 

Philippon published his thread about 40 minutes prior to markets opening for regular trading on Tuesday morning. The Court’s official decision was published shortly before markets opened as well. By the time regular trading began at 9:30 AM, Verizon had completely reversed its overnight increase and opened down from the previous day’s close. While AT&T opened up slightly, it too had given back most of its initial gains. By 11:00 AM, AT&T was also in the red. When markets closed at 4:00 PM on Tuesday, Verizon was down more than 2.5 percent and AT&T was down just under 0.5 percent.

Does this mean that, in fact, theory A is the “bogus” one? Was the T-Mobile/Sprint merger decision actually a transfer of “$7.4 billion from the pockets of shareholders to the pockets of consumers,” as I suggested in my own tongue-in-cheek thread later that day? In this post, I will look at the factors that go into conducting a proper event study.  
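For what it’s worth, both headline numbers come from the same back-of-envelope formula: price change times market capitalization, summed over the rivals. The ~$280B (AT&T) and ~$240B (Verizon) market caps are the figures used in the thread; the function name is my own:

```python
def implied_transfer(moves):
    """Back-of-envelope 'transfer' implied by an event study: sum of
    (percent price change x market cap in $B) across rival firms."""
    return sum(pct * cap for pct, cap in moves)

# Philippon's after-hours figures: AT&T +2% on ~$280B, Verizon +2.5% on ~$240B.
print(round(implied_transfer([(0.02, 280), (0.025, 240)]), 1))     # 11.6
# Tuesday's close: AT&T -0.5%, Verizon -2.5% -- the tongue-in-cheek reversal.
print(round(implied_transfer([(-0.005, 280), (-0.025, 240)]), 1))  # -7.4
```

The point of the juxtaposition is that the same naive method yields an $11.6 billion transfer in one direction or a $7.4 billion transfer in the other, depending solely on which window you choose.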

What’s the appropriate window for a merger event study?

In a response to my thread, Philippon said, “I would argue that an event study is best done at the time of the event, not 16 hours after. Leak of merger approval 6 pm Monday. AT&T up 2 percent immediately. AT&T still up at open Tuesday. Then comes down at 10am.” I don’t disagree that “an event study is best done at the time of the event.” In this case, however, we need to consider two important details: When was the “event” exactly, and what were the conditions in the financial markets at that time?

This event did not begin and end with the leak on Monday night. The official announcement came Tuesday morning when the full text of the decision was published. This additional information answered a few questions for market participants: 

  • Were the initial news reports true?
  • Based on the text of the decision, what is the likelihood it gets reversed on appeal?
    • Wall Street: “Not all analysts are convinced this story is over just yet. In a note released immediately after the judge’s verdict, Nomura analyst Jeff Kvaal warned that ‘we expect the state AGs to appeal.’ RBC Capital analyst Jonathan Atkin noted that such an appeal, if filed, could delay closing of the merger by ‘an additional 4-5’ months — potentially delaying closure until September 2020.”
  • Did the Court impose any further remedies or conditions on the merger?

As stock traders digested all the information from the decision, Verizon and AT&T quickly went negative. There is much debate in the academic literature about the appropriate window for event studies on mergers. But the range in question is always one of days or weeks — not a couple hours in after hours markets. A recent paper using the event study methodology analyzed roughly 5,000 mergers and found abnormal returns of about positive one percent for competitors in the relevant market following a merger announcement. Notably for our purposes, this small abnormal return builds in the first few days following a merger announcement and persists for up to 30 days, as shown in the chart below:

As with the other studies the paper cites in its literature review, this particular research design included a window of multiple weeks both before and after the event occurred. When analyzing the T-Mobile/Sprint merger decision, we should similarly expand the window beyond just a few hours of after hours trading.

How liquid is the after hours market?

More important than the length of the window, however, is the relative liquidity of the market during that time. The after hours market is much thinner than the regular hours market and may not reflect all available information. For some rough numbers, let’s look at data from NASDAQ. For the last five after hours trading sessions, total volume was between 80 and 100 million shares. Let’s call it 90 million on average. By contrast, the total volume for the last five regular trading hours sessions was between 2 and 2.5 billion shares. Let’s call it 2.25 billion on average. So, the regular trading hours have roughly 25 times as much liquidity as the after hours market.

We could also look at relative liquidity for a single company as opposed to the total market. On Wednesday during regular hours (data is only available for the most recent day), 22.49 million shares of Verizon stock were traded. In after hours trading that same day, fewer than a million shares traded hands. You could change some assumptions and account for other differences in the after market and the regular market when analyzing the data above. But the conclusion remains the same: the regular market is at least an order of magnitude more liquid than the after hours market. This is incredibly important to keep in mind as we compare the after hours price changes (as reported by Philippon) to the price changes during regular trading hours.
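The liquidity comparison is just a ratio of average volumes, but it is worth making explicit. The figures below are the rough NASDAQ numbers quoted above:

```python
# Market-wide comparison: average volumes over the last five sessions.
after_hours_total = 90e6    # avg after-hours volume, ~80-100M shares
regular_total = 2.25e9      # avg regular-session volume, ~2-2.5B shares
print(regular_total / after_hours_total)  # 25.0

# Single-name comparison: Verizon on Wednesday.
vz_regular = 22.49e6        # shares traded during regular hours
vz_after = 1e6              # rough upper bound on after-hours volume
print(vz_regular / vz_after)  # ~22.5 -- the same order of magnitude
```

However you slice it, the regular session carries at least an order of magnitude more volume, which is why an after-hours price print is a weak measure of the market’s considered reaction.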

What are Wall Street analysts saying about the decision?

To understand the fundamentals behind these stock moves, it’s useful to see what Wall Street analysts are saying about the merger decision. Prior to the ruling, analysts were already worried about Verizon’s ability to compete with the combined T-Mobile/Sprint entity in the short- and medium-term:

Last week analysts at LightShed Partners wrote that if Verizon wins most of the first available tranche of C-band spectrum, it could deploy 60 MHz in 2022 and see capacity and speed benefits starting in 2023.

“With that timeline, C-Band still does not answer the questions of what spectrum Verizon will be using for the next three years,” wrote LightShed’s Walter Piecyk and Joe Galone at the time.

Following the news of the decision, analysts were clear in delivering their own verdict on how the decision would affect Verizon:

“Verizon looks to us to be a net loser here,” wrote the MoffettNathanson team led by Craig Moffett.

…  

“Approval of the T-Mobile/Sprint deal takes not just one but two spectrum options off the table,” wrote Moffett. “Sprint is now not a seller of 2.5 GHz spectrum, and Dish is not a seller of AWS-4. More than ever, Verizon must now bet on C-band.”

LightShed also pegged Tuesday’s merger ruling as a negative for Verizon.

“It’s not great news for Verizon, given that it removes Sprint and Dish’s spectrum as an alternative, created a new competitor in Dish, and has empowered T-Mobile with the tools to deliver a superior network experience to consumers,” wrote LightShed.

In a note following news reports that the court would side with T-Mobile and Sprint, New Street analyst Johnathan Chaplin wrote, “T-Mobile will be far more disruptive once they have access to Sprint’s spectrum than they have been until now.”

However, analysts were more sanguine about AT&T’s prospects:

AT&T, though, has been busy deploying additional spectrum, both as part of its FirstNet build and to support 5G rollouts. This has seen AT&T increase its amount of deployed spectrum by almost 60%, according to Moffett, which takes “some of the pressure off to respond to New T-Mobile.”

Still, while AT&T may be in a better position on the spectrum front compared to Verizon, it faces the “same competitive dynamics,” Moffett wrote. “For AT&T, the deal is probably a net neutral.”

The quantitative evidence from the stock market seems to agree with the qualitative analysis from the Wall Street research firms. Let’s look at the five-day window of trading from Monday morning to Friday (today). Unsurprisingly, Sprint, T-Mobile, and Dish have reacted very favorably to the news:

Consistent with the Wall Street analysis, Verizon stock remains down 2.5 percent over a five-day window while AT&T has been flat over the same period:

How do you separate beta from alpha in an event study?

Philippon argued that after market trading may be more efficient because it is dominated by hedge funds and includes less “noise trading.” In my opinion, the liquidity effect likely outweighs this factor. Also, it’s unclear why we should assume “smart money” is setting the price in the after hours market but not during regular trading when hedge funds are still active. Sophisticated professional traders often make easy profits by picking off panicked retail investors who only read the headlines. When you see a wild swing in the markets that moderates over time, the wild swing is probably the noise and the moderation is probably the signal.

And, as Karl Smith noted, since the aftermarket is thin, price moves in individual stocks might reflect changes in the broader stock market (“beta”) more than changes due to new company-specific information (“alpha”). Here are the last five days for e-mini S&P 500 futures, which track the broader market and are traded after hours:

The market trended up on Monday night and was flat on Tuesday. This slightly positive macro environment means we would need to adjust the returns downward for AT&T and Verizon. Of course, this is counter to Philippon’s conjecture that the merger decision would increase their stock prices. But to be clear, these changes are so minuscule in percentage terms, this adjustment wouldn’t make much of a difference in this case.
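The beta adjustment described above can be written down directly. A minimal market-model sketch, with hypothetical numbers for the beta and the overnight moves (the function name and figures are mine, for illustration only):

```python
def abnormal_return(stock_ret, market_ret, beta):
    """Market-model abnormal return: the company-specific ('alpha') piece
    of a price move, left over after stripping out the stock's expected
    co-movement with the broad market ('beta' piece)."""
    return stock_ret - beta * market_ret

# Hypothetical: a +2% overnight move, a 0.6 beta, and S&P futures up 0.5%
# leave a +1.7% move attributable to the company-specific news.
print(round(abnormal_return(0.02, 0.005, 0.6), 4))  # 0.017
```

In a thin overnight market, the `market_ret` term can account for a meaningful slice of a small move, which is why the raw after-hours change overstates the news-specific effect when futures are drifting upward.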

Lastly, let’s see what we can learn from a similar historical episode in the stock market.

The parallel to the 2016 presidential election

The type of reversal we saw in AT&T and Verizon is not unprecedented. Some commenters said the pattern reminded them of the market reaction to Trump’s election in 2016:

Much like the T-Mobile/Sprint merger news, the “event” in 2016 was not a single moment in time. It began around 9 PM Tuesday night when Trump started to overperform in early state results. Over the course of the next three hours, S&P 500 futures contracts fell about 5 percent — an enormous drop in such a short period of time. If Philippon had tried to estimate the “Trump effect” in the same manner he did the T-Mobile/Sprint case, he would have concluded that a Trump presidency would reduce aggregate future profits by about 5 percent relative to a Clinton presidency.

But, as you can see in the chart above, if we widen the aperture of the event study to include the hours past midnight, the story flips. Markets started to bounce back even before Trump took the stage to make his victory speech. The themes of his speech were widely regarded as reassuring for markets, which further pared losses from earlier in the night. When regular trading hours resumed on Wednesday, the markets decided a Trump presidency would be very good for certain sectors of the economy, particularly finance, energy, biotech, and private prisons. By the end of the day, the stock market finished up about a percentage point from where it closed prior to the election — near all time highs.

Maybe this is more noise than signal?

As a few others pointed out, these relatively small moves in AT&T and Verizon (less than 3 percent in either direction) may just be noise. That’s certainly possible given the magnitude of the changes. Contra Philippon, I think the methodology in question is too weak to rule out the pro-competitive theory of the case, i.e., that the new merged entity would be a stronger competitor to take on industry leaders AT&T and Verizon. We need much more robust and varied evidence before we can call anything “bogus.” Of course, that means this event study is not sufficient to prove the pro-competitive theory of the case, either.

Olivier Blanchard, a former chief economist of the IMF, shared Philippon’s thread on Twitter and added this comment above: “The beauty of the argument. Simple hypothesis, simple test, clear conclusion.”

If only things were so simple.

[TOTM: The following is part of a symposium by TOTM guests and authors on the 2020 Vertical Merger Guidelines. The entire series of posts is available here.]

This post is authored by Jonathan E. Nuechterlein (Partner, Sidley Austin LLP; former General Counsel, FTC; former Deputy General Counsel, FCC).

[Nuechterlein: I represented AT&T in United States v. AT&T, Inc. (“AT&T/Time Warner”), and this essay is based in part on comments I prepared on AT&T’s behalf for the FTC’s recent public hearings on Competition and Consumer Protection in the 21st Century. All views expressed here are my own.]

The draft Vertical Merger Guidelines (“Draft Guidelines”) might well leave ordinary readers with the misimpression that U.S. antitrust authorities have suddenly come to view vertical integration with a jaundiced eye. Such readers might infer from the draft that vertical mergers are a minefield of potential competitive harms; that only sometimes do they “have the potential to create cognizable efficiencies”; and that such efficiencies, even when they exist, often are not “of a character and magnitude” to keep the merger from becoming “anticompetitive.” (Draft Guidelines § 8, at 9). But that impression would be impossible to square with the past forty years of U.S. enforcement policy and with exhaustive empirical work confirming the largely beneficial effects of vertical integration. 

The Draft Guidelines should reflect those realities and thus should incorporate genuine limiting principles — rooted in concerns about two-level market power — to cabin their highly speculative theories of harm. Without such limiting principles, the Guidelines will remain more a theoretical exercise in abstract issue-spotting than what they purport to be: a source of genuine guidance for the public.

1. The presumptive benefits of vertical integration

Although the U.S. antitrust agencies (the FTC and DOJ) occasionally attach conditions to their approval of vertical mergers, they have litigated only one vertical merger case to judgment over the past forty years: AT&T/Time Warner. The reason for that paucity of cases is neither a lack of prosecutorial zeal nor a failure to understand “raising rivals’ costs” theories of harm. Instead, in the words of the FTC’s outgoing Bureau of Competition chief, Bruce Hoffman, the reason is the “broad consensus in competition policy and economic theory that the majority of vertical mergers are beneficial because they reduce costs and increase the intensity of interbrand competition.” 

Two exhaustive papers confirm that conclusion with hard empirical facts. The first was published in the International Journal of Industrial Organization in 2005 by FTC economists James Cooper, Luke Froeb, Dan O’Brien, and Michael Vita, who surveyed “multiple studies of vertical mergers and restraints” and “found only one example where vertical integration harmed consumers, and multiple examples where vertical integration unambiguously benefited consumers.” The second paper is a 2007 analysis in the Journal of Economic Literature co-authored by University of Michigan Professor Francine LaFontaine (who served from 2014 to 2015 as Director of the FTC’s Bureau of Economics) and Professor Margaret Slade of the University of British Columbia. Professors LaFontaine and Slade “did not have a particular conclusion in mind when [they] began to collect the evidence,” “tried to be fair in presenting the empirical regularities,” and were “therefore somewhat surprised at what the weight of the evidence is telling us.” They found that:

[U]nder most circumstances, profit-maximizing vertical-integration decisions are efficient, not just from the firms’ but also from the consumers’ points of view. Although there are isolated studies that contradict this claim, the vast majority support it. (p. 680) 

Vertical mergers have this procompetitive track record for two basic reasons. First, by definition, they do not eliminate a competitor or increase market concentration in any market, and they pose fewer competitive concerns than horizontal mergers for that reason alone. Second, as Bruce Hoffman noted, “while efficiencies are often important in horizontal mergers, they are much more intrinsic to a vertical transaction” and “come with a more built-in likelihood of improving competition than horizontal mergers.”

It is widely accepted that vertical mergers often impose downward pricing pressure by eliminating double margins. Beyond that, as the Draft Guidelines observe (at § 8), vertical mergers can also play an indispensable role in “eliminat[ing] contracting frictions,” “streamlin[ing] production, inventory management, or distribution,” and “creat[ing] innovative products in ways that would have been hard to achieve through arm’s length contracts.”

2. Harm to competitors, harm to competition, and the need for limiting principles

Vertical mergers do often disadvantage rivals of the merged firm. For example, a distributor might merge with one of its key suppliers, achieve efficiencies through the combination, and pass some of the savings through to consumers in the form of lower prices. The firm’s distribution rivals will lose profits if they match the price cut and will lose market share to the merged firm if they do not. But that outcome obviously counts in favor of supporting, not opposing, the merger because it makes consumers better off and because “[t]he antitrust laws… were enacted for the protection of competition, not competitors.” (Brunswick v. Pueblo Bowl-O-Mat).

This distinction between harm to competition and harm to competitors is fundamental to U.S. antitrust law. Yet key passages in the Draft Guidelines seem to blur this distinction.

For example, one passage suggests that a vertical merger will be suspect if the merged firm might “chang[e] the terms of … rivals’ access” to an input, “one or more rivals would [then] lose sales,” and “some portion of those lost sales would be diverted to the merged firm.” Draft Guidelines § 5.a, at 4-5. Of course, the Guidelines’ drafters would never concede that they wish to vindicate the interests of competitors qua competitors. They would say that incremental changes in input prices, even if they do not structurally alter the competitive landscape, might nonetheless result in slightly higher overall consumer prices. And they would insist that speculation about such slight price effects should be sufficient to block a vertical merger. 

That was the precise theory of harm that DOJ pursued in AT&T/Time Warner, which involved a purely vertical merger between a video programmer (Time Warner) and a pay-TV distributor (AT&T/DirecTV). DOJ ultimately conceded that Time Warner was unlikely to withhold programming from (“foreclose”) AT&T’s pay-TV rivals. Instead, using a complex economic model, DOJ tried to show that the merger would increase Time Warner’s bargaining power and induce AT&T’s pay-TV rivals to pay somewhat higher rates for Time Warner programming, some portion of which the rivals would theoretically pass through to their own retail customers. At the same time, DOJ conceded that post-merger efficiencies would cause AT&T to lower its retail rates compared to the but-for world without the merger. DOJ nonetheless asserted that the aggregate effect of the pay-TV rivals’ price increases would exceed the aggregate effect of AT&T’s own price decrease. Without deciding whether such an effect would be sufficient to block the merger — a disputed legal issue — the courts ruled for the merging parties because DOJ could not substantiate its factual prediction that the merger would lead to programming price increases in the first place. 

It is unclear why DOJ picked this, of all cases, as its vehicle for litigating its first vertical merger case in decades. In an archetypal raising-rivals’-costs case, familiar from exclusive dealing law, the defendant forecloses its rivals by depriving them of a critical input or distribution channel and so marginalizes them in the process that it can profitably raise its own retail prices (see, e.g., McWane; Microsoft). AT&T/Time Warner could hardly have been further afield from that archetypal case. Again, DOJ conceded both that the merged firm would not foreclose rivals at all and that the merger would induce the firm to lower its retail prices below what it would charge if the merger were blocked. The draft Guidelines appear to double down on this odd strategy and portend more cases predicated on the same attenuated concerns about mere “chang[es in] the terms of … rivals’ access” to inputs, unaccompanied by any alleged structural changes in the competitive landscape.

Bringing such cases would be a mistake, both tactically and doctrinally.

“Changes in the terms of inputs” are a constant fact of life in nearly every market, with or without mergers, and have almost never aroused antitrust scrutiny. For example, whenever a firm enters into a long-term preferred-provider agreement with a new business partner in lieu of merging with it, the firm will, by definition, deal on less advantageous terms with the partner’s rivals than it otherwise would. That outcome is virtually never viewed as problematic, let alone unlawful, when it is accomplished through such long-term contracts. The government does not hire a team of economists to pore over documents, interview witnesses, and run abstruse models on whether the preferred-provider agreement can be projected, on balance, to produce incrementally higher downstream prices. There is no obvious reason why the government should treat such preferred provider arrangements differently if they arise through a vertical merger rather than a vertical contract — particularly given the draft Guidelines’ own acknowledgement that vertical mergers produce pro-consumer efficiencies that would be “hard to achieve through arm’s length contracts.” (Draft Guidelines § 8, at 9).

3. Towards a more useful safe harbor

Quoting then-Judge Breyer, the Supreme Court once noted that “antitrust rules ‘must be clear enough for lawyers to explain them to clients.’” That observation rings doubly true when applied to a document by enforcement officials purporting to “guide” business decisions. Firms contemplating a vertical merger need more than assurance that their merger will be cleared two years hence if their economists vanquish the government’s economists in litigation about the fine details of Nash bargaining theory. Instead, firms need true limiting principles, which identify the circumstances where any theory of harm would be so attenuated that litigating to block the merger is not worth the candle, particularly given the empirically validated presumption that most vertical mergers are pro-consumer.

The Agencies cannot meet the need for such limiting principles with the proposed “safe harbor” as it is currently phrased in the draft Guidelines: 

“The Agencies are unlikely to challenge a vertical merger where the parties to the merger have a share in the relevant market of less than 20 percent, and the related product is used in less than 20 percent of the relevant market.” (Draft Guidelines § 3, at 3).

This anodyne assurance, with its arbitrarily low 20 percent thresholds phrased in the conjunctive, seems calculated more to preserve the agencies’ discretion than to provide genuine direction to industry. 

Nonetheless, the draft safe harbor does at least point in the right direction because it reflects a basic insight about two-level market power: vertical mergers are unlikely to create competitive concerns unless the merged firm will have, or could readily obtain, market power in both upstream and downstream markets. (See, e.g., Auburn News v. Providence Journal (“Where substantial market power is absent at any one product or distribution level, vertical integration will not have an anticompetitive effect.”)) This point parallels tying doctrine, which, like vertical merger analysis, addresses how vertical arrangements can affect competition across adjacent markets. As Justice O’Connor noted in Jefferson Parish, tying arrangements threaten competition 

primarily in the rare cases where power in the market for the tying product is used to create additional market power in the market for the tied product.… But such extension of market power is unlikely, or poses no threat of economic harm, unless…, [among other conditions, the seller has] power in the tying-product market… [and there is] a substantial threat that the tying seller will acquire market power in the tied-product market.

As this discussion suggests, the “20 percent” safe harbor in the draft Guidelines misses the mark in three respects.

First, as a proxy for the absence of market power, 20 percent is too low: courts have generally refused to infer market power when the seller’s market share was below 30% and sometimes require higher shares. Of course, market share can be a highly overinclusive measure of market power, in that many firms with greater than a 30% share will lack market power. But it is nonetheless appropriate to use market share as a screen for further analysis.

Second, the draft’s safe harbor appears illogically in the conjunctive, applying only “where the parties to the merger have a share in the relevant market of less than 20 percent, and the related product is used in less than 20 percent of the relevant market.” That “and” should be an “or” because, again, vertical arrangements can be problematic only if a firm can use existing market power in a “related products” market to create or increase market power in the “relevant market.” 

Third, the phrase “the related product is used in less than 20 percent of the relevant market” is far too ambiguous to serve a useful role. For example, the “related product” sold by a merging upstream firm could be “used by” 100 percent of downstream buyers even though the firm’s sales account for only one percent of downstream purchases of that product if the downstream buyers multi-home — i.e., source their goods from many different sellers of substitutable products. The relevant proxy for “related product” market power is thus not how many customers “use” the merging firm’s product, but what percentage of overall sales of that product (including reasonable substitutes) it makes. 

Of course, this observation suggests that, when push comes to shove in litigation, the government must usually define two markets: not only (1) a “relevant market” in which competitive harm is alleged to occur, but also (2) an adjacent “related product” market in which the merged firm is alleged to have market power. Requiring such dual market definition is entirely appropriate. Ultimately, any raising-rivals’-costs theory relies on a showing that a vertically integrated firm has some degree of market power in a “related products” market when dealing with its rivals in an adjacent “relevant market.” And market definition is normally an inextricable component of a litigated market power analysis.
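The multi-homing arithmetic behind the “used by” ambiguity can be sketched in a few lines. All figures here are invented for illustration; the point is only that the two proxies can diverge dramatically:

```python
# Hypothetical numbers: every downstream buyer "uses" the merging firm's
# related product, yet the firm accounts for a tiny share of overall sales
# of that product, because buyers multi-home across many sellers.

buyers = 100                 # downstream buyers
units_per_buyer = 100        # units of the product each buyer sources in total
units_from_merging_firm = 1  # units each buyer sources from the merging firm

usage_rate = buyers / buyers  # share of buyers "using" the product
sales_share = (buyers * units_from_merging_firm) / (buyers * units_per_buyer)

print(f"Buyers using the merging firm's product: {usage_rate:.0%}")       # 100%
print(f"Merging firm's share of overall product sales: {sales_share:.0%}")  # 1%
```

On the draft’s “used in” language, this firm looks ubiquitous; on the sales-share measure, which is the relevant market-power proxy, it is trivial.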

If these three changes were made, the safe harbor would read: 

The Agencies are unlikely to challenge a vertical merger where the parties to the merger have a share in the relevant market of less than 30 percent, or the related product sold by one of the parties accounts for less than 30 percent of the overall sales of that related product, including reasonable substitutes.

Like all safe harbors, this one would be underinclusive (in that many mergers outside of the safe harbor are unobjectionable) and may occasionally be overinclusive. But this substitute language would be more useful as a genuine safe harbor because it would impose true limiting principles. And it would more accurately reflect the ways in which market power considerations should inform vertical analysis—whether of contractual arrangements or mergers.
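The substitute language operates as a simple disjunctive screen, which can be sketched as follows. The function name, interface, and default threshold are illustrative only, not drawn from the draft Guidelines:

```python
def within_safe_harbor(relevant_market_share: float,
                       related_product_share: float,
                       threshold: float = 0.30) -> bool:
    """Screen a vertical merger under the substitute safe-harbor language:
    the merger falls inside the harbor if EITHER the parties' share of the
    relevant market OR the related product's share of overall sales of that
    product (including reasonable substitutes) is below the threshold."""
    return relevant_market_share < threshold or related_product_share < threshold

print(within_safe_harbor(0.25, 0.50))  # True: relevant-market share is under 30%
print(within_safe_harbor(0.50, 0.10))  # True: related-product share is under 30%
print(within_safe_harbor(0.35, 0.45))  # False: both shares are at or above 30%
```

The disjunctive test reflects the two-level logic above: a firm lacking market power at either level cannot leverage power across the vertical relationship.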

The 2020 Draft Joint Vertical Merger Guidelines:

What’s in, what’s out — and do we need them anyway?

February 6 & 7, 2020

Welcome! We’re delighted to kick off our two-day blog symposium on the recently released Draft Joint Vertical Merger Guidelines from the DOJ Antitrust Division and the Federal Trade Commission. 

If adopted by the agencies, the guidelines would mark the first time since 1984 that U.S. federal antitrust enforcers have provided official, public guidance on their approach to the increasingly important issue of vertical merger enforcement. 

As previously noted, the release of the draft guidelines was controversial from the outset: The FTC vote to issue the draft was mixed, with a dissent from Commissioner Slaughter, an abstention from Commissioner Chopra, and a concurring statement from Commissioner Wilson.

As the antitrust community gears up to debate the draft guidelines, we have assembled an outstanding group of antitrust experts to weigh in with their initial thoughts on the guidelines here at Truth on the Market. We hope this symposium will provide important insights and stand as a useful resource for the ongoing discussion.

The scholars and practitioners who will participate in the symposium are:

  • Timothy J. Brennan (Professor, Public Policy and Economics, University of Maryland; former Chief Economist, FCC; former economist, DOJ Antitrust Division)
  • Steven Cernak (Partner, Bona Law PC; former antitrust counsel, GM)
  • Eric Fruits (Chief Economist, ICLE; Professor of Economics, Portland State University)
  • Herbert Hovenkamp (James G. Dinan University Professor of Law, University of Pennsylvania)
  • Jonathan M. Jacobson (Partner, Wilson Sonsini Goodrich & Rosati) and Kenneth Edelson (Associate, Wilson Sonsini Goodrich & Rosati)
  • William J. Kolasky (Partner, Hughes Hubbard & Reed; former Deputy Assistant Attorney General, DOJ Antitrust Division) and Philip A. Giordano (Partner, Hughes Hubbard & Reed LLP)
  • Geoffrey A. Manne (President & Founder, ICLE; Distinguished Fellow, Northwestern University Center on Law, Business, and Economics) and Kristian Stout (Associate Director, ICLE)
  • Jonathan E. Nuechterlein (Partner, Sidley Austin LLP; former General Counsel, FTC; former Deputy General Counsel, FCC)
  • Sharis A. Pozen (Partner, Clifford Chance; former Vice President of Global Competition Law and Policy, GE; former Acting Assistant Attorney General, DOJ Antitrust Division), Timothy Cornell (Partner, Clifford Chance), Brian Concklin (Counsel, Clifford Chance), and Michael Van Arsdall (Counsel, Clifford Chance)
  • Jan Rybnicek (Counsel, Freshfields Bruckhaus Deringer; former attorney adviser to Commissioner Joshua D. Wright, FTC)
  • Steven C. Salop (tent.) (Professor of Economics and Law, Georgetown University; former Associate Director, FTC Bureau of Economics)
  • Scott A. Sher (Partner, Wilson Sonsini Goodrich & Rosati) and Matthew McDonald (Associate, Wilson Sonsini Goodrich & Rosati)
  • Margaret Slade (Professor Emeritus, Vancouver School of Economics, University of British Columbia)
  • Gregory Werden (former Senior Economic Counsel, DOJ Antitrust Division) and Luke M. Froeb (William C. Oehmig Chair in Free Enterprise and Entrepreneurship, Vanderbilt University; former Chief Economist, DOJ Antitrust Division; former Chief Economist, FTC)
  • Lawrence J. White (Robert Kavesh Professor of Economics, New York University; former Chief Economist, DOJ Antitrust Division)
  • Joshua D. Wright (University Professor of Law, George Mason University; former Commissioner, FTC), Douglas H. Ginsburg (Senior Circuit Judge, US Court of Appeals for the DC Circuit; Professor of Law, George Mason University; former Assistant Attorney General, DOJ Antitrust Division), Tad Lipsky (Assistant Professor of Law, George Mason University; former Acting Director, FTC Bureau of Competition; former chief antitrust counsel, Coca-Cola; former Deputy Assistant Attorney General, DOJ Antitrust Division), and John M. Yun (Associate Professor of Law, George Mason University; former Acting Deputy Assistant Director, FTC Bureau of Economics)

The first of the participants’ initial posts will appear momentarily, with additional posts appearing throughout the day today and tomorrow. We hope to generate a lively discussion, and we expect some of the participants to offer follow-up posts and/or comments on their fellow participants’ posts, so please be sure to check back throughout the day and to read the comments. We hope our readers will join us in the comments as well.

Once again, welcome!

Truth on the Market is pleased to announce its next blog symposium:

The 2020 Draft Joint Vertical Merger Guidelines: What’s in, what’s out — and do we need them anyway?

February 6 & 7, 2020

Symposium background

On January 10, 2020, the DOJ Antitrust Division and the Federal Trade Commission released Draft Joint Vertical Merger Guidelines for public comment. If adopted by the agencies, the guidelines would mark the first time since 1984 that U.S. federal antitrust enforcers have provided official, public guidance on their approach to the increasingly important issue of vertical merger enforcement: 

“Challenging anticompetitive vertical mergers is essential to vigorous enforcement. The agencies’ vertical merger policy has evolved substantially since the issuance of the 1984 Non-Horizontal Merger Guidelines, and our guidelines should reflect the current enforcement approach. Greater transparency about the complex issues surrounding vertical mergers will benefit the business community, practitioners, and the courts,” said FTC Chairman Joseph J. Simons.

As evidenced by FTC Commissioner Slaughter’s dissent and FTC Commissioner Chopra’s abstention from the FTC’s vote to issue the draft guidelines, the topic is a contentious one. Similarly, as FTC Commissioner Wilson noted in her concurring statement, the recent FTC hearing on vertical mergers demonstrated that there is a vigorous dispute over what new guidelines should look like (or even if the 1984 Non-Horizontal Guidelines should be updated at all).

The agencies have announced two upcoming workshops to discuss the draft guidelines and have extended the comment period on the draft until February 26.

In advance of the workshops and the imminent discussions over the draft guidelines, we have asked a number of antitrust experts to weigh in here at Truth on the Market: to preview the coming debate by exploring the economic underpinnings of the draft guidelines and their likely role in the future of merger enforcement at the agencies, as well as what is in the guidelines and — perhaps more important — what is left out.  

Beginning the morning of Thursday, February 6, and continuing during business hours through Friday, February 7, Truth on the Market (TOTM) and the International Center for Law & Economics (ICLE) will host a blog symposium on the draft guidelines. 

Symposium participants

As in the past (see examples of previous TOTM blog symposia here), we’ve lined up an outstanding and diverse group of scholars to discuss these issues, including:

  • Timothy J. Brennan (Professor, Public Policy and Economics, University of Maryland; former Chief Economist, FCC; former economist, DOJ Antitrust Division)
  • Steven Cernak (Partner, Bona Law PC; former antitrust counsel, GM)
  • Luke M. Froeb (William C. Oehmig Chair in Free Enterprise and Entrepreneurship, Vanderbilt University; former Chief Economist, DOJ Antitrust Division; former Chief Economist, FTC)
  • Eric Fruits (Chief Economist, ICLE; Professor of Economics, Portland State University)
  • Douglas H. Ginsburg (Senior Circuit Judge, US Court of Appeals for the DC Circuit; Professor of Law, George Mason University; former Assistant Attorney General, DOJ Antitrust Division)
  • Herbert Hovenkamp (James G. Dinan University Professor of Law, University of Pennsylvania)
  • Jonathan M. Jacobson (Partner, Wilson Sonsini Goodrich & Rosati)
  • William J. Kolasky (Partner, Hughes Hubbard & Reed; former Deputy Assistant Attorney General, DOJ Antitrust Division)
  • Tad Lipsky (Assistant Professor of Law, George Mason University; former Acting Director, FTC Bureau of Competition; former chief antitrust counsel, Coca-Cola; former Deputy Assistant Attorney General, DOJ Antitrust Division) 
  • Geoffrey A. Manne (President & Founder, ICLE; Distinguished Fellow, Northwestern University Center on Law, Business, and Economics)
  • Jonathan E. Nuechterlein (Partner, Sidley Austin LLP; former General Counsel, FTC; former Deputy General Counsel, FCC)
  • Sharis A. Pozen (Partner, Clifford Chance; former Vice President of Global Competition Law and Policy, GE; former Acting Assistant Attorney General, DOJ Antitrust Division) 
  • Jan Rybnicek (Counsel, Freshfields Bruckhaus Deringer; former attorney adviser to Commissioner Joshua D. Wright, FTC)
  • Steven C. Salop (tent.) (Professor of Economics and Law, Georgetown University; former Associate Director, FTC Bureau of Economics)
  • Scott A. Sher (Partner, Wilson Sonsini Goodrich & Rosati)
  • Margaret Slade (Professor Emeritus, Vancouver School of Economics, University of British Columbia)
  • Kristian Stout (Associate Director, ICLE)
  • Gregory Werden (former Senior Economic Counsel, DOJ Antitrust Division)
  • Lawrence J. White (Robert Kavesh Professor of Economics, New York University; former Chief Economist, DOJ Antitrust Division)
  • Joshua D. Wright (University Professor of Law, George Mason University; former Commissioner, FTC)
  • John M. Yun (Associate Professor of Law, George Mason University; former Acting Deputy Assistant Director, FTC Bureau of Economics)

We want to thank all of these excellent panelists for agreeing to take time away from their busy schedules to participate in this symposium. We are hopeful that this discussion will provide invaluable insight and perspective on the Draft Joint Vertical Merger Guidelines.

Look for the first posts starting Thursday, February 6!

Jonathan B. Baker, Nancy L. Rose, Steven C. Salop, and Fiona Scott Morton don’t like vertical mergers:

Vertical mergers can harm competition, for example, through input foreclosure or customer foreclosure, or by the creation of two-level entry barriers.  … Competitive harms from foreclosure can occur from the merged firm exercising its increased bargaining leverage to raise rivals’ costs or reduce rivals’ access to the market. Vertical mergers also can facilitate coordination by eliminating a disruptive or “maverick” competitor at one vertical level, or through information exchange. Vertical mergers also can eliminate potential competition between the merging parties. Regulated firms can use vertical integration to evade rate regulation. These competitive harms normally occur when at least one of the markets has an oligopoly structure. They can lead to higher prices, lower output, quality reductions, and reduced investment and innovation.

Baker et al. go so far as to argue that any vertical merger in which the downstream firm is subject to price regulation should face a presumption that the merger is anticompetitive.

George Stigler’s well-known article on vertical integration identifies several ways in which vertical integration increases welfare by subverting price controls:

The most important of these other forces, I believe, is the failure of the price system (because of monopoly or public regulation) to clear markets at prices within the limits of the marginal cost of the product (to the buyer if he makes it) and its marginal-value product (to the seller if he further fabricates it). This phenomenon was strikingly illustrated by the spate of vertical mergers in the United States during and immediately after World War II, to circumvent public and private price control and allocations. A regulated price of OA was set (Fig. 2), at which an output of OM was produced. This quantity had a marginal value of OB to buyers, who were rationed on a nonprice basis. The gain to buyers  and sellers combined from a free price of NS was the shaded area, RST, and vertical integration was the simple way of obtaining this gain. This was the rationale of the integration of radio manufacturers into cabinet manufacture, of steel firms into fabricated products, etc.

Stigler was on to something:

  • In 1947, Emerson Radio acquired Plastimold, a maker of plastic radio cabinets. The president of Emerson at the time, Benjamin Abrams, stated “Plastimold is an outstanding producer of molded radio cabinets and gives Emerson an assured source of supply of one of the principal components in the production of radio sets.” [emphasis added] 
  • In the same year, the Congressional Record reported, “Admiral Corp. like other large radio manufacturers has reached out to take over a manufacturer of radio cabinets, the Chicago Cabinet Corp.” 
  • In 1948, the Federal Trade Commission cited wartime price controls and shortages as reasons for vertical mergers in the textiles industry, as well as for distillers’ acquisitions of wineries.

While there may have been some public policy rationale for price controls, it’s clear the controls resulted in shortages and a deadweight loss in many markets. As such, it’s likely that vertical integration to avoid the price controls improved consumer welfare (if only slightly, as in the figure above) and reduced the deadweight loss.

Rather than leading to monopolization, Stigler provides examples in which vertical integration was employed to circumvent monopolization by cartel quotas and/or price-fixing: “Almost every raw-material cartel has had trouble with customers who wish to integrate backward, in order to negate the cartel prices.”

In contrast to Stigler’s analysis, Salop and Daniel P. Culley begin from an implied assumption that where price regulation occurs, the controls are good for society. Thus, they argue, avoidance of the price controls is harmful or against the public interest:

Example: The classic example is the pre-divestiture behavior of AT&T, which allegedly used its purchases of equipment at inflated prices from its wholly-owned subsidiary, Western Electric, to artificially increase its costs and so justify higher regulated prices.

This claim is supported by the court in U.S. v. AT&T [emphasis added]:

The Operating Companies have taken these actions, it is said, because the existence of rate of return regulation removed from them the burden of such additional expense, for the extra cost could simply be absorbed into the rate base or expenses, allowing extra profits from the higher prices to flow upstream to Western rather than to its non-Bell competition.

Even so, the pass-through of higher costs seems only a minor concern to the court relative to the “three hats” worn by AT&T and its subsidiaries in the (1) setting of standards, (2) counseling of operating companies in their equipment purchases, and (3) production of equipment for sale to the operating companies [emphasis added]:

The government’s evidence has depicted defendants as sole arbiters of what equipment is suitable for use in the Bell System, a role that carries with it a power of subjective judgment that can be and has been used to advance the sale of Western Electric’s products at the expense of the general trade. First, AT&T, in conjunction with Bell Labs and Western Electric, sets the technical standards under which the telephone network operates and the compatibility specifications which equipment must meet. Second, Western Electric and Bell Labs … serve as counselors to the Operating Companies in their procurement decisions, ostensibly helping them to purchase equipment that meets network standards. Third, Western also produces equipment for sale to the Operating Companies in competition with general trade manufacturers.

The upshot of this “wearing of three hats” is, according to the government’s evidence, a rather obviously anticompetitive situation. By setting technical or compatibility standards and by either not communicating these standards to the general trade or changing them in mid-stream, AT&T has the capacity to remove, and has in fact removed, general trade products from serious consideration by the Operating Companies on “network integrity” grounds. By either refusing to evaluate general trade products for the Operating Companies or producing biased or speculative evaluations, AT&T has been able to influence the Operating Companies, which lack independent means to evaluate general trade products, to buy Western. And the in-house production and sale of Western equipment provides AT&T with a powerful incentive to exercise its “approval” power to discriminate against Western’s competitors.

It’s important to keep in mind that rate-of-return regulation was not thrust upon AT&T; it was a quid pro quo in which state and federal regulators acted to eliminate AT&T/Bell competitors in exchange for price regulation. In a floor speech to Congress in 1921, Rep. William J. Graham declared:

It is believed to be better policy to have one telephone system in a community that serves all the people, even though it may be at an advanced rate, properly regulated by State boards or commissions, than it is to have two competing telephone systems.

For purposes of Salop and Culley’s integration-to-evade-price-regulation example, it’s important to keep in mind that AT&T acquired Western Electric in 1882, or about two decades before telephone pricing regulation was contemplated and eight years before the Sherman Antitrust Act. While AT&T may have used vertical integration to take advantage of rate-of-return price regulation, it’s simply not true that AT&T acquired Western Electric to evade price controls.

Salop and Culley provide a more recent example:

Example: Potential evasion of regulation concerns were raised in the FTC’s analysis in 2008 of the Fresenius/Daiichi Sankyo exclusive sub-license for a Daiichi Sankyo pharmaceutical used in Fresenius’ dialysis clinics, which potentially could allow evasion of Medicare pricing regulations.

As with the AT&T example, this example is not about evasion of price controls. Rather it raises concerns about taking advantage of Medicare’s pricing formula. 

At the time of the deal, Medicare reimbursed dialysis clinics based on a drug manufacturer’s Average Sales Price (“ASP”) plus six percent, where ASP was calculated by averaging the prices paid by all customers, including any discounts or rebates. 

The FTC argued that by setting an artificially high transfer price for the drug to Fresenius, the merged firm would raise the ASP, thereby increasing the Medicare reimbursement to all clinics providing the same drug (which would not only increase costs to Medicare but also increase income to all clinics providing the drug). Although the FTC claims this would be anticompetitive, the agency does not describe in what ways competition would be harmed.
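The mechanics of the ASP concern are pure arithmetic and can be sketched with invented numbers. The ASP-plus-6-percent reimbursement rule is from the source; the prices and volumes below are hypothetical:

```python
# Hypothetical numbers: an inflated transfer price to an affiliated buyer
# raises the volume-weighted Average Sales Price (ASP), and with it the
# Medicare reimbursement (ASP + 6%) paid to every clinic using the drug.

def asp(prices_and_volumes):
    """Volume-weighted average sales price across all customers."""
    total_revenue = sum(p * q for p, q in prices_and_volumes)
    total_units = sum(q for _, q in prices_and_volumes)
    return total_revenue / total_units

# (price per unit, units sold): independent clinics pay $10 in both scenarios
before = [(10.0, 900), (10.0, 100)]  # affiliated clinic also pays $10
after  = [(10.0, 900), (50.0, 100)]  # affiliated clinic pays an inflated $50

for label, sales in (("before", before), ("after", after)):
    a = asp(sales)
    print(f"{label}: ASP = ${a:.2f}, Medicare pays = ${a * 1.06:.2f}")
# before: ASP = $10.00, Medicare pays = $10.60
# after:  ASP = $14.00, Medicare pays = $14.84
```

A 10 percent slice of volume at five times the price lifts the reimbursement to every clinic, which is why the concern is about the pricing formula rather than competition as such.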

The FTC introduces an interesting wrinkle in noting that a few years after the deal would have been completed, “substantial changes to the Medicare program relating to dialysis services … would eliminate the regulations that give rise to the concerns created by the proposed transaction.” Specifically, payment for dialysis services would shift from fee-for-service to capitation.

This wrinkle highlights a serious problem with a presumption that any purported evasion of price controls is an antitrust violation. Namely, if the controls go away, so does the antitrust violation. 

Conversely, as Salop and Culley seem to argue with their AT&T example, a vertical merger could be retroactively declared anticompetitive if price controls are imposed after the merger is completed (even decades later, and even if the price regulations were never anticipated at the time of the merger). 

It’s one thing to argue that avoiding price regulation runs counter to the public interest, but it’s another thing to argue that it is anticompetitive. Indeed, as Stigler argues, if the price controls stifle competition, then avoidance of the controls may enhance competition. Placing such mergers under heightened scrutiny, such as an anticompetitive presumption, is a solution in search of a problem.

An oft-repeated claim at conferences, in the media, and among left-wing think tanks is that lax antitrust enforcement has recently led to a substantial increase in concentration in the US economy, strangling the economy, harming workers, and saddling consumers with greater markups in the process. But what if rising concentration (and the current level of antitrust enforcement) were an indication of more competition, not less?

By now the concentration-as-antitrust-bogeyman story is virtually conventional wisdom, echoed, of course, by political candidates such as Elizabeth Warren trying to cash in on the need for a government response to such dire circumstances:

In industry after industry — airlines, banking, health care, agriculture, tech — a handful of corporate giants control more and more. The big guys are locking out smaller, newer competitors. They are crushing innovation. Even if you don’t see the gears turning, this massive concentration means prices go up and quality goes down for everything from air travel to internet service.  

But the claim that lax antitrust enforcement has led to increased concentration in the US and that this has caused economic harm has been debunked several times (for some of our own debunking, see Eric Fruits’ posts here, here, and here). Or, more charitably to those who tirelessly repeat the claim as if it were “settled science,” it has been significantly called into question.

Most recently, several working papers that look at the concentration data in detail and attempt to identify the likely cause of the observed trends show precisely the opposite relationship. The reason for increased concentration appears to be technological, not anticompetitive. And, as might be expected from that cause, its effects are beneficial. Indeed, the story is both intuitive and positive.

What’s more, while national concentration does appear to be increasing in some sectors of the economy, it’s not actually so clear that the same is true for local concentration — which is often the relevant antitrust market.

The most recent — and, I believe, most significant — corrective to the conventional story comes from economists Chang-Tai Hsieh of the University of Chicago and Esteban Rossi-Hansberg of Princeton University. As they write in a recent paper titled, “The Industrial Revolution in Services”: 

We show that new technologies have enabled firms that adopt them to scale production over a large number of establishments dispersed across space. Firms that adopt this technology grow by increasing the number of local markets that they serve, but on average are smaller in the markets that they do serve. Unlike Henry Ford’s revolution in manufacturing more than a hundred years ago when manufacturing firms grew by concentrating production in a given location, the new industrial revolution in non-traded sectors takes the form of horizontal expansion across more locations. At the same time, multi-product firms are forced to exit industries where their productivity is low or where the new technology has had no effect. Empirically we see that top firms in the overall economy are more focused and have larger market shares in their chosen sectors, but their size as a share of employment in the overall economy has not changed. (pp. 42-43) (emphasis added).

This makes perfect sense. And it has the benefit of not second-guessing structural changes made in response to technological change. Rather, it points to technological change as doing what it regularly does: improving productivity.

The implementation of new technology seems to be conferring benefits — it’s just that these benefits are not evenly distributed across all firms and industries. But the assumption that larger firms are causing harm (or even that there is any harm in the first place, whatever the cause) is unmerited. 

What the authors find is that the apparent rise in national concentration doesn’t tell the relevant story, and the data certainly aren’t consistent with assumptions that anticompetitive conduct is either a cause or a result of structural changes in the economy.

Hsieh and Rossi-Hansberg point out that increased concentration is not happening everywhere, but is being driven by just three broad sectors:

First, we show that the phenomena of rising concentration . . . is only seen in three broad sectors – services, wholesale, and retail. . . . [T]op firms have become more efficient over time, but our evidence indicates that this is only true for top firms in these three sectors. In manufacturing, for example, concentration has fallen.

Second, rising concentration in these sectors is entirely driven by an increase [in] the number of local markets served by the top firms. (p. 4) (emphasis added).

These findings are a gloss on a (then) working paper — The Fall of the Labor Share and the Rise of Superstar Firms — by David Autor, David Dorn, Lawrence F. Katz, Christina Patterson, and John Van Reenan (now forthcoming in the QJE). Autor et al. (2019) finds that concentration is rising, and that it is the result of increased productivity:

If globalization or technological changes push sales towards the most productive firms in each industry, product market concentration will rise as industries become increasingly dominated by superstar firms, which have high markups and a low labor share of value-added.

We empirically assess seven predictions of this hypothesis: (i) industry sales will increasingly concentrate in a small number of firms; (ii) industries where concentration rises most will have the largest declines in the labor share; (iii) the fall in the labor share will be driven largely by reallocation rather than a fall in the unweighted mean labor share across all firms; (iv) the between-firm reallocation component of the fall in the labor share will be greatest in the sectors with the largest increases in market concentration; (v) the industries that are becoming more concentrated will exhibit faster growth of productivity; (vi) the aggregate markup will rise more than the typical firm’s markup; and (vii) these patterns should be observed not only in U.S. firms, but also internationally. We find support for all of these predictions. (emphasis added).

This alone is quite important (and seemingly often overlooked). Autor et al. (2019) finds that rising concentration is a result of increased productivity that weeds out less-efficient producers. This is a good thing. 

But Hsieh & Rossi-Hansberg drill down into the data to find something perhaps even more significant: the rise in concentration itself is limited to just a few sectors, and, where it is observed, it is predominantly a function of more efficient firms competing in more — and more localized — markets. This means that competition is increasing, not decreasing, whether it is accompanied by an increase in concentration or not. 

No matter how many times and under how many monikers the antitrust populists try to revive it, the Structure-Conduct-Performance paradigm remains as moribund as ever. Indeed, on this point, as one of the new antitrust agonists’ own, Fiona Scott Morton, has written (along with co-authors Martin Gaynor and Steven Berry):

In short, there is no well-defined “causal effect of concentration on price,” but rather a set of hypotheses that can explain observed correlations of the joint outcomes of price, measured markups, market share, and concentration. As Bresnahan (1989) argued three decades ago, no clear interpretation of the impact of concentration is possible without a clear focus on equilibrium oligopoly demand and “supply,” where supply includes the list of the marginal cost functions of the firms and the nature of oligopoly competition. 

Some of the recent literature on concentration, profits, and markups has simply reasserted the relevance of the old-style structure-conduct-performance correlations. For economists trained in subfields outside industrial organization, such correlations can be attractive. 

Our own view, based on the well-established mainstream wisdom in the field of industrial organization for several decades, is that regressions of market outcomes on measures of industry structure like the Herfindahl-Hirschman Index should be given little weight in policy debates. Such correlations will not produce information about the causal estimates that policy demands. It is these causal relationships that will help us understand what, if anything, may be causing markups to rise. (emphasis added).
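For concreteness, the Herfindahl-Hirschman Index the authors mention is simply the sum of squared market shares. A minimal sketch, with invented shares:

```python
def hhi(shares_pct):
    """Herfindahl-Hirschman Index: the sum of squared market shares,
    with shares in percentage points, so the index runs from near 0
    (an atomistic market) to 10,000 (a pure monopoly)."""
    return sum(s ** 2 for s in shares_pct)

print(hhi([25, 25, 25, 25]))  # four symmetric firms -> 2500
print(hhi([40, 30, 20, 10]))  # same firm count, skewed shares -> 3000
```

The quoted passage’s point is precisely that this number, however computed, says nothing about *why* shares are what they are, which is the causal question policy demands.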

Indeed! And one reason for the enduring irrelevance of market concentration measures is well laid out in Hsieh and Rossi-Hansberg’s paper:

This evidence is consistent with our view that increasing concentration is driven by new ICT-enabled technologies that ultimately raise aggregate industry TFP. It is not consistent with the view that concentration is due to declining competition or entry barriers . . . , as these forces will result in a decline in industry employment. (pp. 4-5) (emphasis added)

The net effect is that there is essentially no change in concentration by the top firms in the economy as a whole. The “super-star” firms of today’s economy are larger in their chosen sectors and have unleashed productivity growth in these sectors, but they are not any larger as a share of the aggregate economy. (p. 5) (emphasis added)

Thus, to begin with, the claim that increased concentration leads to monopsony in labor markets (and thus unemployment) appears to be false. Hsieh and Rossi-Hansberg again:

[W]e find that total employment rises substantially in industries with rising concentration. This is true even when we look at total employment of the smaller firms in these industries. (p. 4)

[S]ectors with more top firm concentration are the ones where total industry employment (as a share of aggregate employment) has also grown. The employment share of industries with increased top firm concentration grew from 70% in 1977 to 85% in 2013. (p. 9)

Firms throughout the size distribution increase employment in sectors with increasing concentration, not only the top 10% firms in the industry, although by definition the increase is larger among the top firms. (p. 10) (emphasis added)

Again, what appears to be happening is that national-level growth in concentration is actually being driven by increased competition in certain industries at the local level:

93% of the growth in concentration comes from growth in the number of cities served by top firms, and only 7% comes from increased employment per city. . . . [A]verage employment per county and per establishment of top firms falls. So necessarily more than 100% of concentration growth has to come from the increase in the number of counties and establishments served by the top firms. (p.13)

The net effect is a decrease in the power of top firms relative to the economy as a whole, as the largest firms specialize more, and are dominant in fewer industries:

Top firms produce in more industries than the average firm, but less so in 2013 compared to 1977. The number of industries of a top 0.001% firm (relative to the average firm) fell from 35 in 1977 to 17 in 2013. The corresponding number for a top 0.01% firm is 21 industries in 1977 and 9 industries in 2013. (p. 17)

Thus, summing up, technology has led to increased productivity as well as greater specialization by large firms, especially in relatively concentrated industries (exactly the opposite of the pessimistic stories):  

[T]op firms are now more specialized, are larger in the chosen industries, and these are precisely the industries that have experienced concentration growth. (p. 18)

Unsurprisingly (except to some…), the increase in concentration in certain industries does not translate into an increase in concentration in the economy as a whole. In other words, workers can shift jobs between industries, and there is enough geographic and firm mobility to prevent monopsony. (Despite rampant assumptions that increased concentration is constraining labor competition everywhere…).

Although the employment share of top firms in an average industry has increased substantially, the employment share of the top firms in the aggregate economy has not. (p. 15)

It is also simply not clearly the case that concentration is causing prices to rise or otherwise causing any harm. As Hsieh and Rossi-Hansberg note:

[T]he magnitude of the overall trend in markups is still controversial . . . and . . . the geographic expansion of top firms leads to declines in local concentration . . . that could enhance competition. (p. 37)

Indeed, recent papers such as Traina (2018), Gutiérrez and Philippon (2017), and the IMF (2019) have found increasing markups over the last few decades but at much more moderate rates than the famous De Loecker and Eeckhout (2017) study. Other parts of the anticompetitive narrative have been challenged as well. Karabarbounis and Neiman (2018) finds that profits have increased, but are still within their historical range. Rinz (2018) shows decreased wages in concentrated markets but also points out that local concentration has been decreasing over the relevant time period.

None of this should be so surprising. Has antitrust enforcement gotten more lax, leading to greater concentration? According to Vita and Osinski (2018), not so much. And how about the stagnant rate of new firms? Are incumbent monopolists killing off new startups? The more likely — albeit mundane — explanation, according to Hopenhayn et al. (2018), is that increased average firm age is due to an aging labor force. Lastly, the paper from Hsieh and Rossi-Hansberg discussed above is only the latest in a series of papers, including Bessen (2017), Van Reenen (2018), and Autor et al. (2019), that show a rise in fixed costs due to investments in proprietary information technology, which correlates with increased concentration.

So what is the upshot of all this?

  • First, as noted, employment has not decreased because of increased concentration; quite the opposite. Employment has increased in the industries that have experienced the most concentration at the national level.
  • Second, this result suggests that the rise in concentrated industries has not led to increased market power over labor.
  • Third, concentration itself needs to be understood more precisely. It is not explained by a simple narrative that the economy as a whole has experienced a great deal of concentration and this has been detrimental for consumers and workers. Specific industries have experienced national level concentration, but simultaneously those same industries have become more specialized and expanded competition into local markets. 

Surprisingly (because their paper has been around for a while and yet this conclusion is rarely recited by advocates for more intervention — although they happily use the paper to support claims of rising concentration), Autor et al. (2019) finds the same thing:

Our formal model, detailed below, generates superstar effects from increases in the toughness of product market competition that raise the market share of the most productive firms in each sector at the expense of less productive competitors. . . . An alternative perspective on the rise of superstar firms is that they reflect a diminution of competition, due to a weakening of U.S. antitrust enforcement (Dottling, Gutierrez and Philippon, 2018). Our findings on the similarity of trends in the U.S. and Europe, where antitrust authorities have acted more aggressively on large firms (Gutierrez and Philippon, 2018), combined with the fact that the concentrating sectors appear to be growing more productive and innovative, suggests that this is unlikely to be the primary explanation, although it may [be] important in some specific industries (see Cooper et al, 2019, on healthcare for example). (emphasis added).

The popular narrative among Neo-Brandeisian antitrust scholars that lax antitrust enforcement has led to concentration detrimental to society is at base an empirical one. The findings of these empirical papers severely undermine the persuasiveness of that story.

Today, Reuters reports that Germany-based ThyssenKrupp has received bids from three groups for a majority stake in the firm’s elevator business. Finland’s Kone teamed with private equity firm CVC to bid on the company. Private equity firms Blackstone and Carlyle joined with the Canada Pension Plan Investment Board to submit a bid. A third bid came from Advent, Cinven, and the Abu Dhabi Investment Authority.

Also today — in anticipation of the long-rumored and much-discussed sale of ThyssenKrupp’s elevator business — the International Center for Law & Economics released The Antitrust Risks of Four To Three Mergers: Heightened Scrutiny of a Potential ThyssenKrupp/Kone Merger, by Eric Fruits and Geoffrey A. Manne. This study examines the heightened scrutiny of four to three mergers by competition authorities in the current regulatory environment, using a potential ThyssenKrupp/Kone merger as a case study. 

In recent years, regulators have become more aggressive in merger enforcement in response to populist criticisms that lax merger enforcement has led to the rise of anticompetitive “big business.” In this environment, it is easy to imagine regulators intensely scrutinizing and challenging or conditioning nearly any merger that substantially increases concentration. 

This potential deal provides an opportunity to highlight the likely challenges, complexity, and cost that regulatory scrutiny of such mergers actually entails — and it is likely to be a far cry from the lax review and permissive decisionmaking of antitrust critics’ imagining.

In the case of a potential ThyssenKrupp/Kone merger, the combined entity would face lengthy, costly, and duplicative review in multiple jurisdictions, any one of which could effectively block the merger or impose onerous conditions. It would face the automatic assumption of excessive concentration in several of these, including the US, EU, and Canada. In the US, the deal would also face heightened scrutiny based on political considerations, including the perception that the deal would strengthen a foreign firm at the expense of a domestic supplier. It would also face the risk of politicized litigation from state attorneys general, and potentially the threat of extractive litigation by competitors and customers.

Whether the merger would actually entail anticompetitive risk may, unfortunately, be of only secondary importance in determining the likelihood and extent of a merger challenge or the imposition of onerous conditions.

A “highly concentrated” market

The four to three merger would likely trigger a “highly concentrated” market designation in many jurisdictions. With the merging firms holding a dominant share of the elevator market, the deal would be viewed as problematic in:

  • The US (share > 35%, HHI > 3,000, HHI increase > 700), 
  • Canada (share of approximately 50%, HHI > 2,900, HHI increase of 1,000), 
  • Australia (share > 40%, HHI > 3,100, HHI increase > 500), 
  • Europe (shares of 33–65%, HHIs in excess of 2,700, and HHI increases of 270 or higher in Sweden, Finland, Netherlands, Austria, France, and Luxembourg).
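The HHI figures above follow from summing squared market shares. A quick sketch with purely hypothetical shares shows how a four-to-three merger moves the index (the function is standard, but the numbers are illustrative, not actual elevator-market figures):

```python
# Herfindahl-Hirschman Index for a hypothetical four-to-three merger.
# Shares are illustrative only, not actual elevator-market figures.
def hhi(shares):
    """HHI: sum of squared percentage market shares (maximum 10,000)."""
    return sum(s ** 2 for s in shares)

pre = [30, 25, 25, 20]       # four firms, shares in percent
post = [30, 25 + 25, 20]     # the two 25% firms merge

delta = hhi(post) - hhi(pre)  # always equals 2 * s1 * s2 for the merging firms
print(hhi(pre), hhi(post), delta)  # 2550 3800 1250
```

Under the 2010 US Horizontal Merger Guidelines, a post-merger HHI above 2,500 combined with an increase of more than 200 points triggers a presumption of enhanced market power, which is why figures like those listed above would draw scrutiny.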

As with most mergers, a potential ThyssenKrupp/Kone merger would likely generate “hot docs” that would be used to support the assumption of anticompetitive harm from the increase in concentration, especially in light of past allegations of price fixing in the industry and a decision by the European Commission in 2007 to fine certain companies in the industry for alleged anticompetitive conduct.

Political risks

The merger would also surely face substantial political risks in the US and elsewhere from the perception the deal would strengthen a foreign firm at the expense of a domestic supplier. President Trump’s administration has demonstrated a keen interest in protecting what it sees as US interests vis-à-vis foreign competition. As a high-rise and hotel developer who has shown a willingness to intervene in antitrust enforcement to protect his interests, President Trump may have a heightened personal interest in a ThyssenKrupp/Kone merger. 

To the extent that US federal, state, and local governments purchase products from the merging parties, the deal would likely be subjected to increased attention from federal antitrust regulators as well as states’ attorneys general. Indeed, the US Department of Justice (DOJ) has created a “Procurement Collusion Strike Force” focused on “deterring, detecting, investigating and prosecuting antitrust crimes . . . which undermine competition in government procurement. . . .”

The deal may also face scrutiny from EC, UK, Canadian, and Australian competition authorities, each of which has exhibited increased willingness to thwart such mergers. For example, the EU recently blocked a proposed merger between the rail transport businesses of the EU firms Siemens and Alstom. The UK recently blocked a series of major deals that had only limited competitive effects in the UK. In one of these, Thermo Fisher Scientific’s proposed acquisition of Roper Technologies’ Gatan subsidiary was not challenged in the US, but the deal was abandoned after the UK CMA decided to block it despite its limited connections to the UK.

Economic risks

In addition to the structural and political factors that may lead to blocking a four to three merger, several economic factors may further exacerbate the problem. While these, too, may be wrongly deemed problematic in particular cases by reviewing authorities, they are — relatively at least — better-supported by economic theory in the abstract. Moreover, even where wrongly applied, they are often impossible to refute successfully given the relevant standards. And such alleged economic concerns can act as an effective smokescreen for blocking a merger based on the sorts of political and structural considerations discussed above. Some of these economic factors include:

  • Barriers to entry. IBISWorld identifies barriers to entry to include economies of scale, long-standing relationships with existing buyers, as well as long records of safety and reliability. Strictly speaking, these are not costs borne only by a new entrant, and thus should not be deemed competitively-relevant entry barriers. Yet merger review authorities the world over fail to recognize this distinction, and routinely scuttle mergers based simply on the costs faced by additional competitors entering the market.
  • Potential unilateral effects. The extent of direct competition between the products and services sold by the merging parties is a key part of the evaluation of unilateral price effects. Competition authorities would likely consider a significant range of information to evaluate the extent of direct competition between the products and services sold by ThyssenKrupp and its merger partner. In addition to “hot docs,” this information could include won/lost bid reports as well as evidence from discount approval processes and customer switching patterns. Because the purchase of elevator and escalator products and services involves negotiation by sophisticated and experienced buyers, it is likely that this type of bid information would be readily available for review.
  • A history of coordinated conduct involving ThyssenKrupp and Kone. Competition authorities will also consider the risk that a four to three merger will make collusion among the remaining, smaller number of firms easier and more likely. In 2007 the European Commission imposed a €992 million cartel fine on five elevator firms: ThyssenKrupp, Kone, Schindler, United Technologies, and Mitsubishi. At the time, it was the largest-ever cartel fine. Several companies, including Kone and UTC, admitted wrongdoing.

Conclusion

As “populist” antitrust gains more traction among enforcers aiming to stave off criticisms of lax enforcement, superficial and non-economic concerns have increased salience. The simple benefit of a resounding headline — “The US DOJ challenges increased concentration that would stifle the global construction boom” — signaling enforcers’ efforts to thwart further increases in concentration and save blue collar jobs is likely to be viewed by regulators as substantial. 

Coupled with the arguably more robust, potential economic arguments involving unilateral and coordinated effects arising from such a merger, a four to three merger like a potential ThyssenKrupp/Kone transaction would be sure to attract significant scrutiny and delay. Any arguments that such a deal might actually decrease prices and increase efficiency are — even if valid — less likely to gain as much traction in today’s regulatory environment.

This guest post is by Jonathan M. Barnett, Torrey H. Webb Professor of Law, University of Southern California Gould School of Law.

It has become virtually received wisdom that antitrust law has been subdued by economic analysis into a state of chronic underenforcement. Following this line of thinking, many commentators applauded the Antitrust Division’s unsuccessful campaign to oppose the acquisition of Time-Warner by AT&T and some (unsuccessfully) urged the Division to take stronger action against the acquisition of most of Fox by Disney. The arguments in both cases followed a similar “big is bad” logic. Consolidating control of a large portfolio of creative properties (Fox plus Disney) or integrating content production and distribution capacities (Time-Warner plus AT&T) would exacerbate market concentration, leading to reduced competition and some combination of higher prices and reduced product for consumers. 

Less than 18 months after the closing of both transactions, those concerns seem to have been largely unwarranted. 

Far from precipitating any decline in product output or variety, both transactions have been followed by a vigorous burst of competition in the digital streaming market. In place of the Amazon plus Netflix bottleneck (with Hulu trailing behind), consumers now have, or in 2020 will have, a choice of at least four new streaming services with original content: Disney+, AT&T’s “HBO Max,” Apple’s “Apple TV+,” and Comcast’s NBCUniversal “Peacock.” Critically, each service relies on a formidable combination of creative, financing, and technological capacities that can only be delivered by a firm of sufficiently large size and scale. As modern antitrust law has long recognized, it turns out that “big” is sometimes not bad.

Where’s the Harm?

At present, it is hard to see any net consumer harm arising from the concurrence of increased size and increased competition. 

On the supply side, this is just the next episode in the ongoing “Golden Age of Television” in which content producers have enjoyed access to exceptional funding to support high-value productions.  It has been reported that Apple TV+’s new “Morning Show” series will cost $15 million per episode while similar estimates are reported for hit shows such as HBO’s “Game of Thrones” and Netflix’s “The Crown.”  Each of those services is locked in a fierce competition to gain and retain sufficient subscribers to earn a return on those investments, which leads directly to the next happy development.

On the demand side, consumers enjoy a proliferating array of streaming services, ranging from free ad-supported services to subscription ad-free services. Consumers can now easily “cut the cord” and assemble a customized bundle of preferred content from multiple services, each of which is less costly than a traditional cable package and can generally be cancelled at any time.  Current market performance does not plausibly conform to the declining output, limited variety or increasing prices that are the telltale symptoms of a less than competitive market.

Real-World v. Theoretical Markets

The market’s favorable trajectory following these two controversial transactions should not be surprising. When scrutinized against the actual characteristics of real-world digital content markets, rather than stylized theoretical models or antiquated pre-digital content markets, the arguments leveled against these transactions never made much sense. There were two fundamental and related errors. 

Error #1: Content is Scarce

Advocates for antitrust intervention assumed that entry barriers into the content market were high, in which case it followed that the owner of an especially valuable creative portfolio could exert pricing power to consumers’ detriment. Yet, in reality, funding for content production is plentiful and even a service that has an especially popular show is unlikely to have sustained pricing power in the face of a continuous flow of high-value productions being released by formidable competitors. The amounts being spent on content in 2019 by leading streaming services are unprecedented, ranging from a reported $15 billion for Netflix to an estimated $6 billion for Amazon and Apple TV+ to an estimated $3.9 billion for AT&T’s HBO Max. It is also important to note that a hit show is often a mobile asset that a streaming or other video distribution service has licensed from independent production companies and other rights holders. Once the existing deal expires, those rights are available for purchase by the highest bidder. For example, in 2019, Netflix purchased the streaming rights to “Seinfeld”, Viacom purchased the cable rights to “Seinfeld”, and HBO Max purchased the streaming rights to “South Park.” Similarly, the producers behind a hit show are always free to take their talents to competitors once any existing agreement terminates.

Error #2: Home Pay-TV is a “Monopoly”

Advocates of antitrust action were looking at the wrong market—or more precisely, the market as it existed about a decade ago. The theory that AT&T’s acquisition of Time-Warner’s creative portfolio would translate into pricing power in the home pay-TV market might have been plausible when consumers had no reasonable alternative to the local cable provider. But this argument makes little sense today, when consumers are fleeing bulky home pay-TV bundles for cheaper cord-cutting options that deliver more targeted content packages to a mobile device. In 2019, the “home” pay-TV market is fast becoming an anachronism, and hence a home pay-TV “monopoly” largely reduces to a formalism that, with the possible exception of certain live programming, is unlikely to translate into meaningful pricing power. 

Wait a Second! What About the HBO Blackout?

A skeptical reader might reasonably object that this mostly rosy account of the post-merger home video market is unpersuasive since it does not address the ongoing blackout of HBO (now an AT&T property) on the Dish satellite TV service. Post-merger commentary that remains skeptical of the AT&T/Time-Warner merger has focused on this dispute, arguing that it “proves” that the government was right since AT&T is purportedly leveraging its new ownership of HBO to disadvantage one of its competitors in the pay-TV market. This interpretation tends to miss the forest for the trees (or more precisely, a tree).  

The AT&T/Dish dispute over HBO is only one of over 200 “carriage” disputes resulting in blackouts that have occurred this year, which continues an upward trend since approximately 2011. Some of those include Dish’s dispute with Univision (settled in March 2019 after a nine-month blackout) and AT&T’s dispute (as pay-TV provider) with Nexstar (settled in August 2019 after a nearly two-month blackout). These disputes reflect the fact that the flood of subscriber defections from traditional pay-TV to mobile streaming has made it difficult for pay-TV providers to pass on the fees sought by content owners. As a result, some pay-TV providers adopt the negotiating tactic of choosing to drop certain content until the terms improve, just as AT&T, in its capacity as a pay-TV provider, dropped CBS for three weeks in July and August 2019 pending renegotiation of licensing terms. It is the outward shift in the boundaries of the economically relevant market (from home to home-plus-mobile video delivery), rather than market power concerns, that best accounts for periodic breakdowns in licensing negotiations.  This might even be viewed positively from an antitrust perspective since it suggests that the “over the top” market is putting pressure on the fees that content owners can extract from providers in the traditional pay-TV market.

Concluding Thoughts

It is common to argue today that antitrust law has become excessively concerned about “false positives”– that is, the possibility of blocking a transaction or enjoining a practice that would have benefited consumers. Pending future developments, this early post-mortem on the regulatory and judicial treatment of these two landmark media transactions suggests that there are sometimes good reasons to stay the hand of the court or regulator. This is especially the case when a generational market shift is in progress and any regulator’s or judge’s foresight is likely to be guesswork. Antitrust law’s “failure” to stop these transactions may turn out to have been a ringing success.

Wall Street Journal commentator, Greg Ip, reviews Thomas Philippon’s forthcoming book, The Great Reversal: How America Gave Up On Free Markets. Ip describes a “growing mountain” of research on industry concentration in the U.S. and reports that Philippon concludes competition has declined over time, harming U.S. consumers.

In one example, Philippon points to air travel. He notes that concentration in the U.S. has increased rapidly—spiking since the Great Recession—while concentration in the EU has increased modestly. At the same time, Ip reports “U.S. airlines are now far more profitable than their European counterparts.” (Although it’s debatable whether a five percentage point difference in net profit margin is “far more profitable”). 

On first impression, the figures fit nicely with the populist antitrust narrative: As concentration in the U.S. grew, so did profit margins. Closer inspection raises some questions, however. 

For example, the U.S. airline industry had a negative net profit margin in each of the years prior to the spike in concentration. While negative profits may be good for consumers, it would be a stretch to argue that long-run losses are good for competition as a whole. At some point one or more of the money-losing firms is going to pull the ripcord. Which raises the issue of causation.

Just looking at the figures from the WSJ article, one could argue that rather than concentration driving profit margins, profit margins are driving concentration. Indeed, textbook IO economics would indicate that in the face of losses, firms will exit until economic profit equals zero. Paraphrasing Alfred Marshall, “Which blade of the scissors is doing the cutting?”

While the concentration and profits story fits the antitrust populist narrative, other observations run contrary to Philippon’s conclusion. For example, price indexes show that U.S. and EU airline prices tracked each other fairly closely until 2014, when U.S. prices began dropping. Sure, airlines have instituted baggage fees, but the CPI includes taxes, fuel surcharges, airport, security, and baggage fees. It’s not obvious that U.S. consumers are worse off in the so-called era of rising concentration.

Regressing the U.S. airfare price index against Philippon’s concentration measure in the figure above (and controlling for general inflation) indicates that if U.S. concentration in 2015 had been the same as in 1995, U.S. airfares would be about 2.8% lower. That a roughly 1,250-point increase in HHI is associated with only a 2.8% increase in prices suggests that the increased concentration in U.S. airlines has led to no economically significant increase in consumer prices.
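The back-of-envelope counterfactual can be reproduced from the two numbers in the text (the regression itself is not shown here; the per-point coefficient below is implied from those figures, not separately estimated):

```python
import math

# Implied effect of airline concentration on fares, using only the figures
# cited above: a roughly 1,250-point HHI increase and a 2.8% price effect.
delta_hhi = 1250       # approximate rise in U.S. airline HHI, 1995-2015
price_effect = 0.028   # total price increase implied by the regression

# Implied semi-elasticity: log-price change per HHI point
beta = math.log(1 + price_effect) / delta_hhi

# Counterfactual: fares if 2015 concentration were rolled back to 1995 levels
counterfactual = math.exp(-beta * delta_hhi) - 1
print(round(beta, 7), round(counterfactual, 3))  # 2.21e-05 -0.027
```

The counterfactual works out to roughly −2.7%, i.e. the “about 2.8% lower” figure in the text: a tiny per-point effect spread over a very large change in concentration.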

Also, if consumers are truly worse off, one would expect to see a drop off or slow down in the use of air travel. An eyeballing of passenger data does not fit the populist narrative. Instead, we see airlines are carrying more passengers and consumers are paying lower prices on average.

While it’s true that low-cost airlines have shaken up air travel in the EU, the differences are not solely explained by differences in market concentration. For example, U.S. regulations prohibit foreign airlines from operating domestic flights while EU carriers compete against operators from other parts of Europe. While the WSJ’s figures tell an interesting story of concentration, prices, and profits, they do not provide a compelling case of anticompetitive conduct.

In the Federal Trade Commission’s recent hearings on competition policy in the 21st century, Georgetown professor Steven Salop urged greater scrutiny of vertical mergers. He argued that regulators should be skeptical of the claim that vertical integration tends to produce efficiencies that can enhance consumer welfare. In his presentation to the FTC, Professor Salop provided what he viewed as exceptions to this long-held theory.

Also, vertical merger efficiencies are not inevitable. I mean, vertical integration is common, but so is vertical non-integration. There is an awful lot of companies that are not vertically integrated. And we have lots of examples in which vertical integration has failed. Pepsi’s acquisition of KFC and Pizza Hut; you know, of course Coca-Cola has not merged with McDonald’s . . . .

Aside from the logical fallacy of cherry-picking examples (he also includes Betamax/VHS and the split-up of Alcoa and Arconic, as well as “integration and disintegration” “in cable”), Professor Salop misses the fact that PepsiCo’s 20-year venture into restaurants had very little to do with vertical integration.

Popular folklore says PepsiCo got into fast food because it was looking for a way to lock up sales of its fountain sodas. Soda is considered one of the highest margin products sold by restaurants. Vertical integration by a soda manufacturer into restaurants would eliminate double marginalization with the vertically integrated firm reaping most of the gains. The folklore fits nicely with economic theory. But, the facts may not fit the theory.
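The folklore’s double-marginalization logic can be sketched with a textbook linear-demand example (all numbers hypothetical):

```python
# Double marginalization with linear demand Q = a - p (numbers hypothetical).
a, c = 100.0, 20.0  # demand intercept, upstream marginal cost

# Integrated firm: max (p - c)(a - p)  =>  p = (a + c) / 2
p_int = (a + c) / 2
profit_int = (p_int - c) * (a - p_int)

# Separated firms: upstream sets wholesale price w, downstream sets retail p.
# Downstream: max (p - w)(a - p)            =>  p = (a + w) / 2
# Upstream, anticipating Q = (a - w) / 2:
#   max (w - c)(a - w) / 2                  =>  w = (a + c) / 2
w = (a + c) / 2
p_sep = (a + w) / 2
q_sep = a - p_sep
profit_sep = (w - c) * q_sep + (p_sep - w) * q_sep

print(p_int, p_sep)            # 60.0 80.0
print(profit_int, profit_sep)  # 1600.0 1200.0
```

Integration lowers the retail price (60 vs. 80) and raises combined profit (1,600 vs. 1,200), which is why the folklore is theoretically appealing even if, as discussed below, it may not describe PepsiCo’s actual motives.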

PepsiCo acquired Pizza Hut in 1977, Taco Bell in 1978, and Kentucky Fried Chicken in 1986. Prior to PepsiCo’s purchase, KFC had been owned by spirits company Heublein and conglomerate RJR Nabisco. This was the period of conglomerates—Pillsbury owned Burger King and General Foods owned Burger Chef (or maybe they were vertically integrated into bun distribution).

In the early 1990s Pepsi also bought California Pizza Kitchen, Chevys Fresh Mex, and D’Angelo Grilled Sandwiches.

In 1997, PepsiCo exited the restaurant business. It spun off Pizza Hut, Taco Bell, and KFC to Tricon Global Restaurants, which would later be renamed Yum! Brands. CPK and Chevy’s were purchased by private equity investors. D’Angelo was sold to Papa Gino’s Holdings, a restaurant chain. Since then, both Chevy’s and Papa Gino’s have filed for bankruptcy and Chevy’s has had some major shake-ups.

Professor Salop’s story focuses on the spin-off as an example of the failure of vertical mergers. But there is also a story of success. PepsiCo was in the restaurant business for two decades. More importantly, it continued its restaurant acquisitions over time. If PepsiCo’s restaurants strategy was a failure, it seems odd that the company would continue acquisitions into the early 1990s.

It’s easy, and largely correct, to conclude that PepsiCo’s restaurant acquisitions involved some degree of vertical integration, with upstream PepsiCo selling beverages to downstream restaurants. At the time PepsiCo bought Kentucky Fried Chicken, the New York Times reported KFC was Coke’s second-largest fountain account, behind McDonald’s.

But, what if vertical efficiencies were not the primary reason for the acquisitions?

Growth in U.S. carbonated beverage sales began slowing in the 1970s. It was also the “decade of the fast-food business.” From 1971 to 1977, Pizza Hut’s profits grew an average of 40% per year. Colonel Sanders sold his ownership in KFC for $2 million in 1964. Seven years later, the company was sold to Heublein for $280 million; PepsiCo paid $850 million in 1986.

Although KFC was Coke’s second-largest customer at the time, about 20% of KFC’s stores served Pepsi products. “PepsiCo stressed that the major reason for the acquisition was to expand its restaurant business, which last year accounted for 26 percent of its revenues of $8.1 billion,” according to the New York Times.

Viewed in this light, portfolio diversification goes a much longer way toward explaining PepsiCo’s restaurant purchases than hoped-for vertical efficiencies. In 1997, former PepsiCo chairman Roger Enrico explained to investment analysts that the company entered the restaurant business in the first place, “because it didn’t see future growth in its soft drink and snack” businesses and thought diversification into restaurants would provide expansion opportunities.

Prior to its Pizza Hut and Taco Bell acquisitions, PepsiCo owned companies as diverse as Frito-Lay, North American Van Lines, Wilson Sporting Goods, and Rheingold Brewery. This further supports a diversification theory rather than a vertical integration theory of PepsiCo’s restaurant purchases. 

The mid 1990s and early 2000s were tough times for restaurants. Consumers were demanding healthier foods and fast foods were considered the worst of the worst. This was when Kentucky Fried Chicken rebranded as KFC. Debt hangovers from the leveraged buyout era added financial pressure. Many restaurant groups were filing for bankruptcy and competition intensified among fast food companies. PepsiCo’s restaurants could not cover their cost of capital, and what was once a profitable diversification strategy became a financial albatross, so the restaurants were spun off.

Thus, it seems more reasonable to conclude PepsiCo’s exit from restaurants was driven more by market exigencies than by a failure to achieve vertical efficiencies. While the folklore of locking up distribution channels to eliminate double marginalization fits nicely with theory, the facts suggest a more mundane model of a firm scrambling to deliver shareholder wealth through diversification in the face of changing competition.

There’s always a reason to block a merger:

  • If a firm is too big, it will be because it is “a merger for monopoly”;
  • If the firms aren’t that big, it will be for “coordinated effects”;
  • If a firm is small, then it will be because it will “eliminate a maverick”.

It’s a version of Ronald Coase’s complaint about antitrust, as related by William Landes:

Ronald said he had gotten tired of antitrust because when the prices went up the judges said it was monopoly, when the prices went down, they said it was predatory pricing, and when they stayed the same, they said it was tacit collusion.

Of all the reasons to block a merger, the maverick notion is the weakest, and it’s well past time to ditch it.

The Horizontal Merger Guidelines define a “maverick” as “a firm that plays a disruptive role in the market to the benefit of customers.” According to the Guidelines, this includes firms:

  1. With a new technology or business model that threatens to disrupt market conditions;
  2. With an incentive to take the lead in price cutting or other competitive conduct or to resist increases in industry prices;
  3. That resist otherwise prevailing industry norms to cooperate on price setting or other terms of competition; and/or
  4. With an ability and incentive to expand production rapidly using available capacity to “discipline prices.”

There appears to be no formal model of maverick behavior that does not rely on some a priori assumption that the firm is a maverick.

For example, John Kwoka’s 1989 model assumes the maverick firm has different beliefs about how competing firms would react if the maverick varies its output or price. Louis Kaplow and Carl Shapiro developed a simple model in which the firm with the smallest market share may play the role of a maverick. They note, however, that this raises a question: in a model in which every firm faces the same cost and demand conditions, why would there be any variation in market shares? The common solution, according to Kaplow and Shapiro, is cost asymmetries among firms. If that is the case, then “maverick” activity is merely a function of cost, rather than some uniquely maverick-like behavior.
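The cost-asymmetry point can be made concrete with a toy one-shot Bertrand game. This is a minimal sketch with hypothetical numbers, not the formal Kaplow–Shapiro model: when all firms share the same marginal cost, price is competed down to cost and demand splits evenly; give one firm a lower cost, and it undercuts its rivals and takes the whole market. No “maverick” preference is needed, only the cost gap.

```python
# Illustrative only: "maverick-like" undercutting as a pure function of cost
# asymmetry in a one-shot Bertrand pricing game (hypothetical parameters).

def bertrand_outcome(costs, demand=300, tick=1):
    """Each firm prices at marginal cost unless it has a strict cost
    advantage, in which case it prices just below the second-lowest
    cost and serves all demand. Returns (price, per-firm sales)."""
    low = min(costs)
    rival_low = sorted(costs)[1]  # second-lowest marginal cost
    price = rival_low - tick if low < rival_low else low
    winners = [i for i, c in enumerate(costs) if c == low]
    shares = [demand / len(winners) if i in winners else 0
              for i in range(len(costs))]
    return price, shares

# Symmetric costs: price falls to cost, demand splits evenly.
print(bertrand_outcome([10, 10, 10, 10]))  # (10, [75.0, 75.0, 75.0, 75.0])

# One low-cost firm: it undercuts rivals and serves the whole market.
print(bertrand_outcome([4, 10, 10, 10]))   # (9, [300.0, 0, 0, 0])
```

Under these assumptions, the “disruptive” firm is simply the low-cost firm; relabeling it a maverick adds nothing to the prediction.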

The idea of the maverick firm requires that the firm play a critical role in the market. The maverick must be the firm that outflanks coordinated action or acts as a bulwark against unilateral action. By this loosey-goosey definition of maverick, a single firm can make the difference between the success and failure of anticompetitive behavior by its competitors. Thus, the ability and incentive to expand production rapidly is a necessary condition for a firm to be considered a maverick. For example, Kaplow and Shapiro explain:

Of particular note is the temptation of one relatively small firm to decline to participate in the collusive arrangement or secretly to cut prices to serve, say, 4% rather than 2% of the market. As long as price cuts by a small firm are less likely to be accurately observed or inferred by the other firms than are price cuts by larger firms, the presence of small firms that are capable of expanding significantly is especially disruptive to effective collusion.

A “maverick” firm’s ability to “discipline prices” depends crucially on its ability to expand output in the face of increased demand for its products. Similarly, the other non-maverick firms can be “disciplined” by the maverick only in the face of a credible threat of (1) a noticeable drop in market share that (2) leads to lower profits.
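The capacity point lends itself to simple arithmetic. The sketch below uses hypothetical numbers (a collusive price of 100, a maverick price of 95, total demand of 300 units, zero marginal cost, as in the experiment described above): a capacity-constrained price cutter barely dents the cartel’s profit, while one that can serve the whole market destroys it.

```python
# Hypothetical illustration: a price cut disciplines a collusive price
# only if the price cutter can expand output to serve enough demand.

def cartel_vs_maverick(collusive_price, maverick_price,
                       capacity, demand=300, cost=0):
    """Maverick sells min(capacity, demand) at its lower price; the
    cartel sells residual demand at the collusive price. Returns
    (cartel profit, maverick profit)."""
    maverick_sales = min(capacity, demand)
    residual = demand - maverick_sales
    cartel_profit = (collusive_price - cost) * residual
    maverick_profit = (maverick_price - cost) * maverick_sales
    return cartel_profit, maverick_profit

# Capacity-constrained maverick: the cartel keeps most of its profit.
print(cartel_vs_maverick(100, 95, capacity=30))   # (27000, 2850)

# Unconstrained maverick: residual demand, and cartel profit, fall to zero.
print(cartel_vs_maverick(100, 95, capacity=300))  # (0, 28500)
```

On these assumed numbers, a firm capped at 10% of demand is an irritant, not a discipline; only the ability to expand makes the threat credible.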

The government’s complaint against the proposed 2011 AT&T/T-Mobile merger alleges:

Relying on its disruptive pricing plans, its improved high-speed HSPA+ network, and a variety of other initiatives, T-Mobile aimed to grow its nationwide share to 17 percent within the next several years, and to substantially increase its presence in the enterprise and government market. AT&T’s acquisition of T-Mobile would eliminate the important price, quality, product variety, and innovation competition that an independent T-Mobile brings to the marketplace.

At the time of the proposed merger, T-Mobile accounted for 11% of U.S. wireless subscribers. By the end of 2016, its market share had hit 17%. About half of the increase can be attributed to its 2012 merger with MetroPCS. Over the same period, Verizon’s market share increased from 33% to 35% and AT&T’s market share remained stable at 32%. It appears that T-Mobile’s so-called maverick behavior did more to disrupt the market shares of smaller competitors Sprint and Leap (which was acquired by AT&T). Thus, it is not clear, ex post, that T-Mobile posed any threat to AT&T’s or Verizon’s market shares.

Geoffrey Manne raised some questions about the government’s maverick theory which also highlight a fundamental problem with the willy-nilly way in which firms are given the maverick label:

. . . it’s just not enough that a firm may be offering products at a lower price—there is nothing “maverick-y” about a firm that offers a different, less valuable product at a lower price. I have seen no evidence to suggest that T-Mobile offered the kind of pricing constraint on AT&T that would be required to make it out to be a maverick.

While T-Mobile had a reputation for lower mobile prices, in 2011, the firm was lagging behind Verizon, Sprint, and AT&T in the rollout of 4G technology. In other words, T-Mobile was offering an inferior product at a lower price. That’s not a maverick, that’s product differentiation with hedonic pricing.

More recently, in his opposition to the proposed T-Mobile/Sprint merger, Gene Kimmelman from Public Knowledge asserts that both firms are mavericks and their combination would cause their maverick magic to disappear:

Sprint, also, can be seen as a maverick. It has offered “unlimited” plans and simplified its rate plans, for instance, driving the rest of the industry forward to more consumer-friendly options. As Sprint CEO Marcelo Claure stated, “Sprint and T-Mobile have similar DNA and have eliminated confusing rate plans, converging into one rate plan: Unlimited.” Whether both or just one of the companies can be seen as a “maverick” today, in either case the newly combined company would simply have the same structural incentives as the larger carriers both Sprint and T-Mobile today work so hard to differentiate themselves from.

Kimmelman provides no mechanism by which the magic would go missing, but instead offers a version of an adversity-builds-character argument:

Allowing T-Mobile to grow to approximately the same size as AT&T, rather than forcing it to fight for customers, will eliminate the combined company’s need to disrupt the market and create an incentive to maintain the existing market structure.

For 30 years, the notion of the maverick firm has been a concept in search of a model. If the concept cannot be modeled decades after being introduced, maybe the maverick can’t be modeled.

What’s left are ad hoc assertions mixed with speculative projections in hopes that some sympathetic judge can be swayed. However, some judges seem to be more skeptical than sympathetic, as in H&R Block/TaxACT:

The parties have spilled substantial ink debating TaxACT’s maverick status. The arguments over whether TaxACT is or is not a “maverick” — or whether perhaps it once was a maverick but has not been a maverick recently — have not been particularly helpful to the Court’s analysis. The government even put forward as supposed evidence a TaxACT promotional press release in which the company described itself as a “maverick.” This type of evidence amounts to little more than a game of semantic gotcha. Here, the record is clear that while TaxACT has been an aggressive and innovative competitor in the market, as defendants admit, TaxACT is not unique in this role. Other competitors, including HRB and Intuit, have also been aggressive and innovative in forcing companies in the DDIY market to respond to new product offerings to the benefit of consumers.

It’s time to send the maverick out of town and into the sunset.