
[The following is adapted from a piece in the Economic Forces newsletter, which you can subscribe to on Substack.]

Everyone is worried about growing concentration in U.S. markets. President Joe Biden’s July 2021 executive order on competition begins with the assertion that “excessive market concentration threatens basic economic liberties, democratic accountability, and the welfare of workers, farmers, small businesses, startups, and consumers.” No word on the threat of concentration to baby puppies, but the takeaway is clear. Concentration is everywhere, and it’s bad.

On the academic side, Ufuk Akcigit and Sina Ates have an interesting paper on “ten facts”—worrisome facts, in my reading—about business dynamism. Fact No. 1: “Market concentration has risen.” Can’t get higher than No. 1, last time I checked.

Unlike most people commenting on concentration, I don't see any reason to treat high or rising concentration itself as a bad thing (although it may be a sign of problems). One key takeaway from industrial organization is that high concentration tells us nothing about levels of competition and so has no direct normative implication. I bring this up all the time (see 1, 2, 3, 4).

So without worrying about whether rising concentration is a good or bad thing, this post asks, “is rising concentration a thing?” Is there any there there? Where is it rising? For what measures? Just the facts, ma’am.

How to Measure Concentration

I will focus here primarily on product-market concentration and save labor-market concentration for a later post. The following is a brief literature review. I do not cover every paper. If I missed an important one, tell me in the comments.

There are two steps to calculating concentration. First, define the market. In empirical work, a market usually includes the product sold or the input bought (e.g., apples) and a relevant geographic region (United States). With those two bits of information decided, we have a “market” (apples sold in the United States).

Once we have defined the relevant market, we need a measure of concentration within that market. The most straightforward measure is the concentration ratio of some number of firms. If you see "CR4," it refers to the percentage of total sales in the market that goes to the four largest firms. One problem with this measure is that CR4 ignores everything about the fifth-largest and smaller firms.

The other option used to quantify concentration is the Herfindahl-Hirschman index (HHI), the sum of squared market shares. It runs between 0 and 10,000 (or 0 and 1, if normalized), with 10,000 meaning all of the sales go to one firm and 0 being the limit as many firms each have smaller and smaller shares. The benefit of the HHI is that it uses information on the whole distribution of firms, not just the top few.[1]
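As a concrete illustration, here is a minimal sketch of both measures in Python (the market shares are made up for the example):

```python
def cr(shares, k=4):
    """Concentration ratio: combined share of the k largest firms.
    `shares` are market shares in percent."""
    return sum(sorted(shares, reverse=True)[:k])

def hhi(shares):
    """Herfindahl-Hirschman index: sum of squared percentage shares.
    Near 0 for an atomistic market, 10,000 for a monopoly."""
    return sum(s ** 2 for s in shares)

# Hypothetical market shares (percent), summing to 100
shares = [30, 25, 15, 10, 10, 5, 5]
print(cr(shares))   # CR4: 30 + 25 + 15 + 10 = 80
print(hhi(shares))  # 900 + 625 + 225 + 100 + 100 + 25 + 25 = 2000
```

Note how the two measures can diverge: CR4 is unchanged if the bottom three firms merge, while the HHI would rise.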

The Biggest Companies

With those preliminaries out of the way, let’s start with concentration among the biggest firms over the longest time-period and work our way to more granular data.

When people think of “corporate concentration,” they think of the giant companies like Standard Oil, Ford, Walmart, and Google. People maybe even picture a guy with a monocle, that sort of thing.

How much of total U.S. sales go to the biggest firms? How has that changed over time? These questions are the focus of Spencer Y. Kwon, Yueran Ma, and Kaspar Zimmermann’s (2022) “100 Years of Rising Corporate Concentration.”

Spoiler alert: they find rising corporate concentration. But what does that mean?

They look at the concentration of assets and sales among the largest 1% and 0.1% of businesses. For sales, due to data limitations, they use net income (excluding firms with negative net income) for the first half of the sample and receipts (sales) for the second half.

In 1920, the top 1% of firms had about 60% of total sales. Now, that number is above 80%. For the top 0.1%, the number rose from about 35% to 65%. Asset concentration (blue below) is even more striking, rising to almost 100% for the top 1% of firms.

Kwon, Ma, and Zimmermann (2022)

Is this just mechanical from the definitions? That was my first concern. Suppose a bunch of small firms enter that have no effect on the economy. Everyone starts a Substack that makes no money. 🤔 This mechanically bumps big firms in the top 1.1% into the top 1% and raises the share. The authors have thought about this for longer than my two minutes of reading, so they ran a simple check.

The simple comparison is to limit the economy to just the top 10% of firms. What share goes to the top 1%? In that world, when small firms enter, there is still a bump from the top 1.1% to 1%, but there is also a bump from 10.1% to 10%. Both the numerator and denominator of the ratio are mechanically increasing. That doesn’t perfectly solve the issue, since the bump to the 1.1% firm is, by definition, bigger than the bump from the 10.1% firm, but it’s a quick comparison. Still, we see a similar rise in the top 1%.
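To see the mechanical effect, here is a quick simulation (my own illustration, not the authors'; the Pareto tail parameter and firm counts are arbitrary):

```python
import random

def top_share(sales, frac):
    """Share of total sales going to the largest `frac` of firms."""
    s = sorted(sales, reverse=True)
    k = max(1, int(len(s) * frac))
    return sum(s[:k]) / sum(s)

random.seed(0)
# A skewed economy: 1,000 incumbent firms with heavy-tailed sales
firms = [random.paretovariate(1.2) for _ in range(1000)]
before = top_share(firms, 0.01)

# 1,000 tiny entrants with negligible sales (everyone starts a Substack)
firms += [1e-3] * 1000
after = top_share(firms, 0.01)

# The top-1% cutoff now admits twice as many firms (20 instead of 10),
# while total sales barely move, so the measured share rises mechanically.
assert after > before
print(round(before, 3), round(after, 3))
```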

Big companies are getting bigger, even relatively.

I’m not sure how much weight to put on this paper for thinking about concentration trends. It’s an interesting paper, and that’s why I started with it. But I’m very hesitant to think of “all goods and services in the United States” as a relevant market for any policy question, especially antitrust-type questions, which is where we see the most talk about concentration. But if you’re interested in corporate concentration influencing politics, these numbers may be super relevant.

At the industry level, which is closer to an antitrust market but still not one, they find similar trends. The paper’s website (yes, the paper has a website. Your papers don’t?) has a simple display of the industry-level trends. They match the aggregate change, but the timing differs.

Industry-Level Concentration Trends, Public Firms

Moving down from big to small, we can start asking about publicly traded firms. These tend to be larger firms, but the category doesn’t capture all firms and is biased, as I’ve pointed out before.

Grullon, Larkin, and Michaely (2019) look at the average HHI at the 3-digit NAICS level (for example, oil and gas is “a market”). Below is the plot of the (sales-weighted) average HHI for publicly traded firms. It dropped in the 80s and early 90s, rose rapidly in the late 90s and early 2000s, and has slowly risen since. I’d say “concentration is rising” is the takeaway.

Average publicly-traded HHI (3-digit NAICS) from Grullon, Larkin, and Michaely (2019)

The average hides how the distribution has changed. For antitrust, we may care whether a few industries have seen a large increase in concentration or all industries have seen a small increase.

The figure below plots the distribution of changes in the HHI between 1997 and 2012. Many industries saw a large increase (>40%) in the HHI. We get a similar picture if we look at the share of sales going to the top 4 firms.

Distribution of changes in publicly traded HHI (3-digit NAICS) between 1997-2012 from Grullon, Larkin, and Michaely (2019)

One issue with NAICS is that it was designed to lump firms together from a producer’s perspective, not the consumer’s perspective. We will say more about that below.

Another issue with Compustat is that we only observe industry at the firm level, not the establishment level. For example, every 3M office or plant gets labeled as "Miscellaneous Manufactured Commodities," which doesn't separate the plants that make tape (like the one in my hometown) from those that make surgical gear.

But firms are doing business across wider and wider ranges of industries. That may not matter if you're worried about political corruption from concentration. But if you're thinking about markets, it seems problematic that, in Compustat, all of Amazon's web services (cloud servers) revenue gets lumped into NAICS 454 "Nonstore Retailers," since that's Amazon's firm-level designation.

Hoberg and Phillips (2022) try to account for this increasing “scope” of businesses. They make an adjustment to allow a firm to exist in multiple industries. After making this correction, they find a falling average HHI.

Hoberg and Phillips (2021)

Industry-Level Concentration Trends, All Firms

Why stick to just publicly traded firms? That could be especially problematic since we know that the set of public firms differs from the set of private firms, and the differences have changed over time. Public firms compete with private firms and so are in the same market for many questions.

And we have data on public and private firms. Well, I don’t. I’m stuck with Compustat data. But big names have the data.

Autor, Dorn, Katz, Patterson, and Van Reenen (2020), in their famous “superstar firms” paper, have U.S. Census panel data at the firm and establishment level, covering six major sectors: manufacturing, retail trade, wholesale trade, services, utilities and transportation, and finance. They focus on the share of the top 4 (CR4) or the top 20 (CR20) firms, both in terms of sales and employment. Every series, besides employment in manufacturing, has seen an increase. In retail, there has been nearly a doubling of the sales share to the top 4 firms.

Autor, Dorn, Katz, Patterson, and Van Reenen (2020)

I guess that settles it. Three major papers show the same trend. It’s settled… If only economic trends were so simple.

What About Narrower Product Markets?

For antitrust cases, we define markets slightly differently. We don’t use NAICS codes, since they are designed to lump together similar producers, not similar products. We also don’t use the six “major industries” in the Census, since those are also too large to be meaningful for antitrust. Instead, the product level is much smaller.

Luckily, Benkard, Yurukoglu, and Zhang (2021) construct concentration measures that are intended to capture consumption-based product markets. They have respondent-level data from the annual “Survey of the American Consumer” available from MRI Simmons, a market-research firm. The survey asks specific questions about which brands consumers buy.

They sort products into 457 product-market categories, separated across 29 locations. Product "markets" are then aggregated into "sectors." Another interesting feature is that they know the ownership of different products, even if the brand name is different. Ownership is what matters for antitrust.

They find falling concentration at the market level (the narrowest product), both at the local and the national level. At the sector level (which aggregates markets), there is a slight increase.

Benkard, Yurukoglu, and Zhang (2021)

If you focus on industries with an HHI above 2,500, the level considered "highly concentrated" in the U.S. Horizontal Merger Guidelines, the share of "highly concentrated" industries fell from 48% in 1994 to 39% in 2019. I'm not sure how seriously to take this threshold, since the merger guidelines take a different approach to defining markets. Overall, the authors say, "we find no evidence that market power (sic) has been getting worse over time in any broad-based way."

Is the United States a Market?

Markets are local

Benkard, Yurukoglu, and Zhang make an important point about location. In what situations is the United States the appropriate geographic region? The U.S. housing market is not a meaningful market. If my job and family are in Minnesota, I’m not considering buying a house in California. Those are different markets.

While the first few papers above focused on concentration in the United States as a whole or within U.S. companies, is that really the appropriate market? Maybe markets are much more localized, and the trends could be different.

Along comes Rossi-Hansberg, Sarte, and Trachter (2021) with a paper titled “Diverging Trends in National and Local Concentration.” In that paper, they argue that there are, you guessed it, diverging trends in national and local concentration. If we look at concentration at different geographic levels, we get a different story. Their main figure shows that, as we move to smaller geographic regions, concentration goes from rising over time to falling over time.

Figure 1 from Rossi-Hansberg, Sarte, and Trachter (2020)

How is it possible to have such a different story depending on area?

Imagine a world where each town has its own department store. At the national level, concentration is low, but each town is highly concentrated. Now Walmart enters the picture and sets up shop in 10,000 towns. That increases national concentration while reducing local concentration: each town goes from one store to two. That sort of dynamic seems plausible, and the authors spend a lot of time discussing Walmart.
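We can put illustrative numbers on this thought experiment (my own toy example, using percentage-point HHIs and assuming Walmart matches each local store's sales):

```python
def hhi(shares_pct):
    """HHI from market shares in percentage points."""
    return sum(s ** 2 for s in shares_pct)

n_towns = 10_000

# Before: each town has one department store with sales of 1
local_before = hhi([100.0])                          # monopoly: 10,000
national_before = hhi([100.0 / n_towns] * n_towns)   # ~1, nearly atomistic

# After: Walmart opens in every town with matching sales of 1
local_after = hhi([50.0, 50.0])                      # duopoly: 5,000
national_shares = [100.0 / (2 * n_towns)] * n_towns  # local stores
national_shares.append(50.0)                         # Walmart: half of all sales
national_after = hhi(national_shares)                # ~2,500

print(national_before, national_after)  # national concentration rises
print(local_before, local_after)        # local concentration falls
```

One firm's entry into every local market sends the national HHI from roughly 1 to roughly 2,500 while cutting every local HHI in half.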

The paper was really important, because it pushed people to think more carefully about the type of concentration that they wanted to study. Just because data tends to be at the national level doesn’t mean that’s appropriate.

As with all these papers, however, the data source matters. There are a few concerns with the "National Establishment Time Series" (NETS) data used, as outlined in Crane and Decker (2020). Much of the data is imputed, meaning it was originally missing and then filled in with statistical techniques. Almost every Walmart store has exactly the median sales-to-worker ratio, which suggests the data starts with the number of workers and imputes the sales data from there. That's fine if you are interested in worker concentration, but this paper is about sales.

Instead of relying on NETS data, Smith and Ocampo (2022) have Census data on product-level revenue for all U.S. retail stores between 1992 and 2012. The downside is that it is only retail, but that’s an important sector and can help us make sense of the “Walmart enters town” concentration story.

Unlike Rossi-Hansberg, Sarte, and Trachter, Smith and Ocampo find rising concentration at both the local and national levels, though the magnitude depends on the exact specification: they estimate changes in local concentration between -1.5 and 12.6 percentage points. Regardless, the –17 percentage points of Rossi-Hansberg, Sarte, and Trachter is well outside their range. To me, that suggests we should be careful with the "declining local concentration" story.

Smith and Ocampo (2022).

Ultimately, for local stories, data is the limitation. Take all of the data issues at the aggregate level and then try to drill down to the ZIP code or city level. It's tough. The data just doesn't exist in general, outside of Census data for a few sectors. The other option is to dig into a particular industry: Miller, Osborne, Sheu, and Sileo (2022) study the cement industry. 😱 (They find rising concentration.)

Markets are global

Instead of going more local, what if we go the other way? What makes markets unique in 2022 vs. 1980 is not that they are local but that they are global. Who cares if U.S. manufacturing is more concentrated if U.S. firms now compete in a global market?

The standard approach (used in basically all the papers above) computes market shares based on where the good was manufactured and doesn’t look at where the goods end up. (Compustat data is more of a mess because it includes lots of revenue from foreign establishments of U.S. firms.)

What happens when we look at where goods are ultimately sold? Again, that's relevant for antitrust. Amiti and Heise (2021) augment the usual Census of Manufacturers with transaction-level import data from the Census Bureau's Longitudinal Firm Trade Transactions Database (LFTTD), which lets them observe U.S. customs forms. Stripping out sales that leave the country gives an "export-adjusted" concentration measure.

They then do something similar for imports to come up with “market concentration.” That is their measure of concentration for all firms selling in the U.S., irrespective of where the firm is located. That line is completely flat from 1992-2012.

Again, this is only manufacturing, but it is a striking example of how we need to be careful with our measures of concentration. This seems like a very important correction of concentration for most questions and for many industries. Tech is clearly a global market.

Conclusion

If I step back from all of these results, I think it is safe to say that concentration is rising by most measures. However, there are lots of caveats. In a sector like manufacturing, the relevant global market is not more concentrated. The Rossi-Hansberg, Sarte, and Trachter paper suggests, despite data issues, local concentration could be falling. Again, we need to be careful.

Alex Tabarrok says trust literatures, not papers. What does that imply here?

Take the last paper by Amiti and Heise. Yes, it covers only manufacturing, but in the one sector where we have the import/export correction, the concentration results flip. That leaves me unsure of what is going on.


[1] There’s often a third step. If we are interested in what is going on in the overall economy, we need to somehow average across different markets. There is sometimes debate about how to average a bunch of HHIs. Let’s not worry too much about that for purposes of this post. Generally, if you’re looking at the concentration of sales, the industries are weighted by sales.
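A quick sketch of that third step (the industry sales and HHI figures are invented for illustration):

```python
def weighted_avg_hhi(industries):
    """Sales-weighted average of industry HHIs.
    `industries` is a list of (total_sales, hhi) pairs."""
    total = sum(sales for sales, _ in industries)
    return sum(sales * h for sales, h in industries) / total

# Hypothetical: a big, unconcentrated industry and a small, concentrated one
industries = [(900, 500), (100, 5000)]
print(weighted_avg_hhi(industries))  # (900*500 + 100*5000) / 1000 = 950
```

Because the weights are sales, the big diffuse industry dominates the average; an unweighted mean of the same two HHIs would be 2,750.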

The Jan. 18 Request for Information on Merger Enforcement (RFI)—issued jointly by the Federal Trade Commission (FTC) and the U.S. Justice Department (DOJ)—sets forth 91 sets of questions (subsumed under 15 headings) that provide ample opportunity for public comment on a large range of topics.

Before chasing down individual analytic rabbit holes related to specific questions, it would be useful to reflect on the “big picture” policy concerns raised by this exercise (but not hinted at in the questions). Viewed from a broad policy perspective, the RFI initiative risks undermining the general respect that courts have accorded merger guidelines over the years, as well as disincentivizing economically beneficial business consolidations.

Policy concerns that flow from various features of the RFI, which could undermine effective merger enforcement, are highlighted below. These concerns counsel against producing overly detailed guidelines that adopt a merger-skeptical orientation.

The RFI Reflects the False Premise that Competition is Declining in the United States

The FTC press release that accompanied the RFI’s release made clear that a supposed weakening of competition under the current merger-guidelines regime is a key driver of the FTC and DOJ interest in new guidelines:

Today, the Federal Trade Commission (FTC) and the Justice Department’s Antitrust Division launched a joint public inquiry aimed at strengthening enforcement against illegal mergers. Recent evidence indicates that many industries across the economy are becoming more concentrated and less competitive – imperiling choice and economic gains for consumers, workers, entrepreneurs, and small businesses.

This premise is not supported by the facts. Based on a detailed literature review, Chapter 6 of the 2020 Economic Report of the President concluded that “the argument that the U.S. economy is suffering from insufficient competition is built on a weak empirical foundation and questionable assumptions.” More specifically, the 2020 Economic Report explained:

Research purporting to document a pattern of increasing concentration and increasing markups uses data on segments of the economy that are far too broad to offer any insights about competition, either in specific markets or in the economy at large. Where data do accurately identify issues of concentration or supercompetitive profits, additional analysis is needed to distinguish between alternative explanations, rather than equating these market indicators with harmful market power.

Soon-to-be-published quantitative research by Robert Kulick of NERA Economic Consulting and the American Enterprise Institute, presented at the Jan. 26 Mercatus Antitrust Forum, is consistent with the 2020 Economic Report's findings. Kulick stressed that there was no general trend toward increasing industrial concentration in the U.S. economy from 2002 to 2017. In particular, industrial concentration has been declining since 2007; the Herfindahl–Hirschman index (HHI) for manufacturing has declined significantly since 2002; and the economywide four-firm concentration ratio (CR4) in 2017 was approximately the same as in 2002.

Even in industries where concentration may have risen, “the evidence does not support claims that concentration is persistent or harmful.” In that regard, Kulick’s research finds that higher-concentration industries tend to become less concentrated, while lower-concentration industries tend to become more concentrated over time; increases in industrial concentration are associated with economic growth and job creation, particularly for high-growth industries; and rising industrial concentration may be driven by increasing market competition.

In short, the strongest justification for issuing new merger guidelines rests on a false premise: an alleged decline in competition within the United States. Given this reality, the adoption of revised guidelines designed to "ratchet up" merger enforcement would appear highly questionable.

The RFI Strikes a Merger-Skeptical Tone Out of Touch with Modern Mainstream Antitrust Scholarship

The overall tone of the RFI reflects a skeptical view of the potential benefits of mergers. It ignores overarching beneficial aspects of mergers, which include reallocating scarce resources to higher-valued uses (through the market for corporate control) and realizing standard efficiencies of various sorts (including cost-based efficiencies and incentive effects, such as the elimination of double marginalization through vertical integration). Mergers also generate benefits by bringing together complementary assets and by generating synergies of various sorts, including the promotion of innovation and scaling up the fruits of research and development. (See here, for example.)

What’s more, as the Organisation for Economic Co-operation and Development (OECD) has explained, “[e]vidence suggests that vertical mergers are generally pro-competitive, as they are driven by efficiency-enhancing motives such as improving vertical co-ordination and realizing economies of scope.”

Given the manifold benefits of mergers in general, the negative and merger-skeptical tone of the RFI is regrettable. It not only ignores sound economics, but it is at odds with recent pronouncements by the FTC and DOJ. Notably, the 2010 DOJ-FTC Horizontal Merger Guidelines (issued by Obama administration enforcers) struck a neutral tone. Those guidelines recognized the duty to challenge anticompetitive mergers while noting the public interest in avoiding unnecessary interference with non-anticompetitive mergers (“[t]he Agencies seek to identify and challenge competitively harmful mergers while avoiding unnecessary interference with mergers that are either competitively beneficial or neutral”). The same neutral approach is found in the 2020 DOJ-FTC Vertical Merger Guidelines (“the Agencies use a consistent set of facts and assumptions to evaluate both the potential competitive harm from a vertical merger and the potential benefits to competition”).

The RFI, however, expresses no concern about unnecessary government interference, and strongly emphasizes the potential shortcomings of the existing guidelines in questioning whether they “adequately equip enforcers to identify and proscribe unlawful, anticompetitive mergers.” Merger-skepticism is also reflected throughout the RFI’s 91 sets of questions. A close reading reveals that they are generally phrased in ways that implicitly assume competitive problems or reject potential merger justifications.

For example, the questions addressing efficiencies, under RFI heading 14, cast efficiencies in a generally negative light. Thus, the RFI asks whether "the [existing] guidelines' approach to efficiencies [is] consistent with the prevailing legal framework as enacted by Congress and interpreted by the courts," citing the statement in FTC v. Procter & Gamble (1967) that "[p]ossible economies cannot be used as a defense to illegality."

The view that antitrust disfavors mergers that enhance efficiencies (the “efficiencies offense”) has been roundly rejected by mainstream antitrust scholarship (see, for example, here, here, and here). It may be assumed that today’s Supreme Court (which has deemed consumer welfare to be the lodestone of antitrust enforcement since Reiter v. Sonotone (1979)) would give short shrift to an “efficiencies offense” justification for a merger challenge.

Another efficiencies-related question, under RFI heading 14.d, may in application fly in the face of sound market-oriented economics: “Where a merger is expected to generate cost savings via the elimination of ‘excess’ or ‘redundant’ capacity or workers, should the guidelines treat these savings as cognizable ‘efficiencies’?”

Consider a merger that generates synergies and thereby expands and/or raises the quality of goods and services produced with reduced capacity and fewer workers. This merger would allow these resources to be allocated to higher-valued uses elsewhere in the economy, yielding greater economic surplus for consumers and producers. But there is the risk that such a merger could be viewed unfavorably under new merger guidelines that were revised in light of this question. (Although heading 14.d includes a separate question regarding capacity reductions that have the potential to reduce supply resilience or product or service quality, it is not stated that this provision should be viewed as a limitation on the first sentence.)

The RFI’s discussion of topics other than efficiencies similarly sends the message that existing guidelines are too “pro-merger.” Thus, for example, under RFI heading 5 (“presumptions”), one finds the rhetorical question: “[d]o the [existing] guidelines adequately identify mergers that are presumptively unlawful under controlling case law?”

This question answers itself, by citing to the Philadelphia National Bank (1963) statement that “[w]ithout attempting to specify the smallest market share which would still be considered to threaten undue concentration, we are clear that 30% presents that threat.” This statement predates all of the merger guidelines and is out of step with the modern economic analysis of mergers, which the existing guidelines embody. It would, if taken seriously, threaten a huge number of proposed mergers that, until now, have not been subject to second-request review by the DOJ and FTC. As Judge Douglas Ginsburg and former Commissioner Joshua Wright have explained:

The practical effect of the PNB presumption is to shift the burden of proof from the plaintiff, where it rightfully resides, to the defendant, without requiring evidence – other than market shares – that the proposed merger is likely to harm competition. . . . The presumption ought to go the way of the agencies’ policy decision to drop reliance upon the discredited antitrust theories approved by the courts in such cases as Brown Shoe, Von’s Grocery, and Utah Pie. Otherwise, the agencies will ultimately have to deal with the tension between taking advantage of a favorable presumption in litigation and exerting a reformative influence on the direction of merger law.

By inviting support for PNB-style thinking, RFI heading 5’s lead question effectively rejects the economic effects-based analysis that has been central to agency merger analysis for decades. Guideline revisions that downplay effects in favor of mere concentration would likely be viewed askance by reviewing courts (and almost certainly would be rejected by the Supreme Court, as currently constituted, if the occasion arose).

These particularly striking examples are illustrative of the questioning tone regarding existing merger analysis that permeates the RFI.

New Merger Guidelines, if Issued, Should Not Incorporate the Multiplicity of Issues Embodied in the RFI

The 91 sets of questions in the RFI read, in large part, like a compendium of theoretical harms to the working of markets that might be associated with mergers. While these questions may be of general academic interest, and may shed some light on particular merger investigations, most of them should not be incorporated into guidelines.

As Justice Stephen Breyer has pointed out, antitrust is a legal regime that must account for administrative practicalities. Then-Judge Breyer described the nature of the problem in his 1983 Barry Wright opinion (affirming the dismissal of a Sherman Act Section 2 complaint based on “unreasonably low” prices):

[W]hile technical economic discussion helps to inform the antitrust laws, those laws cannot precisely replicate the economists’ (sometimes conflicting) views. For, unlike economics, law is an administrative system the effects of which depend upon the content of rules and precedents only as they are applied by judges and juries in courts and by lawyers advising their clients. Rules that seek to embody every economic complexity and qualification may well, through the vagaries of administration, prove counter-productive, undercutting the very economic ends they seek to serve.

It follows that any effort to include every theoretical merger-related concern in new merger guidelines would undercut their (presumed) overarching purpose, which is providing useful guidance to the private sector. All-inclusive “guidelines” in reality provide no guidance at all. Faced with a laundry list of possible problems that might prompt the FTC or DOJ to oppose a merger, private parties would face enormous uncertainty, which could deter them from proposing a large number of procompetitive, welfare-enhancing or welfare-neutral consolidations. This would “undercut the very economic ends” of promoting competition that is served by Section 7 enforcement.

Furthermore, all-inclusive merger guidelines could be seen by judges as undermining the rule of law (see here, for example). If DOJ and FTC were able to “pick and choose” at will from an enormously wide array of considerations to justify opposing a proposed merger, they could be seen as engaged in arbitrary enforcement, rather than in a careful weighing of evidence aimed at condemning only anticompetitive transactions. This would be at odds with the promise of fair and dispassionate enforcement found in the 2010 Horizontal Merger Guidelines, namely, to “seek to identify and challenge competitively harmful mergers while avoiding unnecessary interference with mergers that are either competitively beneficial or neutral.”

Up until now, federal courts have virtually always implicitly deferred to (and not questioned) the application of merger-guideline principles by the DOJ and FTC. The agencies have won or lost cases based on courts’ weighing of particular factual and economic evidence, not on whether guideline principles should have been applied by the enforcers.

One would expect courts to react very differently, however, to cases brought in light of ridiculously detailed “guidelines” that did not provide true guidance (particularly if they were heavy on competitive harm possibilities and discounted efficiencies). The agencies’ selective reliance on particular anticompetitive theories could be seen as exercises in arbitrary “pre-cooked” condemnations, not dispassionate enforcement. As such, the courts would tend to be far more inclined to reject (or accord far less deference to) the new guidelines in evaluating agency merger challenges. Even transactions that would have been particularly compelling candidates for condemnation under prior guidelines could be harder to challenge successfully, due to the taint of the new guidelines.

In short, the adoption of highly detailed guidelines that emphasize numerous theories of harm would likely undermine the effectiveness of DOJ and FTC merger enforcement, the precise opposite of what the agencies would have intended.

New Merger Guidelines, if Issued, Should Avoid Relying on Outdated Case Law and Novel Section 7 Theories, and Should Give Due Credit to Economic Efficiencies

The DOJ and FTC could, of course, acknowledge the problem of administrability and issue more straightforward guideline revisions, of comparable length and detail to prior guidelines. If they choose to do so, they would be well-advised to eschew relying on dated precedents and novel Section 7 theories. They should also give due credit to efficiencies. Seemingly biased guidelines would undermine merger enforcement, not strengthen it.

As discussed above, the RFI’s implicitly favorable references to Philadelphia National Bank and Procter & Gamble are at odds with contemporary economics-based antitrust thinking, which has been accepted by the federal courts. The favorable treatment of those antediluvian holdings, and Brown Shoe Co. v. United States (1962) (another horribly dated case cited multiple times in the RFI), would do much to discredit new guidelines.

In that regard, the suggestion in RFI heading 1 that existing merger guidelines may not “faithfully track the statutory text, legislative history, and established case law around merger enforcement” invokes the Brown Shoe and PNB concerns with a “trend toward concentration” and “the danger of subverting congressional intent by permitting a too-broad economic investigation.”

New guidelines that focus on (or even give lip service to) a “trend” toward concentration and eschew overly detailed economic analyses (as opposed, perhaps, to purely concentration-based negative rules of thumb?) would predictably come in for judicial scorn as economically unfounded. Such references would do as much (if not more) to ensure judicial rejection of enforcement-agency guidelines as endless lists of theoretically possible sources of competitive harm, discussed previously.

Of particular concern are those references that implicitly reject the need to consider efficiencies, which is key to modern enlightened merger evaluations. It is ludicrous to believe that a majority of the current Supreme Court would have a merger-analysis epiphany and decide that the RFI’s preferred interventionist reading of Section 7 statutory language and legislative history trumps decades of economically centered consumer-welfare scholarship and agency guidelines.

Herbert Hovenkamp, author of the leading American antitrust treatise and a scholar who has been cited countless times by the Supreme Court, recently put it well (in an article coauthored with Carl Shapiro):

When the FTC investigates vertical and horizontal mergers will it now take the position that efficiencies are irrelevant, even if they are proven? If so, the FTC will face embarrassing losses in court.

Reviewing courts would no doubt take heed of this statement in assessing any future merger guidelines that rely on dated and discredited cases or that minimize efficiencies.

New Guidelines, if Issued, Should Give Due Credit to Efficiencies

Heading 14 of the RFI—listing seven sets of questions that deal with efficiencies—is in line with the document’s implicitly negative portrayal of mergers. The heading begins inauspiciously, with a question that cites Procter & Gamble in suggesting that the current guidelines’ approach to efficiencies is “[in]consistent with the prevailing legal framework as enacted by Congress and interpreted by the courts.” As explained above, such an anti-efficiencies reference would be viewed askance by most, if not all, reviewing judges.

Other queries in heading 14 also view efficiencies as problematic. They suggest that efficiency claims should be treated negatively because such claims are not always realized after the fact. But merger activity is a private-sector search process, and the inability to predict ex post effects with perfect accuracy is an inevitable feature of market activity. Using such a natural aspect of markets as an excuse to ignore efficiencies would prevent many economically desirable consolidations from being achieved.

Furthermore, the suggestion under heading 14 that parties should have to show with certainty that cognizable efficiencies could not have been achieved through alternative means asks the impossible. Theoreticians may be able to dream up alternative means by which efficiencies might have been achieved (say, through convoluted contracts), but such constructs may not be practical in real-world settings. Requiring businesses to follow dubious theoretical approaches to achieve legitimate business ends, rather than allowing them to enter into arrangements they favor that appear efficient, would manifest inappropriate government interference in markets. (It would be just another example of the “pretense of knowledge” that Friedrich Hayek brilliantly described in his 1974 Nobel Prize lecture.)

Other questions under heading 14 raise concerns about the lack of discussion of possible “inefficiencies” in current guidelines, and speculate about possible losses of “product or service quality” due to otherwise efficient reductions in physical capacity and employment. Such theoretical musings offer little guidance to the private sector, and further cast in a negative light potential real resource savings.

Rather than incorporate the unhelpful theoretical efficiencies critiques under heading 14, the agencies should consider a more constructive approach to clarifying the evaluation of efficiencies in new guidelines. Such a clarification could be based on Commissioner Christine Wilson’s discussion of merger efficiencies in recent writings (see, for example, here and here). Wilson has appropriately called for the symmetric treatment of both the potential harms and benefits arising from mergers, explaining that “the agencies readily credit harms but consistently approach potential benefits with extreme skepticism.”

She and Joshua Wright have also explained (see here, here, and here) that overly narrow product-market definitions may sometimes preclude consideration of substantial “out-of-market” efficiencies that arise from certain mergers. The consideration of offsetting “out-of-market” efficiencies that greatly outweigh competitive harms might warrant inclusion in new guidelines.

The FTC and DOJ could be heading for a merger-enforcement train wreck if they adopt new guidelines that incorporate the merger-skeptical tone and excruciating level of detail found in the RFI. This approach would yield a lengthy and uninformative laundry list of potential competitive problems that would allow the agencies to selectively pick competitive harm “stories” best adapted to oppose particular mergers, in tension with the rule of law.

Far from “strengthening” merger enforcement, such new guidelines would lead to economically harmful business uncertainty and would severely undermine judicial respect for the federal merger-enforcement process. The end result would be a “lose-lose” for businesses, for enforcers, and for the American economy.

Conclusion

If the agencies enact new guidelines, they should be relatively short and straightforward, designed to give private parties the clearest possible picture of general agency enforcement intentions. In particular, new guidelines should:

  1. Eschew references to dated and discredited case law;
  2. Adopt a neutral tone that acknowledges the beneficial aspects of mergers;
  3. Recognize the duty to challenge anticompetitive mergers, while at the same time noting the public interest in avoiding unnecessary interference with non-anticompetitive mergers (consistent with the 2010 Horizontal Merger Guidelines); and
  4. Acknowledge the importance of efficiencies, treating them symmetrically with competitive harm and according appropriate weight to countervailing out-of-market efficiencies (a distinct improvement over existing enforcement policy).

Merger enforcement should continue to be based on fact-based case-specific evaluations, informed by sound economics. Populist nostrums that treat mergers with suspicion and that ignore their beneficial aspects should be rejected. Such ideas are at odds with current scholarly thinking and judicial analysis, and should be relegated to the scrap heap of outmoded and bad public policies.

Germán Gutiérrez and Thomas Philippon have released a major rewrite of their paper comparing the U.S. and EU competitive environments. 

Although the NBER website provides an enticing title — “How European Markets Became Free: A Study of Institutional Drift” — the paper itself has a much more yawn-inducing title: “How EU Markets Became More Competitive Than US Markets: A Study of Institutional Drift.”

Having already critiqued the original paper at length (here and here), I wouldn’t normally take much interest in the do-over. However, in a recent episode of Tyler Cowen’s podcast, Jason Furman gave a shout out to Philippon’s work on increasing concentration. So, I thought it might be worth a review.

As with the original, the paper begins with a conclusion: The EU appears to be more competitive than the U.S. The authors then concoct a theory to explain their conclusion. The theory’s a bit janky, but it goes something like this:

  • Because of lobbying pressure and regulatory capture, an individual country will enforce competition policy at a suboptimal level.
  • Because of competing interests among different countries, a “supra-national” body will be more independent and better able to foster pro-competitive policies and to engage in more vigorous enforcement of competition policy.
  • The EU’s supra-national body and its Directorate-General for Competition is more independent than the U.S. Department of Justice and Federal Trade Commission.
  • Therefore, their model explains why the EU is more competitive than the U.S. Q.E.D.

If you’re looking for what this has to do with “institutional drift,” don’t bother. The term only shows up in the title.

The original paper provided evidence from 12 separate “markets” that, they say, demonstrated their conclusion about EU vs. U.S. competitiveness. These weren’t really “markets” in the competition-policy sense; they were just broad industry categories, such as health, information, trade, and professional services (actually “other business sector services”).

As pointed out in one of my earlier critiques, in all but one of these industries, the 8-firm concentration ratios for the U.S. and the EU are below 40 percent, and the HHI measures reported in the original paper are at levels that most observers would presume to be competitive.

Sending their original markets to drift in the appendices, Gutiérrez and Philippon’s revised paper focuses its attention on two markets — telecommunications and airlines — to support their claim that EU markets are more competitive than those in the U.S. First, telecoms:

To be more concrete, consider the Telecom industry and the entry of the French Telecom company Free Mobile. Until 2011, the French mobile industry was an oligopoly with three large historical incumbents and weak competition. … Free obtained its 4G license in 2011 and entered the market with a plan of unlimited talk, messaging and data for €20. Within six months, the incumbents Orange, SFR and Bouygues had reacted by launching their own discount brands and by offering €20 contracts as well. … The relative price decline was 40%: France went from being 15% more expensive than the US [in 2011] to being 25% cheaper in about two years [in 2013].

While this is an interesting story about how entry can increase competition, the story of a single firm entering a market in a single country is hardly evidence that the EU as a whole is more competitive than the U.S.

What Gutiérrez and Philippon don’t report is that from 2013 to 2019, prices declined by 12% in the U.S. and only 8% in France. In the EU as a whole, prices decreased by only 5% over the years 2013-2019.

Gutiérrez and Philippon’s passenger airline story is even weaker. Because airline prices don’t fit their narrative, they argue that increasing airline profits are evidence that the U.S. is less competitive than the EU. 

The picture above is from Figure 5 of their paper (“Air Transportation Profits and Concentration, EU vs US”). They claim that the “rise in US concentration and profits aligns closely with a controversial merger wave,” with the vertical line in the figure marking the Delta-Northwest merger.

Sure, profitability among U.S. firms increased. But, before the “merger wave,” profits were negative. Perhaps predatory pricing is pro-competitive after all.

Where Gutiérrez and Philippon really fumble is with airline pricing. Since the merger wave that pulled the U.S. airline industry out of insolvency, ticket prices (as measured by the Consumer Price Index) have decreased by 6%. In France, prices increased by 4%, and in the EU, prices increased by 30%.

The paper relies more heavily on eyeballing graphs than on statistical analysis, but something about Table 2 caught my attention — the R-squared statistics. First, they’re all over the place. But look at column (1): a perfect 1.00 R-squared. Could it be that Gutiérrez and Philippon’s statistical model has (almost) as many parameters as observations?

Notice that all the regressions with an R-squared of 0.9 or higher include country fixed effects. The two regressions with R-squareds of 0.95 and 0.96 also include country-industry fixed effects. It’s very possible that the regression results are driven entirely by idiosyncratic differences among countries and industries.

Gutiérrez and Philippon provide no interpretation for their results in Table 2, but it seems to work like this, using column (1): A 10% increase in the 4-firm concentration ratio (which is different from a 10-percentage-point increase) would be associated with a 1.8% increase in prices four years later. So, an increase in CR4 from 20% to 22% (or an increase from 60% to 66%) would be associated with a 1.8% increase in prices over four years, or about 0.4% a year. On the one hand, I just don’t buy it. On the other hand, the effect is so small that it seems economically insignificant.
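The back-of-the-envelope arithmetic behind that reading can be sketched in a few lines. The elasticity of 0.18 below is an assumption inferred from the text’s “10% increase in CR4, 1.8% increase in prices” interpretation, not a figure taken from the paper’s Table 2:

```python
# Assumed log-log coefficient (elasticity) of price with respect to CR4,
# inferred from the 10% -> 1.8% reading; hypothetical, not from the paper.
ELASTICITY = 0.18

def implied_price_change(cr4_pct_change, years=4, elasticity=ELASTICITY):
    """Percent price change implied by a percent (not point) change in CR4.

    Returns the total change over the horizon and the per-year average.
    """
    total = elasticity * cr4_pct_change
    return total, total / years

total, per_year = implied_price_change(10.0)
print(round(total, 2), round(per_year, 2))  # 1.8 0.45
```

Note that because the regressor is in logs, a move from CR4 = 20% to 22% and a move from 60% to 66% are both “10% increases” and imply the same price effect.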

I’m sure Gutiérrez and Philippon have put a lot of time into this paper and its revision. But there’s an old saying that the best thing about banging your head against the wall is that it feels so good when it stops. Perhaps, it’s time to stop with this paper and let it “drift” into obscurity.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Eric Fruits, (Chief Economist, International Center for Law & Economics).]

Earlier this week, merger talks between Uber and food delivery service Grubhub surfaced. House Antitrust Subcommittee Chairman David N. Cicilline quickly reacted to the news:

Americans are struggling to put food on the table, and locally owned businesses are doing everything possible to keep serving people in our communities, even under great duress. Uber is a notoriously predatory company that has long denied its drivers a living wage. Its attempt to acquire Grubhub—which has a history of exploiting local restaurants through deceptive tactics and extortionate fees—marks a new low in pandemic profiteering. We cannot allow these corporations to monopolize food delivery, especially amid a crisis that is rendering American families and local restaurants more dependent than ever on these very services. This deal underscores the urgency for a merger moratorium, which I and several of my colleagues have been urging our caucus to support.

Pandemic profiteering rolls nicely off the tongue, and we’re sure to see that phrase much more over the next year or so. 

Grubhub shares jumped 29% Tuesday, the day the merger talks came to light, shown in the figure below. The Wall Street Journal reports the companies are considering a deal that would value Grubhub stock at around 1.9 Uber shares, or $60 to $65 a share, based on Thursday’s price.

But is that “pandemic profiteering?”

After Amazon announced its intended acquisition of Whole Foods, the grocer’s stock price soared by 27%. Rep. Cicilline voiced some convoluted concerns about that merger, but said nothing about profiteering at the time. Different times, different messaging.

Rep. Cicilline and others have been calling for a merger moratorium during the pandemic, and he used the Uber/Grubhub announcement as Exhibit A in his indictment of merger activity.

A moratorium would make things much easier for regulators. No more fighting over relevant markets, no HHI calculations, no experts debating SSNIPs or GUPPIs, no worries over consumer welfare, no failing-firm defenses. Just a clear, bright-line “NO!”

Even before the pandemic, it was well known that the food delivery industry was due for a shakeout. NPR reports that, even as the business is growing, none of the top food-delivery apps is turning a profit, with one analyst concluding consolidation was “inevitable.” Thus, even if a moratorium slowed or stopped the Uber/Grubhub merger, at some point a merger in the industry will happen and the U.S. antitrust authorities will have to evaluate it.

First, we have to ask, “What’s the relevant market?” The government has a history of defining relevant markets so narrowly that just about any merger can be challenged. For example, for the scuttled Whole Foods/Wild Oats merger, the FTC famously narrowed the market to “premium natural and organic supermarkets.” Surely, similar mental gymnastics will be used for any merger involving food delivery services.

While food delivery has grown in popularity over the past few years, delivery represents less than 10% of U.S. food service sales. While Rep. Cicilline may be correct that families and local restaurants are “more dependent than ever” on food delivery, delivery is only a small fraction of a large market. Even a monopoly in food delivery services would not confer market power in the broader restaurant and food service industry.

No reasonable person would claim an Uber/Grubhub merger would increase market power in the restaurant and food service industry. But it might confer market power in the food delivery market. Much attention is paid to the “Big Four”–DoorDash, Grubhub, Uber Eats, and Postmates. But these platform delivery services are part of the larger food service delivery market, of which platforms account for about half of the industry’s revenues. Pizza accounts for the largest share of restaurant-to-consumer delivery.

This raises the big question of what is the relevant market: Is it the entire food delivery sector, or just the platform-to-consumer sector? 

Based on the information in the figure below, defining the market narrowly would place an Uber/Grubhub merger squarely in the “presumed to be likely to enhance market power” category.

  • 2016 HHI: <3,175
  • 2018 HHI: <1,474
  • 2020 HHI: <2,249 pre-merger; <4,153 post-merger

Alternatively, defining the market to encompass all food delivery would cut the platforms’ shares roughly in half and the merger would be unlikely to harm competition, based on HHI. Choosing the relevant market is, well, relevant.
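The mechanics of that market-definition sensitivity are easy to see in code. A merger of two firms with shares s_a and s_b raises HHI by exactly 2 × s_a × s_b, so halving every share (by widening the market) cuts the delta by a factor of four. The shares below are hypothetical round numbers, not the Second Measure figures:

```python
def merger_hhi(shares_pct, firm_a, firm_b):
    """Pre-merger HHI, post-merger HHI, and the delta.

    shares_pct maps firm name -> market share in percentage points.
    Combining firms a and b raises HHI by exactly 2 * s_a * s_b.
    """
    pre = sum(s ** 2 for s in shares_pct.values())
    merged = dict(shares_pct)
    merged[firm_a] = merged.pop(firm_a) + merged.pop(firm_b)
    post = sum(s ** 2 for s in merged.values())
    return pre, post, post - pre

# Hypothetical narrow (platform-only) market shares summing to 100%
narrow = {"DoorDash": 35, "Grubhub": 25, "UberEats": 30, "Postmates": 10}
print(merger_hhi(narrow, "UberEats", "Grubhub"))  # (2850, 4350, 1500)
```

With these made-up shares, the narrow market puts the merger deep in the “presumed likely to enhance market power” zone; defining the market to include all food delivery would roughly halve every share and shrink the HHI delta to a quarter of its narrow-market value.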

The Second Measure data suggests that concentration in the platform delivery sector decreased with the entry of Uber Eats, but subsequently increased with DoorDash’s rising share–which included the acquisition of Caviar from Square.

(NB: There seems to be a significant mismatch in the delivery revenue data. Statista reports platform delivery revenues increased by about 40% from 2018 to 2020, but Second Measure indicates revenues have more than doubled.) 

Geoffrey Manne, in an earlier post, points out that “while national concentration does appear to be increasing in some sectors of the economy, it’s not actually so clear that the same is true for local concentration — which is often the relevant antitrust market.” That may be the case here.

The figure below is a sample of platform delivery shares by city. I added data from an earlier study of 2017 shares. In all but two metro areas, Uber and Grubhub’s combined market share declined from 2017 to 2020. In Boston, the combined shares did not change and in Los Angeles, the combined shares increased by 1%.

(NB: There are some serious problems with this data, notably that it leaves out the restaurant-to-consumer sector and assumes the entire platform-to-consumer sector is comprised of only the “Big Four.”)

Platform-to-consumer delivery is a complex two-sided market in which the platforms link, and compete for, restaurants, drivers, and consumers. Restaurants have a choice of using multiple platforms or entering into exclusive arrangements; many drivers work for multiple platforms; and many consumers use multiple platforms.

Fundamentally, the rise of platform-to-consumer is an evolution in vertical integration. Restaurants can choose to offer no delivery, use their own in-house delivery drivers, or use a third party delivery service. Every platform faces competition from in-house delivery, placing a limit on their ability to raise prices to restaurants and consumers.

The choice of delivery is not an either-or decision. For example, many pizza restaurants that have their own delivery drivers also use a platform delivery service. Their own drivers may serve a limited geographic area, but the platforms allow a restaurant to expand its geographic reach, thereby increasing its sales. Even so, the platforms face competition from in-house delivery.

Mergers or other forms of shake out in the food delivery industry are inevitable. Mergers will raise important questions about relevant product and geographic markets as well as competition in two-sided markets. While there is a real risk of harm to restaurants, drivers, and consumers, there is also a real possibility of welfare enhancing efficiencies. These questions will never be addressed with an across-the-board merger moratorium.

A recent NBER working paper by Gutiérrez & Philippon attempts to link differences in U.S. and EU antitrust enforcement and product market regulation to differences in market concentration and corporate profits. The paper’s abstract begins with a bold assertion:

Until the 1990’s, US markets were more competitive than European markets. Today, European markets have lower concentration, lower excess profits, and lower regulatory barriers to entry.

The authors are not clear about what they mean by “lower”; however, it seems they mean lower today relative to the 1990s.

This blog post focuses on the first claim: “Today, European markets have lower concentration …”

At the risk of being pedantic, Gutiérrez & Philippon’s measures of market concentration for which both U.S. and EU data are reported cover the period from 1999 to 2012. Thus, “the 1990s” refers to 1999, and “today” refers to 2012, or six years ago.

The table below is based on Figure 26 in Gutiérrez & Philippon. In 2012, there appears to be no significant difference in market concentration between the U.S. and the EU, using either the 8-firm concentration ratio or HHI. Based on this information, it cannot be concluded broadly that EU sectors have lower concentration than the U.S.

2012    U.S.           EU
CR8     26% (+5%)      27% (-7%)
HHI     640 (+150)     600 (-190)

Gutiérrez & Philippon focus on the change in market concentration to draw their conclusions. However, the levels of market concentration measures are strikingly low. In all but one of the industries (telecommunications) in Figure 27 of their paper, the 8-firm concentration ratios for the U.S. and the EU are below 40 percent. Similarly, the HHI measures reported in the paper are at levels that most observers would presume to be competitive. In addition, in 7 of the 12 sectors surveyed, the U.S. 8-firm concentration ratio is lower than in the EU.

The numbers in parentheses in the table above show the change in each measure of concentration since 1999. The changes suggest that U.S. markets have become more concentrated and EU markets have become less concentrated. But how significant are the changes in concentration?

A simple regression of the relationship between CR8 and a time trend finds that in the EU, CR8 has decreased an average of 0.5 percentage point a year, while the U.S. CR8 increased by less than 0.4 percentage point a year from 1999 to 2012. Tucked in an appendix to Gutiérrez & Philippon, Figure 30 shows that CR8 in the U.S. had decreased by about 2.5 percentage points from 2012 to 2014.
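That kind of time-trend regression is a one-liner. The sketch below uses entirely made-up annual CR8 observations (declining, like the EU series described above), not the paper’s data:

```python
import numpy as np

# Hypothetical annual CR8 observations (percent), 1999-2012
years = np.arange(1999, 2013)
cr8 = np.array([34.0, 33.0, 31.5, 30.5, 30.2, 30.1, 30.0, 29.9,
                29.8, 29.9, 29.7, 29.8, 29.6, 29.7])

# OLS slope of CR8 on a linear time trend: percentage points per year
slope, intercept = np.polyfit(years, cr8, 1)
print(round(slope, 2))  # negative: concentration falling over the period
```

Note how the example series front-loads its decline in the first few years; fitting the trend only on the later subperiod is the simple check, described next, that shows most of the EU decline occurred early on.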

A closer examination of Gutiérrez & Philippon’s 8-firm concentration ratio for the EU shows that much of the decline in EU market concentration occurred over the 1999-2002 period. After that, the change in CR8 for the EU is not statistically significantly different from zero.

A regression of the relationship between HHI and a time trend finds that in the EU, HHI has decreased an average of 12.5 points a year, while the U.S. HHI increased by less than 16.4 points a year from 1999 to 2012.

As with CR8, a closer examination of Gutiérrez & Philippon’s HHI for the EU shows that much of the decline in EU market concentration occurred over the 1999-2002 period. After that, the change in HHI for the EU is not statistically significantly different from zero.

Readers should be cautious in relying on Gutiérrez & Philippon’s data to conclude that the U.S. is “drifting” toward greater market concentration while the EU is “drifting” toward lower market concentration. Indeed, the limited data presented in the paper point toward a convergence in market concentration between the two regions.


As Thom previously posted, he and I have a new paper explaining The Case for Doing Nothing About Common Ownership of Small Stakes in Competing Firms. Our paper is a response to cries from the likes of Einer Elhauge and of Eric Posner, Fiona Scott Morton, and Glen Weyl, who have called for various types of antitrust action to rein in what they claim is an “economic blockbuster” and “the major new antitrust challenge of our time,” respectively. This is the first in a series of posts that will unpack some of the issues and arguments we raise in our paper.

At issue is the growth in the incidence of common-ownership across firms within various industries. In particular, institutional investors with broad portfolios frequently report owning small stakes in a number of firms within a given industry. Although small, these stakes may still represent large block holdings relative to other investors. This intra-industry diversification, critics claim, changes the managerial objectives of corporate executives from aggressively competing to increase their own firm’s profits to tacitly colluding to increase industry-level profits instead. The reason for this change is that competition by one firm comes at a cost of profits from other firms in the industry. If investors own shares across firms, then any competitive gains in one firm’s stock are offset by competitive losses in the stocks of other firms in the investor’s portfolio. If one assumes corporate executives aim to maximize total value for their largest shareholders, then managers would have incentive to soften competition against firms with which they share common ownership. Or so the story goes (more on that in a later post.)

Elhauge and Posner, et al., draw their motivation for new antitrust offenses from a handful of papers that purport to establish an empirical link between the degree of common ownership among competing firms and various measures of softened competitive behavior, including airline prices, banking fees, executive compensation, and even corporate disclosure patterns. The paper of most note, by José Azar, Martin Schmalz, and Isabel Tecu and forthcoming in the Journal of Finance, claims to identify a causal link between the degree of common ownership among airlines competing on a given route and the fares charged for flights on that route.

Measuring common ownership with MHHI

Azar, et al.’s airline paper uses a metric of industry concentration called a Modified Herfindahl–Hirschman Index, or MHHI, to measure the degree of industry concentration taking into account the cross-ownership of investors’ stakes in competing firms. The original Herfindahl–Hirschman Index (HHI) has long been used as a measure of industry concentration, debuting in the Department of Justice’s Horizontal Merger Guidelines in 1982. The HHI is calculated by squaring the market share of each firm in the industry and summing the resulting numbers.
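As a quick illustration of that calculation (with hypothetical market shares, not data from any of the papers discussed), HHI can be computed in a few lines:

```python
def hhi(shares_pct):
    """Herfindahl-Hirschman Index: the sum of squared market shares.

    Shares are in percentage points (0-100), so a pure monopoly
    scores 10,000 and an atomistic market approaches 0.
    """
    assert abs(sum(shares_pct) - 100) < 1e-6, "shares should sum to 100"
    return sum(s ** 2 for s in shares_pct)

# Four firms with hypothetical shares of 40%, 30%, 20%, and 10%
print(hhi([40, 30, 20, 10]))  # 3000
```

The squaring means the index weights large firms disproportionately: the 40% firm alone contributes more than half of the 3,000 total.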

The MHHI is rather more complicated. MHHI is composed of two parts: the HHI measuring product market concentration and the MHHI_Delta measuring the additional concentration due to common ownership. We offer a step-by-step description of the calculations and their economic rationale in an appendix to our paper. For this post, I’ll try to distill that down. The MHHI_Delta essentially has three components, each of which is measured relative to every possible competitive pairing in the market as follows:

  1. A measure of the degree of common ownership between Company A and Company -A (Not A). This is calculated by multiplying the percentage of Company A shares owned by each Investor I with the percentage of shares Investor I owns in Company -A, then summing those values across all investors in Company A. As this value increases, MHHI_Delta goes up.
  2. A measure of the degree of ownership concentration in Company A, calculated by squaring the percentage of shares owned by each Investor I and summing those numbers across investors. As this value increases, MHHI_Delta goes down.
  3. A measure of the degree of product market power exerted by Company A and Company -A, calculated by multiplying the market shares of the two firms. As this value increases, MHHI_Delta goes up.

This process is repeated and aggregated first for every pairing of Company A and each competing Company -A, then repeated again for every other company in the market relative to its competitors (e.g., Companies B and -B, Companies C and -C, etc.). Mathematically, MHHI_Delta takes the form:

MHHI_Delta = Σ_A Σ_{-A≠A} [ S_A × S_-A × (Σ_I β_I,A × β_I,-A) / (Σ_I β_I,A²) ]

where the Ss represent the firm market shares of, and Betas represent ownership shares of Investor I in, the respective companies A and -A.

As the relative concentration of cross-owning investors to all investors in Company A increases (i.e., the ratio on the right increases), managers are assumed to be more likely to soften competition with that competitor. As those two firms control more of the market, managers’ ability to tacitly collude and increase joint profits is assumed to be higher. Consequently, the empirical research assumes that as MHHI_Delta increases, we should observe less competitive behavior.
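The three components described above can be turned into code directly. The sketch below assumes the proportional-control version of the measure (the O’Brien–Salop specification the empirical papers build on) and uses entirely hypothetical shares and holdings:

```python
def mhhi_delta(shares, holdings):
    """MHHI_Delta under proportional control.

    shares:   dict firm -> market share (as a fraction, 0-1)
    holdings: dict investor -> dict firm -> ownership stake (0-1)

    For each ordered pair of distinct firms (A, -A), add
      s_A * s_-A * (sum_I beta_I,A * beta_I,-A) / (sum_I beta_I,A ** 2):
    common ownership (numerator) scaled by ownership concentration in A
    (denominator) and by the pair's product-market shares.
    """
    firms = list(shares)
    delta = 0.0
    for a in firms:
        denom = sum(h.get(a, 0.0) ** 2 for h in holdings.values())
        if denom == 0:
            continue  # no reported ownership in A
        for b in firms:
            if b == a:
                continue
            cross = sum(h.get(a, 0.0) * h.get(b, 0.0)
                        for h in holdings.values())
            delta += shares[a] * shares[b] * cross / denom
    return 10_000 * delta  # scale to HHI points

# Two equal-sized firms, each owned 50/50 by the same two investors:
# complete common ownership, so MHHI_Delta equals the market's HHI of
# 5,000, and MHHI = HHI + MHHI_Delta = 10,000 -- monopoly-equivalent.
shares = {"A": 0.5, "B": 0.5}
holdings = {"I1": {"A": 0.5, "B": 0.5}, "I2": {"A": 0.5, "B": 0.5}}
print(round(mhhi_delta(shares, holdings)))  # 5000
```

The limiting case in the example is the intuition behind the measure: with fully overlapping owners, the two firms are treated as if they priced like a single monopolist.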

And indeed that is the “blockbuster” evidence giving rise to Elhauge’s and Posner, et al.’s arguments. For example, Azar, et al., calculate HHI and MHHI_Delta for every US airline market–defined either as city-pairs or departure-destination pairs–for each quarter of the 14-year time period in their study. They then regress ticket prices for each route against the HHI and the MHHI_Delta for that route, controlling for a number of other potential factors. They find that airfare prices are 3% to 7% higher due to common ownership. Other papers using the same or similar measures of common ownership concentration have likewise identified positive correlations between MHHI_Delta and their respective measures of anti-competitive behavior.

Problems with the problem and with the measure

We argue that both the theoretical argument underlying the empirical research and the empirical research itself suffer from some serious flaws. On the theoretical side, we have two concerns. First, we argue that there is a tremendous leap of faith (if not logic) in the idea that corporate executives would forgo their own self-interest and the interests of the vast majority of shareholders and soften competition simply because a small number of small stakeholders are intra-industry diversified. Second, we argue that even if managers were so inclined, it clearly is not the case that softening competition would necessarily be desirable for institutional investors that are both intra- and inter-industry diversified, since supra-competitive pricing to increase profits in one industry would decrease profits in related industries that may also be in the investors’ portfolios.

On the empirical side, we have concerns both with the data used to calculate the MHHI_Deltas and with the nature of the MHHI_Delta itself. First, the data on institutional investors’ holdings are taken from Schedule 13 filings, which report aggregate holdings across all of an institutional investor’s funds. Using these data masks the actual incentives of the institutional investors with respect to investments in any individual company or industry. Second, the construction of the MHHI_Delta suffers from serious endogeneity concerns, both in investors’ shareholdings and in market shares. Finally, the MHHI_Delta, while seemingly intuitive, is an empirical unknown. While HHI is theoretically bounded in a way that lends itself to interpretation of its calculated value, the same is not true for MHHI_Delta. This makes any inference or policy based on nominal values of MHHI_Delta arbitrary at best.

We’ll expand on each of these concerns in upcoming posts. We will then take on the problems with the policy proposals being offered in response to the common ownership ‘problem.’

I just posted a new ICLE white paper, co-authored with former ICLE Associate Director, Ben Sperry:

When Past Is Not Prologue: The Weakness of the Economic Evidence Against Health Insurance Mergers.

Yesterday the hearing in the DOJ’s challenge to stop the Aetna-Humana merger got underway, and last week phase 1 of the Cigna-Anthem merger trial came to a close.

The DOJ’s challenge in both cases is fundamentally rooted in a timeworn structural analysis: More consolidation in the market (where “the market” is a hotly contested issue, of course) means less competition and higher premiums for consumers.

Following the traditional structural playbook, the DOJ argues that the Aetna-Humana merger (to pick one) would result in presumptively anticompetitive levels of concentration, and that neither new entry nor divestiture would suffice to introduce sufficient competition. It does not (in its pretrial brief, at least) consider other market dynamics (including especially the complex and evolving regulatory environment) that would constrain the firms’ ability to charge supracompetitive prices.

Aetna & Humana, for their part, contend that things are a bit more complicated than the government suggests, that the government defines the relevant market incorrectly, and that

the evidence will show that there is no correlation between the number of [Medicare Advantage organizations] in a county (or their shares) and Medicare Advantage pricing—a fundamental fact that the Government’s theories of harm cannot overcome.

The trial will, of course, feature expert economic evidence from both sides. But until we see that evidence, or read the inevitable papers derived from it, we are stuck evaluating the basic outlines of the economic arguments based on the existing literature.

A host of antitrust commentators, politicians, and other interested parties have determined that the literature condemns the mergers, based largely on a small set of papers purporting to demonstrate that an increase in premiums, without corresponding benefit, inexorably follows health insurance “consolidation.” In fact, virtually all of these critics base their claims on a 2012 case study of a 1999 merger (between Aetna and Prudential) by economists Leemore Dafny, Mark Duggan, and Subramaniam Ramanarayanan, Paying a Premium on Your Premium? Consolidation in the U.S. Health Insurance Industry, as well as associated testimony by Prof. Dafny, along with a small number of other papers by her (and a couple of others).

Our paper challenges these claims. As we summarize:

This white paper counsels extreme caution in the use of past statistical studies of the purported effects of health insurance company mergers to infer that today’s proposed mergers—between Aetna/Humana and Anthem/Cigna—will likely have similar effects. Focusing on one influential study—Paying a Premium on Your Premium…—as a jumping off point, we highlight some of the many reasons that past is not prologue.

In short: extrapolated, long-term, cumulative, average effects drawn from 17-year-old data may grab headlines, but they really don’t tell us much of anything about the likely effects of a particular merger today, or about the effects of increased concentration in any particular product or geographic market.

While our analysis doesn’t necessarily undermine the paper’s limited, historical conclusions, it does counsel extreme caution for inferring the study’s applicability to today’s proposed mergers.

By way of reference, Dafny, et al. found average premium price increases from the 1999 Aetna/Prudential merger of only 0.25 percent per year for two years following the merger in the geographic markets they studied. “Health Insurance Mergers May Lead to 0.25 Percent Price Increases!” isn’t quite as compelling a claim as what critics have been saying, but it’s arguably more accurate (and more relevant) than the 7 percent price increase purportedly based on the paper that merger critics like to throw around.

Moreover, different markets and a changed regulatory environment aren’t the only things suggesting that past is not prologue. When we delve into the paper more closely, we find even more significant limitations on the paper’s support for the claims made in its name, and on its relevance to the current proposed mergers.

The full paper is available here.