Archives For antitrust

Amazon offers Prime discounts to Whole Foods customers and offers free delivery for Prime members. Those are certainly consumer benefits. But with those benefits comes a cost, which may or may not be significant. By bundling its products with collective discounts, Amazon makes it more attractive for shoppers to shift their buying from local stores to the internet giant. Will lower volumes make local stores less efficient, and will they eventually close? Do most Americans care about the potential loss of local supermarkets and specialty grocers? No one, including antitrust enforcers, seems to have asked them.

The gist of these arguments is simple. The Amazon / Whole Foods merger would lead to the exclusion of competitors, with Amazon leveraging its swaths of data and pricing below cost. All of this raises a simple question: have these prophecies come to pass?

The problem with antitrust populism is not just that it leads to unfounded predictions regarding the negative effects of a given business practice. It also ignores the significant gains which consumers may reap from these practices. The Amazon / Whole Foods merger offers a case in point.

Even with these caveats, it’s still worth looking at the recent trends. Whole Foods’ sales have been essentially flat since 2015, showing only low single-digit growth, according to data from Second Measure. This suggests Whole Foods is not yet getting a lift from the relationship. However, the percentage of Whole Foods’ new customers who are Prime members increased post-merger, from 34 percent in June 2017 to 41 percent in June 2018. This suggests that Amazon’s platform is delivering customers to Whole Foods.

The negativity that surrounded the deal at its announcement made Whole Foods seem like an innocent player, but it is important to recall that the company was hemorrhaging market share and looking to exit. Throughout the 2010s, it lost its market-leading edge as others began to offer the same kinds of services and products. Still, the company was able to sell near the top of its value to Amazon because it was able to court so many suitors. Given all of these features, Whole Foods could have been using the exit as a mechanism to appropriate another firm’s rent.

Brandeis is back, with today’s neo-Brandeisians reflexively opposing virtually all mergers involving large firms. For them, industry concentration has grown to crisis proportions, and breaking up big companies should be the animating goal not just of antitrust policy but of U.S. economic policy generally. The key to understanding the neo-Brandeisian opposition to the Whole Foods/Amazon merger is that it has nothing to do with consumer welfare, and everything to do with animus toward large firms. Sabeel Rahman, a Roosevelt Institute scholar, concedes that big firms give us higher productivity, and hence lower prices, but he dismisses the value of that. He writes, “If consumer prices are our only concern, it is hard to see how Amazon, Comcast, and companies such as Uber need regulation.” And this gets to the key point regarding most of the opposition to the merger: it had nothing to do with concerns about monopolistic effects on economic efficiency or consumer prices. It had everything to do with opposition to big firms for the sole reason that they are big.

Carl Shapiro, the government’s economics expert opposing the AT&T-Time Warner merger, seems skeptical of much of the antitrust populists’ Amazon rhetoric: “Simply saying that Amazon has grown like a weed, charges very low prices, and has driven many smaller retailers out of business is not sufficient. Where is the consumer harm?”

On its face, there was nothing about the Amazon/Whole Foods merger that should have raised any antitrust concerns. While one year is too soon to fully judge the competitive impacts of the merger, it appears that the populist antitrust movement’s speculation that the merger would destroy competition and competitors and impoverish workers has failed to materialize.

Viewed from the long history of the evolution of the grocery store, the Amazon-Whole Foods merger made sense as the start of the next stage of that historical process. The combination of increased wealth that is driving the demand for upscale grocery stores, and the corresponding increase in the value of people’s time that is driving the demand for one-stop shopping and various forms of pick-up and delivery, makes clear the potential benefits of this merger. Amazon was already beginning to make a mark in the sale and delivery of the non-perishables and dry goods that upscale groceries tend to have less of. Acquiring Whole Foods gives it a way to expand that into perishables in a very sensible way. We are only beginning to see the synergies that this combination will produce. Its long-term effect on the structure of the grocery business will be significant and highly beneficial for consumers.

At the heart of the common ownership issue in the current antitrust debate is an empirical measure, the modified Herfindahl-Hirschman Index (MHHI), that researchers have used to correlate patterns of common ownership with measures of firm behavior and performance. In an accompanying post, Thom Lambert provides a great summary of just what the MHHI, and more specifically the MHHIΔ, is and how it can be calculated. I’m going to free-ride off Thom’s effort, so if you’re not very familiar with the measure, I suggest you start here and here.

There are multiple problems with the common ownership story and with the empirical evidence proponents of stricter antitrust enforcement point to in order to justify their calls to action. Thom and I address a number of those problems in our recent paper on “The Case for Doing Nothing About Institutional Investors’ Common Ownership of Small Stakes in Competing Firms.” However, one problem we don’t take on in that paper is the nature of the MHHIΔ itself. More specifically, what is one to make of it and how should it be interpreted, especially from a policy perspective?

The Policy Benchmark

The benchmark for discussion is the original Herfindahl-Hirschman Index (HHI), which has been part of antitrust for decades. The HHI is calculated by summing the squared market share of each firm. Depending on whether shares are expressed as percentages or as fractions, the sum may need to be multiplied by 10,000. For instance, for two firms that split the market evenly, the HHI could be calculated either as:

HHI = 50² + 50² = 5,000, or
HHI = (0.50² + 0.50²) × 10,000 = 5,000
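The arithmetic is easy to check with a few lines of Python (a hypothetical helper, not part of the original post):

```python
def hhi(shares):
    """Herfindahl-Hirschman Index: sum of squared market shares (in percent)."""
    return sum(s ** 2 for s in shares)

print(hhi([50, 50]))    # two firms splitting the market evenly: 5000
print(hhi([100]))       # pure monopoly: 10000
print(hhi([10] * 10))   # ten identical firms: 1000
print(hhi([1] * 100))   # one hundred identical firms: 100
```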

It’s a pretty simple exercise to see that one of the useful properties of the HHI is that it is naturally bounded between 0 and 10,000. In the case of a pure monopoly that commands the entire market, the value of the HHI is 10,000 (100²). As the number of firms increases and market shares approach very small fractions, the value of the HHI asymptotically approaches 0. For a market with 10 firms that evenly share the market, for instance, the HHI is 1,000; for 100 identical firms, the HHI is 100; for 1,000 identical firms, the HHI is 1. As a result, we know that when the HHI is close to 10,000, the industry is highly concentrated in one firm; and when the HHI is close to zero, there is no meaningful concentration at all. Indeed, the Department of Justice’s Horizontal Merger Guidelines make use of this property of the HHI:

Based on their experience, the Agencies generally classify markets into three types:

  • Unconcentrated Markets: HHI below 1500
  • Moderately Concentrated Markets: HHI between 1500 and 2500
  • Highly Concentrated Markets: HHI above 2500

The Agencies employ the following general standards for the relevant markets they have defined:

  • Small Change in Concentration: Mergers involving an increase in the HHI of less than 100 points are unlikely to have adverse competitive effects and ordinarily require no further analysis.
  • Unconcentrated Markets: Mergers resulting in unconcentrated markets are unlikely to have adverse competitive effects and ordinarily require no further analysis.
  • Moderately Concentrated Markets: Mergers resulting in moderately concentrated markets that involve an increase in the HHI of more than 100 points potentially raise significant competitive concerns and often warrant scrutiny.
  • Highly Concentrated Markets: Mergers resulting in highly concentrated markets that involve an increase in the HHI of between 100 points and 200 points potentially raise significant competitive concerns and often warrant scrutiny. Mergers resulting in highly concentrated markets that involve an increase in the HHI of more than 200 points will be presumed to be likely to enhance market power. The presumption may be rebutted by persuasive evidence showing that the merger is unlikely to enhance market power.

Just by way of reference, an HHI of 2500 could reflect four firms sharing the market equally (i.e., 25% each), or it could be one firm with roughly 49% of the market and 51 identical small firms sharing the rest evenly.

Injecting MHHIΔ Into the Mix

MHHI is intended to account for both the product market concentration among firms captured by the HHI, and the common ownership concentration across firms in the market measured by the MHHIΔ. In short, MHHI = HHI + MHHIΔ.

As Thom explains in great detail, MHHIΔ attempts to measure the combined effects of the relative influence of shareholders that own positions across competing firms on management’s strategic decision-making and the combined market shares of the commonly-owned firms. MHHIΔ is the measure used in the various empirical studies allegedly demonstrating a causal relationship between common ownership (higher MHHIΔs) and the supposed anti-competitive behavior of choice.

Some common ownership critics, such as Einer Elhauge, have taken those results and suggested modifying antitrust rules to incorporate the MHHIΔ into the HHI guidelines above. For instance, Elhauge writes (p. 1303):

Accordingly, the federal agencies can and should challenge any stock acquisitions that have produced, or are likely to produce, anti-competitive horizontal shareholdings. Given their own guidelines and the empirical results summarized in Part I, they should investigate any horizontal stock acquisitions that have created, or would create, a ΔMHHI of over 200 in a market with an MHHI over 2500, in order to determine whether those horizontal stock acquisitions raised prices or are likely to do so.

Elhauge, like many others, couches his discussion of MHHI and MHHIΔ in the context of HHI values, as though the additive nature of MHHI means such a context makes sense. And if the examples are carefully chosen, the numbers even seem to make sense. For instance, even in our paper (page 30), we give a few examples to illustrate some of the endogeneity problems with MHHIΔ:

For example, suppose again that five institutional investors hold equal stakes (say, 3%) of each airline servicing a market and that the airlines have no other significant shareholders.  If there are two airlines servicing the market and their market shares are equivalent, HHI will be 5000, MHHI∆ will be 5000, and MHHI (HHI + MHHI∆) will be 10000.  If a third airline enters and grows so that the three airlines have equal market shares, HHI will drop to 3333, MHHI∆ will rise to 6667, and MHHI will remain constant at 10000.  If a fourth airline enters and the airlines split the market evenly, HHI will fall to 2500, MHHI∆ will rise further to 7500, and MHHI will again total 10000.
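These figures are straightforward to reproduce. The sketch below implements the proportional-control form of the MHHIΔ, with five investors each holding a 3% stake in every airline and market shares split evenly; the helper names are ours, not from the paper:

```python
def hhi(shares):
    """HHI from market shares expressed in percent."""
    return sum(s ** 2 for s in shares)

def mhhi_delta(shares, beta):
    """MHHI delta under proportional control.

    shares -- market shares in percent
    beta   -- beta[i][j] is investor i's ownership stake in firm j
    """
    n = len(shares)
    total = 0.0
    for j in range(n):
        # firm j's managers weight rivals' profits by investors' cross-holdings
        denom = sum(b[j] ** 2 for b in beta)
        for k in range(n):
            if k != j:
                num = sum(b[j] * b[k] for b in beta)
                total += shares[j] * shares[k] * num / denom
    return total

# Five investors each hold 3% of every airline; airlines split the market evenly.
for n in (2, 3, 4):
    shares = [100 / n] * n
    beta = [[0.03] * n for _ in range(5)]
    print(n, round(hhi(shares)), round(mhhi_delta(shares, beta)))
# 2 firms: HHI 5000, delta 5000; 3 firms: 3333 and 6667; 4 firms: 2500 and 7500
```

Note how HHI and MHHIΔ move in opposite directions as entry occurs, while their sum stays pinned at 10,000 — exactly the endogeneity problem described above.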

But do MHHI and MHHI∆ really fit so neatly into the HHI framework? Sadly, and worryingly, no, not at all.

The Policy Problem

There seems to be a significant problem with simply injecting MHHIΔ into the HHI framework. Unlike the HHI, from which we can infer something about the market based on the nominal value of the measure, MHHIΔ has no established intuitive or theoretical grounding. In fact, MHHIΔ has no intuitively meaningful mathematical boundaries from which to draw inferences about “how big is big?”, a fundamental problem for antitrust policy.

This is especially true within the range of cross-shareholding values we’re talking about in the common ownership debate. To illustrate just how big a problem this is, consider a constrained optimization of MHHI based on parameters that are not at all unreasonable relative to hypothetical examples cited in the literature:

  • Four competing firms in the market, each of which is constrained to having at least 5% market share, and their collective sum must equal 1 (or 100%).
  • Five institutional investors, each of which can own no more than 5% of the outstanding shares of any individual firm, with no restrictions on holdings across firms.
  • The remaining outstanding shares are assumed to be diffusely owned (i.e., no other large shareholder in any firm).

With only these modest restrictions on market share and common ownership, what’s the maximum potential value of MHHI? A mere 26,864,516,491, with an MHHI∆ of 26,864,513,774 and HHI of 2,717.

That’s right, over 26.8 billion. To reach such an astronomical number, what are the parameter values? The four firms split the market with 33, 31.7, 18.3, and 17% shares, respectively. Investor 1 owns 2.6% of the largest firm (by market share) while Investors 2-5 each own between 4.5 and 5% of the largest firm. Investors 1 and 2 own 5% of the smallest firm, while Investors 3 and 4 own 3.9% and Investor 5 owns a minuscule (0.0006%) share. Investor 2 is the only investor with any holdings (a tiny 0.0000004% each) in the two middling firms. These are not unreasonable numbers by any means, but the MHHI∆ surely is–especially from a policy perspective.
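The mechanism behind these explosive values is easy to see in a stripped-down, hypothetical two-firm illustration of the proportional-control MHHIΔ formula (the numbers here are ours, not the paper’s): as an investor’s stake in one firm shrinks toward zero, the denominator of that firm’s cross-ownership term collapses and MHHIΔ grows without bound.

```python
def mhhi_delta(shares, beta):
    """MHHI delta under proportional control (shares in percent)."""
    n = len(shares)
    total = 0.0
    for j in range(n):
        denom = sum(b[j] ** 2 for b in beta)
        for k in range(n):
            if k != j:
                num = sum(b[j] * b[k] for b in beta)
                total += shares[j] * shares[k] * num / denom
    return total

# A single investor holds 5% of firm 2 and a vanishing stake eps in firm 1.
# As eps shrinks, firm 1's cross-ownership denominator collapses and
# MHHI delta explodes, even though the holdings themselves are tiny.
for eps in (1e-2, 1e-4, 1e-6):
    print(eps, round(mhhi_delta([50, 50], [[eps, 0.05]])))
# eps of 1e-6 already pushes MHHI delta to 125,000,000
```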

So if MHHI∆ can range from near zero to more than 26.8 billion within reasonable ranges of market shares and shareholdings, what should we make of Elhauge’s proposal that mergers be scrutinized for increasing MHHI∆ by 200 points if the MHHI is 2,500 or more? We argue that such an arbitrary policy benchmark is not only unfounded empirically, but completely devoid of substantive reason or relevance.

The DOJ’s Horizontal Merger Guidelines above indicate that the antitrust agencies adopted the HHI benchmarks for review “[b]ased on their experience”. In the 1982 and 1984 Guidelines, the agencies adopted HHI standards of 1,000 and 1,800, compared to the current 1,500 and 2,500 levels, in determining whether an industry is concentrated and a merger deserves additional scrutiny. These changes reflect decades of case reviews relating market structure to likely competitive behavior and consumer harm.

We simply do not know enough yet empirically about the relation between MHHI∆ and benchmarks of competitive behavior and consumer welfare to make any intelligent policies based on that metric, even if the underlying argument had any substantive theoretical basis, which we doubt. This is just one more reason we believe the best response to the common ownership problem is to do nothing, at least until we have a theoretically and empirically sound basis on which to make intelligent and informed policy decisions and frameworks.

Announcement

Truth on the Market is pleased to announce its next blog symposium:

Is Amazon’s Appetite Bottomless?

The Whole Foods Merger After One Year

August 28, 2018

One year ago tomorrow the Amazon/Whole Foods merger closed, following its approval by the FTC. The merger was something of a flashpoint in the growing populist antitrust movement, raising some interesting questions — and a host of objections from a number of scholars, advocates, journalists, antitrust experts, and others who voiced a range of possible problematic outcomes.

Under settled antitrust law — evolved over the last century-plus — the merger between Amazon and Whole Foods was largely uncontroversial. But the size and scope of Amazon’s operation and ambition have given some pause. And despite the apparent inapplicability of antitrust law to the array of populist concerns about large tech companies, advocates nonetheless contend that antitrust should be altered to deal with new threats posed by companies like Amazon.

For something of a primer on the antitrust debate surrounding Amazon, listen to ICLE’s Geoffrey Manne and Open Markets’ Lina Khan on Season 2 Episode 1 of Briefly, a podcast produced by the University of Chicago Law Review.  

Beginning tomorrow, August 28, Truth on the Market and the International Center for Law & Economics will host a blog symposium discussing the impact of the merger.

One year on, we asked antitrust scholars and other experts to consider:

  • What has been the significance of the Amazon/Whole Foods merger?
  • How has the merger affected various markets and the participants within them (e.g., grocery stores, food delivery services, online retailers, workers, grocery suppliers, etc.)?
  • What, if anything, does the merger and its aftermath tell us about current antitrust doctrine and our understanding of platform markets?
  • Has a year of experience borne out any of the objections to the merger?
  • Have the market changes since the merger undermined or reinforced the populist antitrust arguments regarding this or other conduct?

As in the past (see examples of previous TOTM blog symposia here), we’ve lined up an outstanding and diverse group of scholars to discuss these issues.

The symposium posts will be collected here. We hope you’ll join us!

Last week, I objected to Senator Warner’s relying on the flawed AOL/Time Warner merger conditions as a template for tech regulatory policy, but there is a much deeper problem in his proposals. Although he does not explicitly say “big is bad” when discussing competition issues, the thrust of much of what he recommends would serve to erode the power of larger firms in favor of smaller firms, without offering a justification for why this would result in a superior state of affairs. And he makes these recommendations without regard to whether those firms actually engage in conduct that is harmful to consumers.

In the Data Portability section, Warner says that “As platforms grow in size and scope, network effects and lock-in effects increase; consumers face diminished incentives to contract with new providers, particularly if they have to once again provide a full set of data to access desired functions.” Thus, he recommends a data portability mandate, which would theoretically serve to benefit startups by providing them with the data that large firms possess. The necessary implication here is that it is a per se good that small firms be benefited and large firms diminished, as the proposal is not grounded in any evaluation of the competitive behavior of the firms to which such a mandate would apply.

Warner also proposes an “interoperability” requirement on “dominant platforms” (which I criticized previously) in situations where “data portability alone will not produce procompetitive outcomes.” Again, the necessary implication is that it is a per se good that established platforms share their services with startups, without regard to any competitive analysis of how those firms are behaving. The goal is preemptively to “blunt their ability to leverage their dominance over one market or feature into complementary or adjacent markets or products.”

Perhaps most perniciously, Warner recommends treating large platforms as essential facilities in some circumstances. To this end he states that:

Legislation could define thresholds – for instance, user base size, market share, or level of dependence of wider ecosystems – beyond which certain core functions/platforms/apps would constitute ‘essential facilities’, requiring a platform to provide third party access on fair, reasonable and non-discriminatory (FRAND) terms and preventing platforms from engaging in self-dealing or preferential conduct.

But, as I’ve previously noted with respect to imposing “essential facilities” requirements on tech platforms,

[T]he essential facilities doctrine is widely criticized, by pretty much everyone. In their respected treatise, Antitrust Law, Herbert Hovenkamp and Philip Areeda have said that “the essential facility doctrine is both harmful and unnecessary and should be abandoned”; Michael Boudin has noted that the doctrine is full of “embarrassing weaknesses”; and Gregory Werden has opined that “Courts should reject the doctrine.”

Indeed, as I also noted, “the Supreme Court declined to recognize the essential facilities doctrine as a distinct rule in Trinko, where it instead characterized the exclusionary conduct in Aspen Skiing as ‘at or near the outer boundary’ of Sherman Act § 2 liability.”

In short, it’s very difficult to know when access to a firm’s internal functions might be critical to the facilitation of a market. It simply cannot be true that a firm becomes bound under onerous essential facilities requirements (or classification as a public utility) simply because other firms find it more convenient to use its services than to develop their own.

The truth of what is actually happening in these cases, however, is that third-party firms are choosing to anchor their businesses to the processes of another firm, generating an “asset specificity” problem that they then ask the government to remedy:

A content provider that makes itself dependent upon another company for distribution (or vice versa, of course) takes a significant risk. Although it may benefit from greater access to users, it places itself at the mercy of the other — or at least faces great difficulty (and great cost) adapting to unanticipated, crucial changes in distribution over which it has no control.

This is naturally a calculated risk that a firm may choose to make, but it is a risk. To pry open Google or Facebook for the benefit of competitors that choose to play to Google and Facebook’s user base, rather than opening markets of their own, punishes the large players for being successful while also rewarding behavior that shies away from innovation. Further, such a policy would punish the large platforms whenever they innovate with their services in any way that might frustrate third-party “integrators” (see, e.g., Foundem’s claims that Google’s algorithm updates meant to improve search quality for users harmed Foundem’s search rankings).  

Rather than encouraging innovation, blessing this form of asset specificity would have the perverse result of entrenching the status quo.

In all of these recommendations from Senator Warner, there is no claim that any of the targeted firms has behaved anticompetitively, merely that they are above a certain size. This is to say that, in some cases, big is bad.

Senator Warner’s policies would harm competition and innovation

As Geoffrey Manne and Gus Hurwitz have recently noted, these views run completely counter to the last half-century or more of economic and legal learning in antitrust law. From its murky, politically motivated origins through the early 1960s, when the Structure-Conduct-Performance (“SCP”) interpretive framework was ascendant, antitrust law was more or less guided by the gut feeling of regulators that big business necessarily harmed the competitive process.

Thus, at its height with SCP, “big is bad” antitrust relied on presumptions that large firms over a certain arbitrary threshold were harmful and should be subjected to more searching judicial scrutiny when merging or conducting business.

A paradigmatic example of this approach can be found in Von’s Grocery, where the Supreme Court prevented the merger of two relatively small grocery chains. Combined, the two chains would have constituted a mere 9 percent of the market, yet the Supreme Court, relying on the SCP framework’s aversion to concentration in itself, blocked the merger notwithstanding procompetitive justifications that would have allowed the combined entity to compete more effectively in a market that was coming to be dominated by large supermarkets.

As Manne and Hurwitz observe: “this decision meant breaking up a merger that did not harm consumers, on the one hand, while preventing firms from remaining competitive in an evolving market by achieving efficient scale, on the other.” And this gets to the central defect of Senator Warner’s proposals. He ties his decisions to interfere in the operations of large tech firms to their size, without regard to any demonstrable harm to consumers.

To approach antitrust this way — that is, to roll the clock back to a period before there was a well-defined and administrable standard for antitrust — is to open the door for regulation by political whim. But the value of the contemporary consumer welfare test is that it provides knowable guidance that limits both the undemocratic conduct of politically motivated enforcers as well as the opportunities for private firms to engage in regulatory capture. As Manne and Hurwitz observe:

Perhaps the greatest virtue of the consumer welfare standard is not that it is the best antitrust standard (although it is) — it’s simply that it is a standard. The story of antitrust law for most of the 20th century was one of standard-less enforcement for political ends. It was a tool by which any entrenched industry could harness the force of the state to maintain power or stifle competition.

While it is unlikely that Senator Warner intends to entrench politically powerful incumbents, or enable regulation by whim, those are the likely effects of his proposals.

Antitrust law has a rich set of tools for dealing with competitive harm. Introducing legislation to define arbitrary thresholds for limiting the potential power of firms will ultimately undermine the power of those tools and erode the welfare of consumers.