Archives For Structure–conduct–performance paradigm

Why do digital industries routinely lead to one company having a very large share of the market (at least if one defines markets narrowly)? To anyone familiar with competition policy discussions, the answer might seem obvious: network effects, scale-related economies, and other barriers to entry lead to winner-take-all dynamics in platform industries. Accordingly, it is believed that the first platform to successfully unlock a given online market enjoys a decisive first-mover advantage.

This narrative has become ubiquitous in policymaking circles. Thinking of this sort notably underpins high-profile reports on competition in digital markets (here, here, and here), as well as ensuing attempts to regulate digital platforms, such as the draft American Innovation and Choice Online Act and the EU’s Digital Markets Act.

But are network effects and the like the only way to explain why these markets look the way they do? While there is no definitive answer, scholars routinely overlook an alternative explanation that tends to undercut the narrative that tech markets have become non-contestable.

The alternative model is simple: faced with zero prices and the almost complete absence of switching costs, users have every reason to join their preferred platform. If user preferences are relatively uniform and one platform has a meaningful quality advantage, then there is every reason to expect that most consumers will join the same one—even though the market remains highly contestable. On the other side of the equation, because platforms face very few capacity constraints, there are few limits to a given platform’s growth. As will be explained throughout this piece, this intuition is as old as economics itself.
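To make this intuition concrete, consider a minimal simulation (my own illustrative sketch, not part of the original argument): prices are zero, switching is costless, and each user simply joins whichever platform appears best to them. The quality levels and the noise term below are entirely hypothetical.

```python
# A minimal sketch of the model described above (illustrative only):
# prices are zero, switching is free, and each user joins whichever platform
# they perceive as offering the highest quality. Quality levels and the noise
# term are hypothetical.
import random

def market_shares(qualities, n_users=100_000, noise=0.1):
    """Each user joins the platform with the highest perceived quality."""
    counts = [0] * len(qualities)
    for _ in range(n_users):
        perceived = [q + random.gauss(0, noise) for q in qualities]
        counts[perceived.index(max(perceived))] += 1
    return [round(c / n_users, 3) for c in counts]

# A platform with a modest quality edge captures almost the entire market...
print(market_shares([1.0, 0.7, 0.6]))
# ...but if a rival leapfrogs it on quality, users "tip" to the entrant just as fast.
print(market_shares([1.0, 1.3, 0.6]))
```

Under these assumptions, near-total concentration and near-perfect contestability coexist: the leader’s share is overwhelming, but only for as long as its quality advantage lasts.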

The Bertrand Paradox

In 1883, French mathematician Joseph Bertrand published a powerful critique of two of the most high-profile economic thinkers of his time: the late Antoine Augustin Cournot and Léon Walras (it would be another seven years before Alfred Marshall published his famous Principles of Economics).

Bertrand criticized several of Cournot and Walras’ widely accepted findings. This included Cournot’s conclusion that duopoly competition would lead to prices above marginal cost—or, in other words, that duopolies were imperfectly competitive.

By reformulating the problem slightly, Bertrand arrived at the opposite conclusion. He argued that each firm’s incentive to undercut its rival would ultimately lead to marginal cost pricing, and one seller potentially capturing the entire market:

There is a decisive objection [to Cournot’s model]: According to his hypothesis, no [supracompetitive] equilibrium is possible. There is no limit to price decreases; whatever the joint price being charged by firms, a competitor could always undercut this price and, with few exceptions, attract all consumers. If the competitor is allowed to get away with this [i.e. the rival does not react], it will double its profits.

This result is mainly driven by the assumption that, unlike in Cournot’s model, firms can immediately respond to their rival’s chosen price/quantity. In other words, Bertrand implicitly framed the competitive process as price competition, rather than quantity competition (under price competition, firms do not face any capacity constraints and they cannot commit to producing given quantities of a good):

If Cournot’s calculations mask this result, it is because of a remarkable oversight. Referring to them as D and D’, Cournot deals with the quantities sold by each of the two competitors and treats them as independent variables. He assumes that if one were to change by the will of one of the two sellers, the other one could remain fixed. The opposite is evidently true.

This later came to be known as the “Bertrand paradox”—the notion that duopoly-market configurations can produce the same outcome as perfect competition (i.e., P=MC).
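To see the mechanics behind the paradox, here is a minimal sketch (my own illustration, not Bertrand’s formalism): two firms with identical marginal costs take turns shaving the rival’s price, since whichever firm charges slightly less serves the entire market, and the process stops only when no profitable undercut remains. The starting price and the price tick are hypothetical.

```python
# A minimal sketch of Bertrand-style undercutting (illustrative only):
# two firms with identical marginal cost alternately undercut one another,
# because the lower-priced firm captures the whole market. The starting
# price and the price tick are hypothetical.

def bertrand_dynamics(marginal_cost=10.0, start_price=25.0, tick=0.01):
    prices = [start_price, start_price]  # posted prices of firm 0 and firm 1
    mover = 0
    while prices[1 - mover] - tick > marginal_cost:
        prices[mover] = prices[1 - mover] - tick  # shave the rival's price, take the market
        mover = 1 - mover                         # the rival now faces the same incentive
    return prices

print(bertrand_dynamics())  # both prices end up (approximately) at marginal cost
```

The point of the sketch is only that, absent capacity constraints, each firm’s incentive to undercut drives even a duopoly toward the competitive price.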

But while Bertrand’s critique was ostensibly directed at Cournot’s model of duopoly competition, his underlying point was much broader. Above all, Bertrand seemed preoccupied with the notion that expressing economic problems mathematically merely gives them a veneer of accuracy. In that sense, he was one of the first economists (at least to my knowledge) to argue that the choice of assumptions has a tremendous influence on the predictions of economic models, potentially rendering them unreliable:

On other occasions, Cournot introduces assumptions that shield his reasoning from criticism—scholars can always present problems in a way that suits their reasoning.

All of this is not to say that Bertrand’s predictions regarding duopoly competition necessarily hold in real-world settings; evidence from experimental settings is mixed. Instead, the point is epistemological. Bertrand’s reasoning was groundbreaking because he ventured that market structures are not the sole determinants of consumer outcomes. More broadly, he argued that assumptions regarding the competitive process hold significant sway over the results that a given model may produce (and, as a result, over normative judgements concerning the desirability of given market configurations).

The Theory of Contestable Markets

Bertrand is certainly not the only economist to have suggested market structures alone do not determine competitive outcomes. In the early 1980s, William Baumol (and various co-authors) went one step further. Baumol argued that, under certain conditions, even monopoly market structures could deliver perfectly competitive outcomes. This thesis thus rejected the Structure-Conduct-Performance (“SCP”) Paradigm that dominated policy discussions of the time.

Baumol’s main point was that industry structure is not the main driver of market “contestability,” which is the key determinant of consumer outcomes. In his words:

In the limit, when entry and exit are completely free, efficient incumbent monopolists and oligopolists may in fact be able to prevent entry. But they can do so only by behaving virtuously, that is, by offering to consumers the benefits which competition would otherwise bring. For every deviation from good behavior instantly makes them vulnerable to hit-and-run entry.

For instance, it is widely accepted that “perfect competition” leads to low prices because firms are price-takers; if one does not sell at marginal cost, it will be undercut by rivals. Observers often assume this is due to the number of independent firms on the market. Baumol suggests this is wrong. Instead, the result is driven by the sanction that firms face for deviating from competitive pricing.

In other words, numerous competitors are a sufficient, but not necessary condition for competitive pricing. Monopolies can produce the same outcome when there is a present threat of entry and an incumbent’s deviation from competitive pricing would be sanctioned. This is notably the case when there are extremely low barriers to entry.

Take this hypothetical example from the world of cryptocurrencies. It is largely irrelevant to a user whether there are few or many crypto exchanges on which to trade coins, nonfungible tokens (NFTs), etc. What does matter is that there is at least one exchange that meets one’s needs in terms of both price and quality of service. This could happen because there are many competing exchanges, or because a failure by the few (or even one) exchange that does exist to meet those needs would attract the entry of others to which users could readily switch—thus keeping the behavior of the existing exchanges in check.

This has far-reaching implications for antitrust policy, as Baumol was quick to point out:

This immediately offers what may be a new insight on antitrust policy. It tells us that a history of absence of entry in an industry and a high concentration index may be signs of virtue, not of vice. This will be true when entry costs in our sense are negligible.

Given the foregoing, Baumol surmised that industry structure must be driven by endogenous factors—such as firms’ cost structures—rather than by the intensity of the competition they face. For instance, scale economies might make monopoly (or another structure) the most efficient configuration in some industries. But so long as rivals can sanction incumbents for failing to compete, the market remains contestable. Accordingly, at least in some industries, both the most efficient and the most contestable market configuration may entail some level of concentration.

To put this last point in even more concrete terms, online platform markets may have features that make scale (and large market shares) efficient. If so, there is every reason to believe that competition could lead to more, not less, concentration. 

How Contestable Are Digital Markets?

The insights of Bertrand and Baumol have important ramifications for contemporary antitrust debates surrounding digital platforms. Indeed, it is critical to ascertain whether the (relatively) concentrated market structures we see in these industries are a sign of superior efficiency (and are consistent with potentially intense competition), or whether they are merely caused by barriers to entry.

The barrier-to-entry explanation has been repeated ad nauseam in recent scholarly reports, competition decisions, and pronouncements by legislators. There is thus little need to restate that thesis here. On the other hand, the contestability argument is almost systematically ignored.

Several factors suggest that online platform markets are far more contestable than critics routinely make them out to be.

First and foremost, consumer switching costs are extremely low for most online platforms. To cite but a few examples: Changing your default search engine requires at most a couple of clicks; joining a new social network can be done by downloading an app and importing your contacts to the app; and buying from an alternative online retailer is almost entirely frictionless, thanks to intermediaries such as PayPal.

These zero or near-zero switching costs are compounded by consumers’ ability to “multi-home.” In simple terms, joining TikTok does not require users to close their Facebook account. And the same applies to other online services. As a result, there is almost no opportunity cost to join a new platform. This further reduces the already tiny cost of switching.

Decades of app development have greatly improved the quality of applications’ graphical user interfaces (GUIs), to such an extent that costs to learn how to use a new app are mostly insignificant. Nowhere is this more apparent than for social media and sharing-economy apps (it may be less true for productivity suites that enable more complex operations). For instance, remembering a couple of intuitive swipe motions is almost all that is required to use TikTok. Likewise, ridesharing and food-delivery apps merely require users to be familiar with the general features of other map-based applications. It is almost unheard of for users to complain about usability—something that would have seemed impossible in the early 21st century, when complicated interfaces still plagued most software.

A second important argument in favor of contestability is that, by and large, online platforms face only limited capacity constraints. In other words, platforms can expand output rapidly (though not necessarily costlessly).

Perhaps the clearest example of this is the sudden rise of the Zoom service in early 2020. As a result of the COVID pandemic, Zoom went from around 10 million daily active users in early 2020 to more than 300 million by late April 2020. Despite being a relatively data-intensive service, Zoom did not struggle to meet this new demand from a more than 30-fold increase in its user base. The service never had to turn down users, reduce call quality, or significantly increase its price. In short, capacity largely followed demand for its service. Online industries thus seem closer to the Bertrand model of competition, where the best platform can almost immediately serve any consumers that demand its services.

Conclusion

Of course, none of this should be construed as a claim that online markets are perfectly contestable. The central point is, instead, that critics are too quick to assume they are not. Take the following examples.

Scholars routinely cite the putatively strong concentration of digital markets to argue that big tech firms do not face strong competition, but this is a non sequitur. As Bertrand and Baumol (and others) show, what matters is not whether digital markets are concentrated, but whether they are contestable. If a superior rival could rapidly gain user traction, this alone would discipline the behavior of incumbents.

Markets where incumbents do not face significant entry from competitors are just as consistent with vigorous competition as they are with barriers to entry. Rivals could decline to enter either because incumbents have aggressively improved their product offerings or because they are shielded by barriers to entry (as critics suppose). The former is consistent with competition, the latter with monopoly slack.

Similarly, it would be wrong to presume, as many do, that concentration in online markets is necessarily driven by network effects and other scale-related economies. As ICLE scholars have argued elsewhere (here, here and here), these forces are not nearly as decisive as critics assume (and it is debatable that they constitute barriers to entry).

Finally, and perhaps most importantly, this piece has argued that many factors could explain the relatively concentrated market structures that we see in digital industries. The absence of switching costs and capacity constraints are but two such examples. These explanations, overlooked by many observers, suggest digital markets are more contestable than is commonly perceived.

In short, critics’ failure to meaningfully grapple with these issues serves to shape the prevailing zeitgeist in tech-policy debates. Cournot and Bertrand’s intuitions about oligopoly competition may be more than a century old, but they continue to be tested empirically. It is about time those same standards were applied to tech-policy debates.

An oft-repeated claim at conferences, in the media, and among left-wing think tanks is that lax antitrust enforcement has of late led to a substantial increase in concentration in the US economy, strangling the economy, harming workers, and saddling consumers with greater markups in the process. But what if rising concentration (and the current level of antitrust enforcement) were an indication of more competition, not less?

By now the concentration-as-antitrust-bogeyman story is virtually conventional wisdom, echoed, of course, by political candidates such as Elizabeth Warren trying to cash in on the need for a government response to such dire circumstances:

In industry after industry — airlines, banking, health care, agriculture, tech — a handful of corporate giants control more and more. The big guys are locking out smaller, newer competitors. They are crushing innovation. Even if you don’t see the gears turning, this massive concentration means prices go up and quality goes down for everything from air travel to internet service.  

But the claim that lax antitrust enforcement has led to increased concentration in the US and that it has caused economic harm has been debunked several times (for some of our own debunking, see Eric Fruits’ posts here, here, and here). Or, more charitably to those who tirelessly repeat the claim as if it is “settled science,” it has been significantly called into question.

Most recently, several working papers that examine the concentration data in detail and attempt to identify the likely cause of the observed patterns show precisely the opposite relationship. The reason for increased concentration appears to be technological, not anticompetitive. And, as might be expected from that cause, its effects are beneficial. Indeed, the story is both intuitive and positive.

What’s more, while national concentration does appear to be increasing in some sectors of the economy, it’s not actually so clear that the same is true for local concentration — which is often the relevant antitrust market.

The most recent — and, I believe, most significant — corrective to the conventional story comes from economists Chang-Tai Hsieh of the University of Chicago and Esteban Rossi-Hansberg of Princeton University. As they write in a recent paper titled, “The Industrial Revolution in Services”: 

We show that new technologies have enabled firms that adopt them to scale production over a large number of establishments dispersed across space. Firms that adopt this technology grow by increasing the number of local markets that they serve, but on average are smaller in the markets that they do serve. Unlike Henry Ford’s revolution in manufacturing more than a hundred years ago when manufacturing firms grew by concentrating production in a given location, the new industrial revolution in non-traded sectors takes the form of horizontal expansion across more locations. At the same time, multi-product firms are forced to exit industries where their productivity is low or where the new technology has had no effect. Empirically we see that top firms in the overall economy are more focused and have larger market shares in their chosen sectors, but their size as a share of employment in the overall economy has not changed. (pp. 42-43) (emphasis added).

This makes perfect sense. And it has the benefit of not second-guessing structural changes made in response to technological change. Rather, it points to technological change as doing what it regularly does: improving productivity.

The implementation of new technology seems to be conferring benefits — it’s just that these benefits are not evenly distributed across all firms and industries. But the assumption that larger firms are causing harm (or even that there is any harm in the first place, whatever the cause) is unmerited. 

What the authors find is that the apparent rise in national concentration doesn’t tell the relevant story, and the data certainly aren’t consistent with assumptions that anticompetitive conduct is either a cause or a result of structural changes in the economy.

Hsieh and Rossi-Hansberg point out that increased concentration is not happening everywhere, but is being driven by just three industries:

First, we show that the phenomena of rising concentration . . . is only seen in three broad sectors – services, wholesale, and retail. . . . [T]op firms have become more efficient over time, but our evidence indicates that this is only true for top firms in these three sectors. In manufacturing, for example, concentration has fallen.

Second, rising concentration in these sectors is entirely driven by an increase [in] the number of local markets served by the top firms. (p. 4) (emphasis added).

These findings are a gloss on a (then) working paper — The Fall of the Labor Share and the Rise of Superstar Firms — by David Autor, David Dorn, Lawrence F. Katz, Christina Patterson, and John Van Reenen (now forthcoming in the QJE). Autor et al. (2019) finds that concentration is rising, and that it is the result of increased productivity:

If globalization or technological changes push sales towards the most productive firms in each industry, product market concentration will rise as industries become increasingly dominated by superstar firms, which have high markups and a low labor share of value-added.

We empirically assess seven predictions of this hypothesis: (i) industry sales will increasingly concentrate in a small number of firms; (ii) industries where concentration rises most will have the largest declines in the labor share; (iii) the fall in the labor share will be driven largely by reallocation rather than a fall in the unweighted mean labor share across all firms; (iv) the between-firm reallocation component of the fall in the labor share will be greatest in the sectors with the largest increases in market concentration; (v) the industries that are becoming more concentrated will exhibit faster growth of productivity; (vi) the aggregate markup will rise more than the typical firm’s markup; and (vii) these patterns should be observed not only in U.S. firms, but also internationally. We find support for all of these predictions. (emphasis added).

This alone is quite important (and seemingly often overlooked). Autor et al. (2019) finds that rising concentration is a result of increased productivity that weeds out less-efficient producers. This is a good thing.

But Hsieh & Rossi-Hansberg drill down into the data to find something perhaps even more significant: the rise in concentration itself is limited to just a few sectors, and, where it is observed, it is predominantly a function of more efficient firms competing in more — and more localized — markets. This means that competition is increasing, not decreasing, whether it is accompanied by an increase in concentration or not. 

No matter how many times and under how many monikers the antitrust populists try to revive it, the Structure-Conduct-Performance paradigm remains as moribund as ever. Indeed, on this point, as one of the new antitrust agonists’ own, Fiona Scott Morton, has written (along with co-authors Martin Gaynor and Steven Berry):

In short, there is no well-defined “causal effect of concentration on price,” but rather a set of hypotheses that can explain observed correlations of the joint outcomes of price, measured markups, market share, and concentration. As Bresnahan (1989) argued three decades ago, no clear interpretation of the impact of concentration is possible without a clear focus on equilibrium oligopoly demand and “supply,” where supply includes the list of the marginal cost functions of the firms and the nature of oligopoly competition. 

Some of the recent literature on concentration, profits, and markups has simply reasserted the relevance of the old-style structure-conduct-performance correlations. For economists trained in subfields outside industrial organization, such correlations can be attractive. 

Our own view, based on the well-established mainstream wisdom in the field of industrial organization for several decades, is that regressions of market outcomes on measures of industry structure like the Herfindahl-Hirschman Index should be given little weight in policy debates. Such correlations will not produce information about the causal estimates that policy demands. It is these causal relationships that will help us understand what, if anything, may be causing markups to rise. (emphasis added).

Indeed! And one reason for the enduring irrelevance of market concentration measures is well laid out in Hsieh and Rossi-Hansberg’s paper:

This evidence is consistent with our view that increasing concentration is driven by new ICT-enabled technologies that ultimately raise aggregate industry TFP. It is not consistent with the view that concentration is due to declining competition or entry barriers . . . , as these forces will result in a decline in industry employment. (pp. 4-5) (emphasis added)

The net effect is that there is essentially no change in concentration by the top firms in the economy as a whole. The “super-star” firms of today’s economy are larger in their chosen sectors and have unleashed productivity growth in these sectors, but they are not any larger as a share of the aggregate economy. (p. 5) (emphasis added)

Thus, to begin with, the claim that increased concentration leads to monopsony in labor markets (and thus unemployment) appears to be false. Hsieh and Rossi-Hansberg again:

[W]e find that total employment rises substantially in industries with rising concentration. This is true even when we look at total employment of the smaller firms in these industries. (p. 4)

[S]ectors with more top firm concentration are the ones where total industry employment (as a share of aggregate employment) has also grown. The employment share of industries with increased top firm concentration grew from 70% in 1977 to 85% in 2013. (p. 9)

Firms throughout the size distribution increase employment in sectors with increasing concentration, not only the top 10% firms in the industry, although by definition the increase is larger among the top firms. (p. 10) (emphasis added)

Again, what appears to be happening is that national-level growth in concentration is actually being driven by increased competition in certain industries at the local level:

93% of the growth in concentration comes from growth in the number of cities served by top firms, and only 7% comes from increased employment per city. . . . [A]verage employment per county and per establishment of top firms falls. So necessarily more than 100% of concentration growth has to come from the increase in the number of counties and establishments served by the top firms. (p.13)

The net effect is a decrease in the power of top firms relative to the economy as a whole, as the largest firms specialize more, and are dominant in fewer industries:

Top firms produce in more industries than the average firm, but less so in 2013 compared to 1977. The number of industries of a top 0.001% firm (relative to the average firm) fell from 35 in 1977 to 17 in 2013. The corresponding number for a top 0.01% firm is 21 industries in 1977 and 9 industries in 2013. (p. 17)

Thus, summing up, technology has led to increased productivity as well as greater specialization by large firms, especially in relatively concentrated industries (exactly the opposite of the pessimistic stories):  

[T]op firms are now more specialized, are larger in the chosen industries, and these are precisely the industries that have experienced concentration growth. (p. 18)

Unsurprisingly (except to some…), the increase in concentration in certain industries does not translate into an increase in concentration in the economy as a whole. In other words, workers can shift jobs between industries, and there is enough geographic and firm mobility to prevent monopsony. (Despite rampant assumptions that increased concentration is constraining labor competition everywhere…).

Although the employment share of top firms in an average industry has increased substantially, the employment share of the top firms in the aggregate economy has not. (p. 15)

It is also simply not clearly the case that concentration is causing prices to rise or otherwise causing any harm. As Hsieh and Rossi-Hansberg note:

[T]he magnitude of the overall trend in markups is still controversial . . . and . . . the geographic expansion of top firms leads to declines in local concentration . . . that could enhance competition. (p. 37)

Indeed, recent papers such as Traina (2018), Gutiérrez and Philippon (2017), and the IMF (2019) have found increasing markups over the last few decades but at much more moderate rates than the famous De Loecker and Eeckhout (2017) study. Other parts of the anticompetitive narrative have been challenged as well. Karabarbounis and Neiman (2018) finds that profits have increased, but are still within their historical range. Rinz (2018) shows decreased wages in concentrated markets but also points out that local concentration has been decreasing over the relevant time period.

None of this should be so surprising. Has antitrust enforcement gotten more lax, leading to greater concentration? According to Vita and Osinski (2018), not so much. And how about the stagnant rate of new firms? Are incumbent monopolists killing off new startups? The more likely — albeit mundane — explanation, according to Hopenhayn et al. (2018), is that increased average firm age is due to an aging labor force. Lastly, the paper from Hsieh and Rossi-Hansberg discussed above is only the latest in a series of papers, including Bessen (2017), Van Reenen (2018), and Autor et al. (2019), that show a rise in fixed costs due to investments in proprietary information technology, which correlates with increased concentration.

So what is the upshot of all this?

  • First, as noted, employment has not decreased because of increased concentration; quite the opposite. Employment has increased in the industries that have experienced the most concentration at the national level.
  • Second, this result suggests that the rise in concentrated industries has not led to increased market power over labor.
  • Third, concentration itself needs to be understood more precisely. It is not explained by a simple narrative that the economy as a whole has experienced a great deal of concentration and this has been detrimental for consumers and workers. Specific industries have experienced national level concentration, but simultaneously those same industries have become more specialized and expanded competition into local markets. 

Surprisingly (because their paper has been around for a while and yet this conclusion is rarely recited by advocates for more intervention — although they happily use the paper to support claims of rising concentration), Autor et al. (2019) finds the same thing:

Our formal model, detailed below, generates superstar effects from increases in the toughness of product market competition that raise the market share of the most productive firms in each sector at the expense of less productive competitors. . . . An alternative perspective on the rise of superstar firms is that they reflect a diminution of competition, due to a weakening of U.S. antitrust enforcement (Dottling, Gutierrez and Philippon, 2018). Our findings on the similarity of trends in the U.S. and Europe, where antitrust authorities have acted more aggressively on large firms (Gutierrez and Philippon, 2018), combined with the fact that the concentrating sectors appear to be growing more productive and innovative, suggests that this is unlikely to be the primary explanation, although it may be important in some specific industries (see Cooper et al, 2019, on healthcare for example). (emphasis added).

The popular narrative among Neo-Brandeisian antitrust scholars that lax antitrust enforcement has led to concentration detrimental to society is at base an empirical one. The findings of these empirical papers severely undermine the persuasiveness of that story.

[This post is the seventh in an ongoing symposium on “Should We Break Up Big Tech?” that features analysis and opinion from various perspectives.]

[This post is authored by Alec Stapp, Research Fellow at the International Center for Law & Economics]

Should we break up Microsoft? 

In all the talk of breaking up “Big Tech,” no one seems to mention the biggest tech company of them all. Microsoft’s market cap is currently higher than those of Apple, Google, Amazon, and Facebook. If big is bad, then, at the moment, Microsoft is the worst.

Apart from size, antitrust activists also claim that the structure and behavior of the Big Four — Facebook, Google, Apple, and Amazon — are why they deserve to be broken up. But they never include Microsoft, which is curious given that most of their critiques also apply to the largest tech giant:

  1. Microsoft is big (current market cap exceeds $1 trillion)
  2. Microsoft is dominant in narrowly-defined markets (e.g., desktop operating systems)
  3. Microsoft is simultaneously operating and competing on a platform (i.e., the Microsoft Store)
  4. Microsoft is a conglomerate capable of leveraging dominance from one market into another (e.g., Windows, Office 365, Azure)
  5. Microsoft has its own “kill zone” for startups (196 acquisitions since 1994)
  6. Microsoft operates a search engine that preferences its own content over third-party content (i.e., Bing)
  7. Microsoft operates a platform that moderates user-generated content (i.e., LinkedIn)

To be clear, this is not to say that an antitrust case against Microsoft is as strong as the case against the others. Rather, it is to say that the cases against the Big Four on these dimensions are as weak as the case against Microsoft, as I will show below.

Big is bad

Tim Wu published a book last year arguing for more vigorous antitrust enforcement — including against Big Tech — called “The Curse of Bigness.” As you can tell by the title, he argues, in essence, for a return to the bygone era of “big is bad” presumptions. In his book, Wu mentions “Microsoft” 29 times, but only in the context of its 1990s antitrust case. On the other hand, Wu has explicitly called for antitrust investigations of Amazon, Facebook, and Google. It’s unclear why big should be considered bad when it comes to the latter group but not when it comes to Microsoft. Maybe bigness isn’t actually a curse, after all.

As the saying goes in antitrust, “Big is not bad; big behaving badly is bad.” This aphorism arose to counter erroneous reasoning during the era of structure-conduct-performance when big was presumed to mean bad. Thanks to an improved theoretical and empirical understanding of the nature of the competitive process, there is now a consensus that firms can grow large either via superior efficiency or by engaging in anticompetitive behavior. Size alone does not tell us how a firm grew big — so it is not a relevant metric.

Dominance in narrowly-defined markets

Critics of Google say it has a monopoly on search and critics of Facebook say it has a monopoly on social networking. Microsoft is similarly dominant in at least a few narrowly-defined markets, including desktop operating systems (Windows has a 78% market share globally): 

[Chart omitted: global desktop operating-system market share. Source: StatCounter]

Microsoft is also dominant in the “professional networking platform” market after its acquisition of LinkedIn in 2016. And the legacy tech giant is still the clear leader in the “paid productivity software” market. (Microsoft’s Office 365 revenue is roughly 10x Google’s G Suite revenue).

The problem here is obvious. These are overly-narrow market definitions for conducting an antitrust analysis. Is it true that Facebook’s platforms are the only services that can connect you with your friends? Should we really restrict the productivity market to “paid”-only options (as the EU similarly did in its Android decision) when there are so many free options available? These questions are laughable. Proper market definition requires considering whether a hypothetical monopolist could profitably impose a small but significant and non-transitory increase in price (SSNIP). If not (which is likely the case in the narrow markets above), then we should employ a broader market definition in each case.
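For concreteness, one common way of operationalizing a SSNIP test is a “critical loss” calculation: a hypothetical monopolist’s price increase of t is profitable only if the share of sales it loses stays below t / (t + m), where m is the percentage margin. The sketch below is my own illustration, with purely hypothetical numbers.

```python
# A minimal sketch of the critical-loss arithmetic often used to implement
# the SSNIP test (illustrative only; all numbers are hypothetical).

def critical_loss(price_increase, margin):
    """Maximum share of sales a hypothetical monopolist can lose before a
    given price increase stops being profitable."""
    return price_increase / (price_increase + margin)

# Example: with a 40% margin, a 5% SSNIP is unprofitable if more than ~11% of
# sales would divert to products outside the candidate market, which suggests
# the candidate market has been drawn too narrowly.
print(f"{critical_loss(0.05, 0.40):.1%}")  # prints 11.1%
```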

Simultaneously operating and competing on a platform

Elizabeth Warren likes to say that if you own a platform, then you shouldn’t both be an umpire and have a team in the game. Let’s put aside the problems with that flawed analogy for now. What she means is that you shouldn’t both run the platform and sell products, services, or apps on that platform (because it’s inherently unfair to the other sellers). 

Warren’s solution to this “problem” would be to create a regulated class of businesses called “platform utilities” which are “companies with an annual global revenue of $25 billion or more and that offer to the public an online marketplace, an exchange, or a platform for connecting third parties.” Microsoft’s revenue last quarter was $32.5 billion, so it easily meets the first threshold. And Windows obviously qualifies as “a platform for connecting third parties.”

Just as in mobile operating systems, desktop operating systems are compatible with third-party applications. These third-party apps can be free (e.g., iTunes) or paid (e.g., Adobe Photoshop). Of course, Microsoft also makes apps for Windows (e.g., Word, PowerPoint, Excel, etc.). But the more you think about the technical details, the blurrier the line between the operating system and applications becomes. Is the browser an add-on to the OS or a part of it (as Microsoft Edge appears to be)? The most deeply-embedded applications in an OS are simply called “features.”

Even though Warren hasn’t explicitly mentioned that her plan would cover Microsoft, it almost certainly would. Previously, she left Apple out of the Medium post announcing her policy, only to later tell a journalist that the iPhone maker would also be prohibited from producing its own apps. But what Warren fails to include in her announcement that she would break up Apple is that trying to police the line between a first-party platform and third-party applications would be a nightmare for companies and regulators, likely leading to less innovation and higher prices for consumers (as they attempt to rebuild their previous bundles).

Leveraging dominance from one market into another

The core critique in Lina Khan’s “Amazon’s Antitrust Paradox” is that the very structure of Amazon itself is what leads to its anticompetitive behavior. Khan argues (in spite of the data) that Amazon uses profits in some lines of business to subsidize predatory pricing in other lines of business. Furthermore, she claims that Amazon uses data from its Amazon Web Services unit to spy on competitors and snuff them out before they become a threat.

Of course, this is similar to the theory of harm in Microsoft’s 1990s antitrust case: that the desktop giant was leveraging its monopoly in the operating-system market into the browser market. Why don’t we hear the same concern today about Microsoft? As with Amazon and Google, you could uncharitably describe Microsoft as extending its tentacles into as many sectors of the economy as possible. Here are some of the markets in which Microsoft competes (and note how the Big Four also compete in many of these same markets):

What these potential antitrust harms leave out are the clear consumer benefits from bundling and vertical integration. Microsoft’s relationships with customers in one market might make it the most efficient vendor in related — but separate — markets. It is unsurprising, for example, that Windows customers would also frequently be Office customers. Furthermore, the zero marginal cost nature of software makes it an ideal product for bundling, which redounds to the benefit of consumers.

The “kill zone” for startups

In a recent article for The New York Times, Tim Wu and Stuart A. Thompson criticize Facebook and Google for the number of acquisitions they have made. They point out that “Google has acquired at least 270 companies over nearly two decades” and “Facebook has acquired at least 92 companies since 2007”, arguing that allowing such a large number of acquisitions to occur is conclusive evidence of regulatory failure.

Microsoft has made 196 acquisitions since 1994, but they receive no mention in the NYT article (or in most of the discussion around supposed “kill zones”). But the acquisitions by Microsoft or Facebook or Google are, in general, not problematic. They provide a crucial channel for liquidity in the venture capital and startup communities (the other channel being IPOs). According to the latest data from Orrick and Crunchbase, between 2010 and 2018, there were 21,844 acquisitions of tech startups for a total deal value of $1.193 trillion.

By comparison, according to data compiled by Jay R. Ritter, a professor at the University of Florida, there were 331 tech IPOs for a total market capitalization of $649.6 billion over the same period. Making it harder for a startup to be acquired would not result in more venture capital investment (and therefore not in more IPOs), according to recent research by Gordon M. Phillips and Alexei Zhdanov. The researchers show that “the passage of a pro-takeover law in a country is associated with more subsequent VC deals in that country, while the enactment of a business combination antitakeover law in the U.S. has a negative effect on subsequent VC investment.”

As investor and serial entrepreneur Leonard Speiser said recently, “If the DOJ starts going after tech companies for making acquisitions, venture investors will be much less likely to invest in new startups, thereby reducing competition in a far more harmful way.” 

Search engine bias

Google is often accused of biasing its search results to favor its own products and services. The argument goes that if we broke them up, a thousand search engines would bloom and competition among them would lead to less-biased search results. While it is a very difficult — if not impossible — empirical question to determine what a “neutral” search engine would return, one attempt by Josh Wright found that “own-content bias is actually an infrequent phenomenon, and Google references its own content more favorably than other search engines far less frequently than does Bing.” 

The report goes on to note that “Google references own content in its first results position when no other engine does in just 6.7% of queries; Bing does so over twice as often (14.3%).” Arguably, users of a particular search engine might be more interested in seeing content from that company because they have a preexisting relationship. But regardless of how we interpret these results, it’s clear this is not a frequent phenomenon.

So why is Microsoft being left out of the antitrust debate now?

One potential reason why Google, Facebook, and Amazon have been singled out for criticism of practices that seem common in the tech industry (and are often pro-consumer) may be due to the prevailing business model in the journalism industry. Google and Facebook are by far the largest competitors in the digital advertising market, and Amazon is expected to be the third-largest player by next year, according to eMarketer. As Ramsi Woodcock pointed out, news publications are also competing for advertising dollars, the type of conflict of interest that usually would warrant disclosure if, say, a journalist held stock in a company they were covering.

Or perhaps Microsoft has successfully avoided receiving the same level of antitrust scrutiny as the Big Four because it is neither primarily consumer-facing like Apple or Amazon nor does it operate a platform with a significant amount of political speech via user-generated content (UGC) like Facebook or Google (YouTube). Yes, Microsoft moderates content on LinkedIn, but the public does not get outraged when deplatforming merely prevents someone from spamming their colleagues with requests “to add you to my professional network.”

Microsoft’s core areas are in the enterprise market, which allows it to sidestep the current debates about the supposed censorship of conservatives or unfair platform competition. To be clear, consumer-facing companies or platforms with user-generated content do not uniquely merit antitrust scrutiny. On the contrary, the benefits to consumers from these platforms are manifest. If this theory about why Microsoft has escaped scrutiny is correct, it means the public discussion thus far about Big Tech and antitrust has been driven by perception, not substance.


Last week, I objected to Senator Warner relying on the flawed AOL/Time Warner merger conditions as a template for tech regulatory policy, but there is a much deeper problem contained in his proposals.  Although he does not explicitly say “big is bad” when discussing competition issues, the thrust of much of what he recommends would serve to erode the power of larger firms in favor of smaller firms without offering a justification for why this would result in a superior state of affairs. And he makes these recommendations without respect to whether those firms actually engage in conduct that is harmful to consumers.

In the Data Portability section, Warner says that “As platforms grow in size and scope, network effects and lock-in effects increase; consumers face diminished incentives to contract with new providers, particularly if they have to once again provide a full set of data to access desired functions.” Thus, he recommends a data portability mandate, which would theoretically serve to benefit startups by providing them with the data that large firms possess. The necessary implication here is that it is a per se good that small firms be benefited and large firms diminished, as the proposal is not grounded in any evaluation of the competitive behavior of the firms to which such a mandate would apply.

Warner also proposes an “interoperability” requirement on “dominant platforms” (which I criticized previously) in situations where “data portability alone will not produce procompetitive outcomes.” Again, the necessary implication is that it is a per se good that established platforms share their services with startups without respect to any competitive analysis of how those firms are behaving. The goal is preemptively to “blunt their ability to leverage their dominance over one market or feature into complementary or adjacent markets or products.”

Perhaps most perniciously, Warner recommends treating large platforms as essential facilities in some circumstances. To this end he states that:

Legislation could define thresholds – for instance, user base size, market share, or level of dependence of wider ecosystems – beyond which certain core functions/platforms/apps would constitute ‘essential facilities’, requiring a platform to provide third party access on fair, reasonable and non-discriminatory (FRAND) terms and preventing platforms from engaging in self-dealing or preferential conduct.

But, as I’ve previously noted with respect to imposing “essential facilities” requirements on tech platforms,

[T]he essential facilities doctrine is widely criticized, by pretty much everyone. In their respected treatise, Antitrust Law, Herbert Hovenkamp and Philip Areeda have said that “the essential facility doctrine is both harmful and unnecessary and should be abandoned”; Michael Boudin has noted that the doctrine is full of “embarrassing weaknesses”; and Gregory Werden has opined that “Courts should reject the doctrine.”

Indeed, as I also noted, “the Supreme Court declined to recognize the essential facilities doctrine as a distinct rule in Trinko, where it instead characterized the exclusionary conduct in Aspen Skiing as ‘at or near the outer boundary’ of Sherman Act § 2 liability.”

In short, it’s very difficult to know when access to a firm’s internal functions might be critical to the facilitation of a market. It simply cannot be true that a firm becomes bound under onerous essential facilities requirements (or classification as a public utility) simply because other firms find it more convenient to use its services than to develop their own.

The truth of what is actually happening in these cases, however, is that third-party firms are choosing to anchor their business to the processes of another firm which generates an “asset specificity” problem that they then seek the government to remedy:

A content provider that makes itself dependent upon another company for distribution (or vice versa, of course) takes a significant risk. Although it may benefit from greater access to users, it places itself at the mercy of the other — or at least faces great difficulty (and great cost) adapting to unanticipated, crucial changes in distribution over which it has no control.

This is naturally a calculated risk that a firm may choose to take, but it is a risk. To pry open Google or Facebook for the benefit of competitors that choose to play to Google and Facebook’s user base, rather than opening markets of their own, punishes the large players for being successful while also rewarding behavior that shies away from innovation. Further, such a policy would punish the large platforms whenever they innovate with their services in any way that might frustrate third-party “integrators” (see, e.g., Foundem’s claims that Google’s algorithm updates meant to improve search quality for users harmed Foundem’s search rankings).

Rather than encouraging innovation, blessing this form of asset specificity would have the perverse result of entrenching the status quo.

In all of these recommendations from Senator Warner, there is no claim that any of the targeted firms will have behaved anticompetitively, but merely that they are above a certain size. This is to say that, in some cases, big is bad.

Senator Warner’s policies would harm competition and innovation

As Geoffrey Manne and Gus Hurwitz have recently noted, these views run completely counter to the last half-century or more of economic and legal learning that has occurred in antitrust law. From its murky, politically-motivated origins through the early 1960s, when the Structure-Conduct-Performance (“SCP”) interpretive framework was ascendant, antitrust law was more or less guided by the gut feeling of regulators that big business necessarily harmed the competitive process.

Thus, at its height with SCP, “big is bad” antitrust relied on presumptions that large firms over a certain arbitrary threshold were harmful and should be subjected to more searching judicial scrutiny when merging or conducting business.

A paradigmatic example of this approach can be found in Von’s Grocery, where the Supreme Court prevented the merger of two relatively small grocery chains. Combined, the two chains would have constituted a mere 9 percent of the market, yet the Supreme Court, relying on the SCP framework’s aversion to concentration as such, prevented the merger notwithstanding procompetitive justifications that would have allowed the combined entity to compete more effectively in a market that was coming to be dominated by large supermarkets.

As Manne and Hurwitz observe: “this decision meant breaking up a merger that did not harm consumers, on the one hand, while preventing firms from remaining competitive in an evolving market by achieving efficient scale, on the other.” And this gets to the central defect of Senator Warner’s proposals. He ties his decisions to interfere in the operations of large tech firms to their size without respect to any demonstrable harm to consumers.

To approach antitrust this way — that is, to roll the clock back to a period before there was a well-defined and administrable standard for antitrust — is to open the door for regulation by political whim. But the value of the contemporary consumer welfare test is that it provides knowable guidance that limits both the undemocratic conduct of politically motivated enforcers as well as the opportunities for private firms to engage in regulatory capture. As Manne and Hurwitz observe:

Perhaps the greatest virtue of the consumer welfare standard is not that it is the best antitrust standard (although it is) — it’s simply that it is a standard. The story of antitrust law for most of the 20th century was one of standard-less enforcement for political ends. It was a tool by which any entrenched industry could harness the force of the state to maintain power or stifle competition.

While it is unlikely that Senator Warner intends to entrench politically powerful incumbents, or enable regulation by whim, those are the likely effects of his proposals.

Antitrust law has a rich set of tools for dealing with competitive harm. Introducing legislation to define arbitrary thresholds for limiting the potential power of firms will ultimately undermine the power of those tools and erode the welfare of consumers.

 

What to make of Wednesday’s decision by the European Commission alleging that Google has engaged in anticompetitive behavior? In this post, I contrast the European Commission’s (EC) approach to competition policy with US antitrust, briefly explore the history of smartphones and then discuss the ruling.

Asked about the EC’s decision the day it was announced, FTC Chairman Joseph Simons noted that, while the market is concentrated, Apple and Google “compete pretty heavily against each other” with their mobile operating systems, in stark contrast to the way the EC defined the market. Simons also stressed that for the FTC what matters is not the structure of the market per se but whether or not there is harm to the consumer. This again contrasts with the European Commission’s approach, which does not require harm to consumers. As Simons put it:

Once they [the European Commission] find that a company is dominant… that imposes upon the company kind of like a fairness obligation irrespective of what the effect is on the consumer. Our regulatory… our antitrust regime requires that there be a harm to consumer welfare — so the consumer has to be injured — so the two tests are a little bit different.

Indeed, and as the history below shows, the popularity of Apple’s iOS and Google’s Android operating systems arose because they were superior products — not because of anticompetitive conduct on the part of either Apple or Google. On the face of it, the conduct of both Apple and Google has led to consumer benefits, not harms. So, from the perspective of U.S. antitrust authorities, there is no reason to take action.

Moreover, there is a danger that by taking action as the EU has done, competition and innovation will be undermined — which would be a perverse outcome indeed. These concerns were reflected in a statement by Senator Mike Lee (R-UT):

Today’s decision by the European Commission to fine Google over $5 billion and require significant changes to its business model to satisfy EC bureaucrats has the potential to undermine competition and innovation in the United States,” Sen. Lee said. “Moreover, the decision further demonstrates the different approaches to competition policy between U.S. and EC antitrust enforcers. As discussed at the hearing held last December before the Senate’s Subcommittee on Antitrust, Competition Policy & Consumer Rights, U.S. antitrust agencies analyze business practices based on the consumer welfare standard. This analytical framework seeks to protect consumers rather than competitors. A competitive marketplace requires strong antitrust enforcement. However, appropriate competition policy should serve the interests of consumers and not be used as a vehicle by competitors to punish their successful rivals.

Ironically, the fundamental basis for the Commission’s decision is an analytical framework developed by economists at Harvard in the 1950s, which presumes that the structure of a market determines the conduct of the participants, which in turn presumptively affects outcomes for consumers. This “structure-conduct-performance” paradigm has been challenged both theoretically and empirically (and by “challenged,” I mean “demolished”).

Maintaining, as EC Commissioner Vestager has, that “What would serve competition is to have more players,” is to adopt a presumption regarding competition rooted in the structure of the market, without sufficient attention to the facts on the ground. As French economist Jean Tirole noted in his Nobel Prize lecture:

Economists accordingly have advocated a case-by-case or “rule of reason” approach to antitrust, away from rigid “per se” rules (which mechanically either allow or prohibit certain behaviors, ranging from price-fixing agreements to resale price maintenance). The economists’ pragmatic message however comes with a double social responsibility. First, economists must offer a rigorous analysis of how markets work, taking into account both the specificities of particular industries and what regulators do and do not know….

Second, economists must participate in the policy debate…. But of course, the responsibility here goes both ways. Policymakers and the media must also be willing to listen to economists.

In good Tirolean fashion, we begin with an analysis of how the market for smartphones developed. What quickly emerges is that the structure of the market is a function of intense competition, not its absence. And, by extension, mandating a different structure will likely impede competition, or, at the very least, will not likely contribute to it.

A brief history of smartphone competition

In 2006, Nokia’s N70 became the first smartphone to sell more than a million units. It was a beautiful device, with a simple touch screen interface and real push buttons for numbers. The following year, Apple released its first iPhone. It sold 7 million units — about the same as Nokia’s N95 and slightly less than LG’s Shine. Not bad, but paltry compared to the sales of Nokia’s 1200 series phones, which had combined sales of over 250 million that year — about twice the total of all smartphone sales in 2007.

By 2017, smartphones had come to dominate the market, with total sales of over 1.5 billion. At the same time, the structure of the market has changed dramatically. In the first quarter of 2018, Apple’s iPhone X and iPhone 8 were the two best-selling smartphones in the world. In total, Apple shipped just over 52 million phones, accounting for 14.5% of the global market. Samsung, which has a wider range of devices, sold even more: 78 million phones, or 21.7% of the market. At third and fourth place were Huawei (11%) and Xiaomi (7.5%). Nokia and LG didn’t even make it into the top 10, with market shares of only 3% and 1% respectively.

Several factors have driven this highly dynamic market. Dramatic improvements in cellular data networks have played a role. But arguably of greater importance has been the development of software that offers consumers an intuitive and rewarding experience.

Apple’s iOS and Google’s Android operating systems have proven to be enormously popular among both users and app developers. This has generated synergies — or what economists call network externalities — as more apps have been developed, more people have been attracted to the ecosystem, and vice versa, leading to a virtuous circle that benefits both users and app developers.
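The feedback loop described above can be made concrete with a minimal toy sketch (the growth parameters below are arbitrary assumptions chosen purely for illustration, not estimates of any actual platform): each period, the app catalogue pulls in more users, and the larger installed base pulls in more developers, so both sides compound.

```python
# Toy illustration of the cross-side feedback ("virtuous circle") described
# above. All numbers are arbitrary assumptions for illustration only.
def virtuous_circle(periods=10, users=1.0, apps=1.0,
                    pull_of_apps=0.3, pull_of_users=0.2):
    for t in range(1, periods + 1):
        users += pull_of_apps * apps    # users drawn in by the app catalogue
        apps += pull_of_users * users   # developers drawn in by the user base
        print(f"period {t:2d}: users={users:6.1f}, apps={apps:6.1f}")

virtuous_circle()
```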

By contrast, Nokia’s early smartphones, including the N70 and N95, ran Symbian, the operating system developed for Psion’s handheld devices, which had a clunkier user interface and was more difficult to code for — so it was less attractive to both users and developers. In addition, Symbian lacked an effective means of solving the problem of fragmentation of the operating system across different devices, which made it difficult for developers to create apps that ran across the ecosystem — something both Apple (through its closed system) and Google (through agreements with carriers) were able to address. Meanwhile, Java’s MIDP (used in LG’s Shine) and its successor, J2ME, imposed restrictions on developers (such as prohibiting access to files, hardware, and network connections) that seem to have made them less attractive than Android.

The relative superiority of their operating systems enabled Apple and the manufacturers of Android-based phones to steal a march on the early leaders in the smartphone revolution.

The fact that Google allows smartphone manufacturers to install Android for free, distributes Google Play and other apps in a free bundle, and pays such manufacturers for preferential treatment for Google Search, has also kept the cost of Android-based smartphones down. As a result, Android phones are the cheapest on the market, providing a powerful experience for as little as $50. It is reasonable to conclude from this that innovation, driven by fierce competition, has led to devices, operating systems, and apps that provide enormous benefits to consumers.

The Commission decision would harm device manufacturers, app developers and consumers

The EC’s decision seems to disregard the history of smartphone innovation and competition and their ongoing consequences. As Dirk Auer explains, the Open Handset Alliance (OHA) was created specifically to offer an effective alternative to Apple’s iPhone — and it worked. Indeed, it worked so spectacularly that Android is installed on about 80% of all new phones. This success was the result of several factors that the Commission now seeks to undermine:

First, in order to maintain order within the Android universe, and thereby ensure that apps developed for Android would function on the vast majority of Android devices, Google and the OHA sought to limit the extent to which Android “forks” could be created. (Apple didn’t face this problem because its source code is proprietary, so it cannot be modified by third-party developers.) One way Google does this is by imposing restrictions on the licensing of its proprietary apps, such as the Google Play store (a repository of apps, similar to Apple’s App Store).

Device manufacturers that don’t conform to these restrictions may still build devices with their forked version of Android — but without those Google apps. Indeed, Amazon chose to develop a non-conforming version of Android and built its own app repository for its Fire devices (though it is still possible to add the Google Play Store). That strategy seems to be working for Amazon in the tablet market; in 2017 it rose past Samsung to become the second-biggest manufacturer of tablets worldwide, after Apple.

Second, in order to be able to offer Android for free to smartphone manufacturers, Google sought to develop unique revenue streams (because, although the software is offered for free, it turns out that software developers generally don’t work for free). The main way Google did this was by requiring manufacturers that choose to install Google Play also to install its browser (Chrome) and search tools, which generate revenue from advertising. At the same time, Google kept its platform open by permitting preloads of rivals’ apps and creating a marketplace where rivals can also reach scale. Mozilla’s Firefox browser, for example, has been downloaded over 100 million times on Android.

The importance of these factors to the success of Android is acknowledged by the EC. But instead of treating them as legitimate business practices that enabled the development of high-quality, low-cost smartphones and a universe of apps that benefits billions of people, the Commission simply asserts that they are harmful, anticompetitive practices.

For example, the Commission asserts that

In order to be able to pre-install on their devices Google’s proprietary apps, including the Play Store and Google Search, manufacturers had to commit not to develop or sell even a single device running on an Android fork. The Commission found that this conduct was abusive as of 2011, which is the date Google became dominant in the market for app stores for the Android mobile operating system.

This is simply absurd, to say nothing of ahistorical. As noted, the restrictions on Android forks play an important role in maintaining the coherency of the Android ecosystem. If device manufacturers were able to freely install Google apps (and other apps via the Play Store) on devices running problematic Android forks that were unable to run the apps properly, consumers — and app developers — would be frustrated, Google’s brand would suffer, and the value of the ecosystem would be diminished. Extending this restriction to all devices produced by a specific manufacturer, regardless of whether they come with Google apps preinstalled, reinforces the importance of the prohibition to maintaining the coherency of the ecosystem.

It is ridiculous to say that something (efforts to rein in Android forking) that made perfect sense until 2011 and that was central to the eventual success of Android suddenly becomes “abusive” precisely because of that success — particularly when the pre-2011 efforts were often viewed as insufficient and unsuccessful (a January 2012 Guardian Technology Blog post, “How Google has lost control of Android,” sums it up nicely).

Meanwhile, if Google is unable to tie pre-installation of its search and browser apps to the installation of its app store, then it will have less financial incentive to continue to maintain the Android ecosystem. Or, more likely, it will have to find other ways to generate revenue from the sale of devices in the EU — such as charging device manufacturers for Android or Google Play. The result is that consumers will be harmed, either because the ecosystem will be degraded, or because smartphones will become more expensive.

The troubling absence of Apple from the Commission’s decision

In addition, the EC’s decision is troublesome in other ways. First, consider its definition of the market. The ruling asserts that “Through its control over Android, Google is dominant in the worldwide market (excluding China) for licensable smart mobile operating systems, with a market share of more than 95%.” But “licensable smart mobile operating systems” is a very narrow definition, as it necessarily excludes operating systems that are not licensable — such as Apple’s iOS and RIM’s BlackBerry OS. Since Apple has nearly 25% of the market share of smartphones in Europe, the European Commission has — through its definition of the market — presumed away the primary source of effective competition. As Pinar Akman has noted:

How can Apple compete with Google in the market as defined by the Commission when Apple allows only itself to use its operating system only on devices that Apple itself manufactures?

The EU then invents a series of claims regarding the lack of competition with Apple:

  • end user purchasing decisions are influenced by a variety of factors (such as hardware features or device brand), which are independent from the mobile operating system;

It is not obvious that this is evidence of a lack of competition. A better explanation is that the EU’s narrow definition of the market is defective. In fact, one could easily draw the opposite conclusion from that drawn by the Commission: the fact that purchasing decisions are driven by various factors suggests that there is substantial competition, with phone manufacturers seeking to design phones that offer a range of features, on a number of dimensions, to best capture diverse consumer preferences. They are able to do this in large part precisely because consumers are able to rely upon a generally similar operating system and continued access to the apps that they have downloaded. As Tim Cook likes to remind his investors, Apple is quite successful at getting “Android switchers” to move to iOS.

 

  • Apple devices are typically priced higher than Android devices and may therefore not be accessible to a large part of the Android device user base;

 

And yet, in the first quarter of 2018, Apple phones accounted for five of the top ten selling smartphones worldwide. Meanwhile, several competing phones, including the fifth and sixth best-sellers, Samsung’s Galaxy S9 and S9+, sell for similar prices to the most expensive iPhones. And a refurbished iPhone 6 can be had for less than $150.

 

  • Android device users face switching costs when switching to Apple devices, such as losing their apps, data and contacts, and having to learn how to use a new operating system;

 

This is, of course, true for any system switch. And yet the growing market share of Apple phones suggests that some users are willing to part with those sunk costs. Moreover, the increasing predominance of cloud-based and cross-platform apps, as well as Apple’s own “Move to iOS” Android app (which facilitates the transfer of users’ data from Android to iOS), means that the costs of switching border on trivial. As mentioned above, Tim Cook certainly believes in “Android switchers.”

 

  • even if end users were to switch from Android to Apple devices, this would have limited impact on Google’s core business. That’s because Google Search is set as the default search engine on Apple devices and Apple users are therefore likely to continue using Google Search for their queries.

 

This is perhaps the most bizarre objection of them all. The fact that Apple chooses to install Google search as the default demonstrates that consumers prefer that system over others. Indeed, this highlights a fundamental problem with the Commission’s own rationale. As Akman notes:

It is interesting that the case appears to concern a dominant undertaking leveraging its dominance from a market in which it is dominant (Google Play Store) into another market in which it is also dominant (internet search). As far as this author is aware, most (if not all?) cases of tying in the EU to date concerned tying where the dominant undertaking leveraged its dominance in one market to distort or eliminate competition in an otherwise competitive market.

Conclusion

As the foregoing demonstrates, the EC’s decision is based on a fundamental misunderstanding of the nature and evolution of the market for smartphones and associated applications. The statement by Commissioner Vestager quoted above — that “What would serve competition is to have more players” — betrays this misunderstanding and highlights the erroneous assumptions underpinning the Commission’s analysis, which is wedded to a theory of market competition that was long ago thrown out by economists.

And, thankfully, it appears that the FTC Chairman is aware of at least some of the flaws in the EC’s conclusions.

Google will undoubtedly appeal the Commission’s decision. For the sake of the millions of European consumers who rely on Android-based phones and the millions of software developers who provide Android apps, let’s hope it succeeds.

Today would have been Henry Manne’s 90th birthday. When he passed away in 2015 he left behind an immense and impressive legacy. In 1991, at the inaugural meeting of the American Law & Economics Association (ALEA), Manne was named a Life Member of ALEA and, along with Nobel Laureate Ronald Coase and federal appeals court judges Richard Posner and Guido Calabresi, one of the four Founders of Law and Economics. The organization I founded, the International Center for Law & Economics, is dedicated to his memory, along with that of his great friend and mentor, UCLA economist Armen Alchian.

Manne is best known for his work in corporate governance and securities law and regulation, of course. But sometimes forgotten is that his work on the market for corporate control was motivated by concerns about analytical flaws in merger enforcement. As former FTC commissioners Maureen Ohlhausen and Joshua Wright noted in a 2015 dissenting statement:

The notion that the threat of takeover would induce current managers to improve firm performance to the benefit of shareholders was first developed by Henry Manne. Manne’s pathbreaking work on the market for corporate control arose out of a concern that antitrust constraints on horizontal mergers would distort its functioning. See Henry G. Manne, Mergers and the Market for Corporate Control, 73 J. POL. ECON. 110 (1965).

But Manne’s focus on antitrust didn’t end in 1965. Moreover, throughout his life he was a staunch critic of misguided efforts to expand the power of government, especially when these efforts claimed to have their roots in economic reasoning — which, invariably, was hopelessly flawed. As his obituary notes:

In his teaching, his academic writing, his frequent op-eds and essays, and his work with organizations like the Cato Institute, the Liberty Fund, the Institute for Humane Studies, and the Mont Pèlerin Society, among others, Manne advocated tirelessly for a clearer understanding of the power of markets and competition and the importance of limited government and economically sensible regulation.

Thus it came to be, in 1974, that Manne was called to testify before the Senate Judiciary Committee, Subcommittee on Antitrust and Monopoly, on Michigan Senator Philip A. Hart’s proposed Industrial Reorganization Act. His testimony is a tour de force, and a prescient rejoinder to the faddish advocates of today’s “hipster antitrust”— many of whom hearken longingly back to the antitrust of the 1960s and its misguided “gurus.”

Henry Manne’s trenchant testimony critiquing the Industrial Reorganization Act and its (ostensible) underpinnings is reprinted in full in this newly released ICLE white paper (with introductory material by Geoffrey Manne):

Henry G. Manne: Testimony on the Proposed Industrial Reorganization Act of 1973 — What’s Hip (in Antitrust) Today Should Stay Passé

Sen. Hart proposed the Industrial Reorganization Act in order to address perceived problems arising from industrial concentration. The bill was rooted in the belief that industry concentration led inexorably to monopoly power; that monopoly power, however obtained, posed an inexorable threat to freedom and prosperity; and that the antitrust laws (i.e., the Sherman and Clayton Acts) were insufficient to address the purported problems.

That sentiment — rooted in the reflexive application of the (largely discredited) structure-conduct-performance (SCP) paradigm — had already become largely passé among economists in the 70s, but it has resurfaced today as the asserted justification for similar (although less onerous) antitrust reform legislation and the general approach to antitrust analysis commonly known as “hipster antitrust.”

The critiques leveled against the asserted economic underpinnings of efforts like the Industrial Reorganization Act are as relevant today as they were then. As Henry Manne notes in his testimony:

To be successful in this stated aim [“getting the government out of the market”] the following dreams would have to come true: The members of both the special commission and the court established by the bill would have to be satisfied merely to complete their assigned task and then abdicate their tremendous power and authority; they would have to know how to satisfactorily define and identify the limits of the industries to be restructured; the Government’s regulation would not sacrifice significant efficiencies or economies of scale; and the incentive for new firms to enter an industry would not be diminished by the threat of a punitive response to success.

The lessons of history, economic theory, and practical politics argue overwhelmingly against every one of these assumptions.

Both the subject matter of and impetus for the proposed bill (as well as Manne’s testimony explaining its economic and political failings) are eerily familiar. The preamble to the Industrial Reorganization Act asserts that

competition… preserves a democratic society, and provides an opportunity for a more equitable distribution of wealth while avoiding the undue concentration of economic, social, and political power; [and] the decline of competition in industries with oligopoly or monopoly power has contributed to unemployment, inflation, inefficiency, an underutilization of economic capacity, and the decline of exports….

The echoes in today’s efforts to rein in corporate power by adopting structural presumptions are unmistakable. Compare, for example, this language from Sen. Klobuchar’s Consolidation Prevention and Competition Promotion Act of 2017:

[C]oncentration that leads to market power and anticompetitive conduct makes it more difficult for people in the United States to start their own businesses, depresses wages, and increases economic inequality;

undue market concentration also contributes to the consolidation of political power, undermining the health of democracy in the United States; [and]

the anticompetitive effects of market power created by concentration include higher prices, lower quality, significantly less choice, reduced innovation, foreclosure of competitors, increased entry barriers, and monopsony power.

Remarkably, Sen. Hart introduced his bill as “an alternative to government regulation and control.” Somehow, it was the antithesis of “government control” to introduce legislation that, in Sen. Hart’s words,

involves changing the life styles of many of our largest corporations, even to the point of restructuring whole industries. It involves positive government action, not to control industry but to restore competition and freedom of enterprise in the economy

Like today’s advocates of increased government intervention to design the structure of the economy, Sen. Hart sought — without a trace of irony — to “cure” the problem of politicized, ineffective enforcement by doubling down on the power of the enforcers.

Henry Manne was having none of it. As he pointedly notes in his testimony, the worst problems of monopoly power are of the government’s own making. The real threat to democracy, freedom, and prosperity is the political power amassed in the bureaucratic apparatus that frequently confers monopoly, at least as much as the monopoly power it spawns:

[I]t takes two to make that bargain [political protection and subsidies in exchange for lobbying]. And as we look around at various industries we are constrained to ask who has not done this. And more to the point, who has not succeeded?

It is unhappily almost impossible to name a significant industry in the United States that has not gained some degree of protection from the rigors of competition from Federal, State or local governments.

* * *

But the solution to inefficiencies created by Government controls cannot lie in still more controls. The politically responsible task ahead for Congress is to dismantle our existing regulatory monster before it strangles us.

We have spawned a gigantic bureaucracy whose own political power threatens the democratic legitimacy of government.

We are rapidly moving toward the worst features of a centrally planned economy with none of the redeeming political, economic, or ethical features usually claimed for such systems.

The new white paper includes Manne’s testimony in full, including his exchange with Sen. Hart and committee staffers following his prepared remarks.

It is, sadly, nearly as germane today as it was then.

One final note: The subtitle for the paper is a reference to the song “What Is Hip?” by Tower of Power. Its lyrics are decidedly apt:

You done went and found you a guru,

In your effort to find you a new you,

And maybe even managed

To raise your conscious level.

While you’re striving to find the right road,

There’s one thing you should know:

What’s hip today

Might become passé.

— Tower of Power, What Is Hip? (Emilio Castillo, John David Garibaldi & Stephen M. Kupka, What Is Hip? (Bob-A-Lew Songs 1973), from the album TOWER OF POWER (Warner Bros. 1973))


There are some who view a host of social ills allegedly related to the large size of firms like Amazon as an occasion to call for the company’s breakup. And, unfortunately, these critics find an unlikely ally in President Trump, whose tweet storms claim that tech platforms are too big and extract unfair rents at the expense of small businesses. But these critics are wrong: Amazon is not a dangerous monopoly, and it certainly should not be broken up.

Of course, no one really spells out what it means for these companies to be “too big.” Even Barry Lynn, a champion of the neo-Brandeisian antitrust movement, has shied away from specifics. The best that emerges when probing his writings is that he favors something like a return to Joe Bain’s “Structure-Conduct-Performance” paradigm (but even here, the details are fuzzy).

The reality of Amazon’s impact on the market is quite different from that asserted by its critics. Amazon has had decades in which to execute the nefarious scheme its critics fear: suddenly raising prices and reaping the benefits of anticompetitive behavior. Yet it keeps putting downward pressure on prices in a way that seems to be commoditizing goods instead of building anticompetitive moats.

Amazon Does Not Anticompetitively Exercise Market Power

Twitter rants aside, more serious attempts to attack Amazon on antitrust grounds argue that it is engaging in pricing that is “predatory.” But “predatory pricing” requires a specific demonstration of factors — which, to date, have not been demonstrated — in order to justify legal action. Absent a showing of these factors, it has long been understood that seemingly “predatory” conduct is unlikely to harm consumers and often actually benefits consumers.

One important requirement that has gone unsatisfied is that a firm engaging in predatory pricing must have market power. Contrary to common characterizations of Amazon as a retail monopolist, its market power is less than it seems. By no means does it control retail in general. Rather, less than half of all online commerce (44%) takes place on its platform (and that number represents only 4% of total US retail commerce). Of that 44 percent, a significant portion is attributable to the merchants who use Amazon as a platform for their own online retail sales. Rather than abusing a monopoly market position to predatorily harm its retail competitors, at worst Amazon has created a retail business model that puts pressure on other firms to offer more convenience and lower prices to their customers. This is what we want and expect of competitive markets.
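For context, here is the rough arithmetic implied by those two percentages (a back-of-the-envelope sketch; the only inputs are the figures quoted above, and the variable names are mine):

```python
# Rough arithmetic behind the market-share figures cited above.
amazon_share_of_online = 0.44  # Amazon's share of US online commerce
amazon_share_of_retail = 0.04  # Amazon's share of total US retail commerce

# Implied share of all US retail commerce that takes place online at all
online_share_of_retail = amazon_share_of_retail / amazon_share_of_online
print(f"Online commerce is roughly {online_share_of_retail:.1%} of total US retail")
# ~9.1% -- whether Amazon looks dominant depends heavily on whether the
# relevant market is "online commerce" or retail as a whole.
```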

The claims leveled at Amazon are the intellectual kin of those made against Walmart during its ascendancy, when it was accused of destroying Main Street throughout the nation. In 1993, it was feared that Walmart’s quest to vertically integrate its offerings through Sam’s Club warehouse operations meant that “[r]etailers could simply bypass their distributors in favor of Sam’s — and Sam’s could take revenues from local merchants on two levels: as a supplier at the wholesale level, and as a competitor at retail.” This is a strikingly similar accusation to those leveled against Amazon’s use of its Seller Marketplace to aggregate smaller retailers on its platform.

But, just as in 1993 with Walmart, and now with Amazon, the basic fact remains that consumer preferences shift. Firms need to alter their behavior to satisfy their customers, not pretend they can change consumer preferences to suit their own needs. Preferring small, local retailers to Amazon or Walmart is a decision for individual consumers interacting in their communities, not for federal officials figuring out how best to pattern the economy.

All of this is not to say that Amazon is not large, or important, or that, as a consequence of its success, it does not exert influence over the markets in which it operates. But having influence through success is not the same as anticompetitively asserting market power.

Other criticisms of Amazon focus on its conduct in specific vertical markets in which it does have more significant market share. For instance, a UK Liberal Democratic leader recently claimed that “[j]ust as Standard Oil once cornered 85% of the refined oil market, today… Amazon accounts for 75% of ebook sales … .”

The problem with this concern is that Amazon’s conduct in the ebook market has had, on net, pro-competitive, not anti-competitive, effects. Amazon’s behavior in the ebook market has actually increased demand for books overall (and expanded output), increased the amount that consumers read, and decreased the price of these books. Amazon is now even opening physical bookstores. Lina Khan, in her widely cited article last year, made much hay of the claim that this was all part of a grand strategy to predatorily push competitors out of the market:

The fact that Amazon has been willing to forego profits for growth undercuts a central premise of contemporary predatory pricing doctrine, which assumes that predation is irrational precisely because firms prioritize profits over growth. In this way, Amazon’s strategy has enabled it to use predatory pricing tactics without triggering the scrutiny of predatory pricing laws.

But it’s hard to allege predation in a market when, over the past twenty years, Amazon has consistently expanded output and lowered overall prices in the book market. Courts and lawmakers have sought to craft laws that encourage firms to provide consumers with more choices at lower prices — a feat that Amazon repeatedly accomplishes. To describe this conduct as anticompetitive is to demand a legal standard that is at odds with the goal of benefiting consumers. It is to claim that Amazon has a contradictory duty both to benefit consumers and its shareholders, and to make sure that all of its less successful competitors stay in business.

But far from creating a monopoly, the empirical reality appears to be that Amazon is driving categories of goods, like books, closer to the textbook model of commodities in a perfectly competitive market. Hardly an antitrust violation.

Amazon Should Not Be Broken Up

“Big is bad” may roll off the tongue, but, as a guiding ethic, it makes for terrible public policy. Amazon’s size and success are a direct result of its ability to enter relevant markets and to innovate. To break up Amazon, or any other large firm, is to punish it for serving the needs of its consumers.

None of this is to say that large firms are incapable of causing harm or acting anticompetitively. But we should expect calls for dramatic regulatory intervention — especially from those in a position to influence regulatory or market reactions to such calls — to be supported by substantial factual evidence and sound legal and economic theory.

This tendency to go after large players is nothing new. As noted above, Walmart triggered many similar concerns thirty years ago. Thinking about Walmart then, pundits feared that direct competition with Walmart was fruitless:

In the spring of 1992 Ken Stone came to Maine to address merchant groups from towns in the path of the Wal-Mart advance. His advice was simple and direct: don’t compete directly with Wal-Mart; specialize and carry harder-to-get and better-quality products; emphasize customer service; extend your hours; advertise more — not just your products but your business — and perhaps most pertinent of all to this group of Yankee individualists, work together.

And today, some think it would be similarly pointless to compete with Amazon:

Concentration means it is much harder for someone to start a new business that might, for example, try to take advantage of the cheap housing in Minneapolis. Why bother when you know that if you challenge Amazon, they will simply dump your product below cost and drive you out of business?

The interesting thing to note, of course, is that Walmart is now desperately trying to compete with Amazon. But despite being very successful in its own right, and having strong revenues, Walmart doesn’t seem able to keep up.

Some small businesses will close as new business models emerge and consumer preferences shift. This is to be expected in a market driven by creative destruction. Once upon a time Walmart changed retail and improved the lives of many Americans. If our lawmakers can resist the urge to intervene without real evidence of harm, Amazon just might do the same.

Truth on the Market is pleased to announce its next blog symposium:

Agricultural and Biotech Mergers: Implications for Antitrust Law and Economics in Innovative Industries

March 30 & 31, 2017

Earlier this week the European Commission cleared the merger of Dow and DuPont, subject to conditions including divestiture of DuPont’s “global R&D organisation.” As the Commission noted:

The Commission had concerns that the merger as notified would have reduced competition on price and choice in a number of markets for existing pesticides. Furthermore, the merger would have reduced innovation. Innovation, both to improve existing products and to develop new active ingredients, is a key element of competition between companies in the pest control industry, where only five players are globally active throughout the entire research & development (R&D) process.

In addition to the traditional focus on price effects, the merger’s presumed effect on innovation loomed large in the EC’s consideration of the Dow/DuPont merger — as it is sure to in its consideration of the other two pending mergers in the agricultural biotech and chemicals industries between Bayer and Monsanto and ChemChina and Syngenta. Innovation effects are sure to take center stage in the US reviews of the mergers, as well.

What is less clear is exactly how antitrust agencies evaluate — and how they should evaluate — mergers like these in rapidly evolving, high-tech industries.

These proposed mergers present a host of fascinating and important issues, many of which go to the core of modern merger enforcement — and antitrust law and economics more generally. Among other things, they raise issues of:

  • The incorporation of innovation effects in antitrust analysis;
  • The relationship between technological and organizational change;
  • The role of non-economic considerations in merger review;
  • The continued relevance (or irrelevance) of the Structure-Conduct-Performance paradigm;
  • Market definition in high-tech markets; and
  • The patent-antitrust interface.

Beginning on March 30, Truth on the Market and the International Center for Law & Economics will host a blog symposium discussing how some of these issues apply to these mergers per se, as well as the state of antitrust law and economics in innovative-industry mergers more broadly.

As in the past (see examples of previous TOTM blog symposia here), we’ve lined up an outstanding and diverse group of scholars to discuss these issues:

  • Allen Gibby, Senior Fellow for Law & Economics, International Center for Law & Economics
  • Shubha Ghosh, Crandall Melvin Professor of Law and Director of the Technology Commercialization Law Program, Syracuse University College of Law
  • Ioannis Lianos,  Chair of Global Competition Law and Public Policy, Faculty of Laws, University College London
  • John E. Lopatka (tent.), A. Robert Noll Distinguished Professor of Law, Penn State Law
  • Geoffrey A. Manne, Executive Director, International Center for Law & Economics
  • Diana L. Moss, President, American Antitrust Institute
  • Nicolas Petit, Professor of Law, Faculty of Law, and Co-director, Liege Competition and Innovation Institute, University of Liege
  • Levi A. Russell, Assistant Professor, Agricultural & Applied Economics, University of Georgia
  • Joanna M. Shepherd, Professor of Law, Emory University School of Law
  • Michael Sykuta, Associate Professor, Agricultural and Applied Economics, and Director, Contracting Organizations Research Institute, University of Missouri

Initial contributions to the symposium will appear periodically on the 30th and 31st, and the discussion will continue with responsive posts (if any) next week. We hope to generate a lively discussion, and readers are invited to contribute their own thoughts in comments to the participants’ posts.

The symposium posts will be collected here.

We hope you’ll join us!

Co-authored with Berin Szoka

In the past two weeks, Members of Congress from both parties have penned scathing letters to the FTC warning of the consequences (both to consumers and the agency itself) if the Commission sues Google not under traditional antitrust law, but instead by alleging unfair competition under Section 5 of the FTC Act. The FTC is rumored to be considering such a suit, and FTC Chairman Jon Leibowitz and Republican Commissioner Tom Rosch have expressed a desire to litigate such a so-called “pure” Section 5 antitrust case — one not accompanied by a cause of action under the Sherman Act. Unfortunately for the Commissioners, no appellate court has upheld such an action since the 1960s.

This brewing standoff is reminiscent of a similar contest between Congress and the FTC over the Commission’s aggressive use of Section 5 in consumer protection cases in the 1970s. As Howard Beales recounts, the FTC took an expansive view of its authority and failed to produce guidelines or limiting principles to guide its growing enforcement against “unfair” practices — just as today it offers no limiting principles or guidelines for antitrust enforcement under the Act. Only under heavy pressure from Congress, including a brief shutdown of the agency (and significant public criticism for becoming the “National Nanny“), did the agency finally produce a Policy Statement on Unfairness — which Congress eventually codified by statute.

Given the attention being paid to the FTC’s antitrust authority under Section 5, we thought it would be helpful to offer a brief primer on the topic, highlighting why we share the skepticism expressed by the letter-writing members of Congress (along with many other critics).

The topic has come up, of course, in the context of the FTC’s case against Google. The scuttlebutt is that the Commission believes it may not be able to bring and win a traditional, Section 2 antitrust action, and so may resort to Section 5 to make its case — or simply force a settlement, as the FTC did against Intel in late 2010. While it may be Google’s head on the block today, it could be anyone’s tomorrow. This isn’t remotely just about Google; it’s about broader concerns over the Commission’s use of Section 5 to prosecute monopolization cases without being subject to the rigorous economic standards of traditional antitrust law.

Background on Section 5

Section 5 has two “prongs.” The first, reflected in its prohibition of “unfair or deceptive acts or practices” (UDAP), is meant as a consumer protection statute (and, until recently, was used that way, as explained below). The other, prohibiting “unfair methods of competition” (UMC), has indeed been interpreted to have relevance to competition cases.

Most commonly (and most widely accepted), the UMC language has been viewed as authorizing the agency to bring cases that fill the gaps between clearly anticompetitive conduct and the language of the Sherman Act. Principally, this has been invoked in “invitation to collude” cases, which raise the spectre of price-fixing but nevertheless do not meet the literal prohibition against “agreement in restraint of trade” under Section 1 of the Sherman Act.

Over strenuous objections from dissenting Commissioners (and only in consent decrees; not before courts), the FTC has more recently sought to expand the reach of the UDAP language beyond the consumer protection realm to address antitrust concerns that would likely be non-starters under the Sherman Act.

In N-Data, the Commission brought and settled a case invoking both the UDAP and UMC prongs of Section 5 to reach (alleged) conduct that amounted to breach of a licensing agreement without the requisite (Sherman Act) Section 2 claim of exclusionary conduct (which would have required that the FTC show that N-Data’s conduct had the effect of excluding its rivals without efficiency or welfare-enhancing properties). Although the FTC’s claims fall outside the ambit of Section 2, the Commission’s invocation of Section 5’s UDAP language was so broad that it could — quite improperly — be employed to encompass traditional Section 2 claims nonetheless, but without the rigor Section 2 requires (as the vigorous dissents by Commissioners Kovacic and Majoras discuss). As Commissioner Kovacic wrote in his dissent:

[T]he framework that the [FTC’s] Analysis presents for analyzing the challenged conduct as an unfair act or practice would appear to encompass all behavior that could be called a UMC or a violation of the Sherman or Clayton Acts. The Commission’s discussion of the UAP [sic] liability standard accepts the view that all business enterprises – including large companies – fall within the class of consumers whose injury is a worthy subject of unfairness scrutiny. If UAP coverage extends to the full range of business-to-business transactions, it would seem that the three-factor test prescribed for UAP analysis would capture all actionable conduct within the UMC prohibition and the proscriptions of the Sherman and Clayton Acts. Well-conceived antitrust cases (or UMC cases) typically address instances of substantial actual or likely harm to consumers. The FTC ordinarily would not prosecute behavior whose adverse effects could readily be avoided by the potential victims – either business entities or natural persons. And the balancing of harm against legitimate business justifications would encompass the assessment of procompetitive rationales that is a core element of a rule of reason analysis in cases arising under competition law.

In Intel, the most notorious of the recent FTC Section 5 antitrust actions, the Commission brought (and settled) a straightforward (if unwinnable) Section 2 case as a Section 5 case (with Section 2 “tag along” claims), using the justification that it simply couldn’t win a Section 2 case under current jurisprudence. Intel presumably settled the case because the absence of judicial limits under Section 5 made its outcome far less certain — and presumably the FTC brought the case under Section 5 for the same reason.

In Intel, there was no effort to distinguish Section 5 grounds from those under Section 2. Rather, the FTC claimed that the limiting jurisprudence under Section 2 wasn’t meant to rein in agencies, but merely private plaintiffs. This claim falls flat, as one of us (Geoff) has noted:

[Chairman] Leibowitz’ continued claim that courts have reined in Sherman Act jurisprudence only out of concern with the incentives and procedures of private enforcement, and not out of a concern with a more substantive balancing of error costs—errors from which the FTC is not, unfortunately immune—seems ridiculous to me. To be sure (as I said before), the procedural background matters as do the incentives to bring cases that may prove to be inefficient.

But take, for example, Twombly, mentioned by Leibowitz as one of the cases that has recently reined in Sherman Act enforcement in order to constrain overzealous private enforcement (and thus not in a way that should apply to government enforcement). . . .

But the over-zealousness of private plaintiffs is not all [Twombly] was about, as the Court made clear:

The inadequacy of showing parallel conduct or interdependence, without more, mirrors the ambiguity of the behavior: consistent with conspiracy, but just as much in line with a wide swath of rational and competitive business strategy unilaterally prompted by common perceptions of the market. Accordingly, we have previously hedged against false inferences from identical behavior at a number of points in the trial sequence.

Hence, when allegations of parallel conduct are set out in order to make a §1 claim, they must be placed in a context that raises a suggestion of a preceding agreement, not merely parallel conduct that could just as well be independent action. [Citations omitted].

The Court was appropriately concerned with the ability of decision-makers to separate pro-competitive from anticompetitive conduct. Even when the FTC brings cases, it and the court deciding the case must make these determinations. And, while the FTC may bring fewer strike suits, it isn’t limited to challenging conduct that is simple to identify as anticompetitive. Quite the opposite, in fact—the government has incentives to develop and bring suits proposing novel theories of anticompetitive conduct and of enforcement (as it is doing in the Intel case, for example).

Problems with Unleashing Section 5

It would be a serious problem — as the Members of Congress who’ve written letters seem to realize — if Section 5 were used to sidestep the important jurisprudential limitations on Section 2 by focusing on such unsupported theories as “reduction in consumer choice” instead of Section 2’s well-established consumer welfare standard. As Geoff has noted:

Following Sherman Act jurisprudence, traditionally the FTC has understood (and courts have demanded) that antitrust enforcement . . . requires demonstrable consumer harm to apply. But this latest effort reveals an agency pursuing an interpretation of Section 5 that would give it unprecedented and largely-unchecked authority. In particular, the definition of “unfair” competition wouldn’t be confined to the traditional antitrust measures — reduction in output or an output-reducing increase in price — but could expand to, well, just about whatever the agency deems improper.

* * *

One of the most important shifts in antitrust over the past 30 years has been the move away from indirect and unreliable proxies of consumer harm toward a more direct, effects-based analysis. Like the now archaic focus on market concentration in the structure-conduct-performance framework at the core of “old” merger analysis, the consumer choice framework [proposed by Commissioner Rosch as a cause of action under Section 5] substitutes an indirect and deeply flawed proxy for consumer welfare for assessment of economically relevant economic effects. By focusing on the number of choices, the analysis shifts attention to the wrong question.

The fundamental question from an antitrust perspective is whether consumer choice is a better predictor of consumer outcomes than current tools allow. There doesn’t appear to be anything in economic theory to suggest that it would be. Instead, it reduces competitive analysis to a single attribute of market structure and appears susceptible to interpretations that would sacrifice a meaningful measure of consumer welfare (incorporating assessment of price, quality, variety, innovation and other amenities) on economically unsound grounds. It is also not the law.

Commissioner Kovacic echoed this in his dissent in N-Data:

More generally, it seems that the Commission’s view of unfairness would permit the FTC in the future to plead all of what would have been seen as competition-related infringements as constituting unfair acts or practices.

And the same concerns animate Kovacic’s belief (drawn from an article written with then-Attorney Advisor Mark Winerman) that courts will continue to look with disapproval on efforts by the FTC to expand its powers:

We believe that UMC should be a competition-based concept, in the modern sense of fostering improvements in economic performance rather than equating the health of the competitive process with the wellbeing of individual competitors, per se. It should not, moreover, rely on the assertion in [the Supreme Court’s 1972 Sperry & Hutchinson Trading Stamp case] that the Commission could use its UMC authority to reach practices outside both the letter and spirit of the antitrust laws. We think the early history is now problematic, and we view the relevant language in [Sperry & Hutchinson] with skepticism.

Representatives Eshoo and Lofgren were even more direct in their letter:

Expanding the FTC’s Section 5 powers to include antitrust matters could lead to overbroad authority that amplifies uncertainty and stifles growth. . . . If the FTC intends to litigate under this interpretation of Section 5, we strongly urge the FTC to reconsider.

But it isn’t only commentators and Congressmen who point to this limitation. The FTC Act itself contains such a limitation. Section 5(n) of the Act, the provision added by Congress in 1994 to codify the core principles of the FTC’s 1980 Unfairness Policy Statement, says that:

The Commission shall have no authority under this section or section 57a of this title to declare unlawful an act or practice on the grounds that such act or practice is unfair unless the act or practice causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition. [Emphasis added].

In other words, Congress has already said, quite clearly, that Section 5 isn’t a blank check. Yet Chairman Leibowitz seems to be banking on the dearth of direct judicial precedent saying so to turn it into one — as do those who would cheer on a Section 5 antitrust case (against Google, Intel or anyone else). Given the unique breadth of the FTC’s jurisdiction over the entire economy, the agency would again threaten to become a second national legislature, capable of regulating nearly the entire economy.

The Commission has tried — and failed — to bring such cases before the courts in recent years. But the judiciary has not been receptive to an invigoration of Section 5 for several reasons. Chief among these is that the agency simply hasn’t defined the scope of its power over unfair competition under the Act, and the courts hesitate to let the Commission set the limits of its own authority. As Kovacic and Winerman have noted:

The first [reason for judicial reluctance in Section 5 cases] is judicial concern about the apparent absence of limiting principles. The tendency of the courts has been to endorse limiting principles that bear a strong resemblance to standards familiar to them from Sherman Act and Clayton Act cases. The cost-benefit concepts devised in rule of reason cases supply the courts with natural default rules in the absence of something better.

The Commission has done relatively little to inform judicial thinking, as the agency has not issued guidelines or policy statements that spell out its own view about the appropriate analytical framework. This inactivity contrasts with the FTC’s efforts to use policy statements to set boundaries for the application of its consumer protection powers under Section 5.

This concern was stressed in the letter sent by Senator DeMint and other Republican Senators to Chairman Leibowitz:

[W]e are concerned about the apparent eagerness of the Commission under your leadership to expand Section 5 actions without a clear indication of authority or a limiting principle. When a federal regulatory agency uses creative theories to expand its activities, entrepreneurs may be deterred from innovating and growing lest they be targeted by government action.

As we have explained many times (see, e.g., here, here and here), a Section 2 case against Google will be an uphill battle. As far as we have seen publicly, complainants have offered only harm to competitors — not harm to consumers — to justify such a case. It is little surprise, then, that the agency (or, more accurately, Chairman Leibowitz and Commissioner Rosch) may be seeking to use the less-limited power of Section 5 to mount such a case.

In a blog post in 2011, Geoff wrote:

Commissioner Rosch has claimed that Section Five could address conduct that has the effect of “reducing consumer choice” — an effect that a very few commentators support without requiring any evidence that the conduct actually reduces consumer welfare. Troublingly, “reducing consumer choice” seems to be a euphemism for “harm to competitors, not competition,” where the reduction in choice is the reduction of choice of competitors who may be put out of business by competitive behavior.

The U.S. has a long tradition of resisting enforcement based on harm to competitors without requiring a commensurate, strong showing of harm to consumers — an economically-sensible tradition aimed squarely at minimizing the likelihood of erroneous enforcement. The FTC’s invigorated interest in Section Five contemplates just such wrong-headed enforcement, however, to the inevitable detriment of the very consumers the agency is tasked with protecting.

In fact, the theoretical case against Google depends entirely on the ways it may have harmed certain competitors rather than on any evidence of actual harm to consumers (and in the face of ample evidence of significant consumer benefits).

* * *

In each of [the complaints against Google], the problem is that the claimed harm to competitors does not demonstrably translate into harm to consumers.

For example, Google’s integration of maps into its search results unquestionably offers users an extremely helpful presentation of these results, particularly for users of mobile phones. That this integration might be harmful to MapQuest’s bottom line is not surprising — but nor is it a cause for concern if the harm flows from a strong consumer preference for Google’s improved, innovative product. The same is true of the other claims. . . .

To the extent that the FTC brings an antitrust case against Google under Section 5, using the Act to skirt the jurisprudential limitations (and associated economic rigor) that make a Section 2 case unwinnable, it would be contravening congressional intent, judicial precedent, the plain language of the FTC Act, and the collected wisdom of the antitrust commentariat that sees such an action as inappropriate. This includes not just traditional antitrust-skeptics like us, but even antitrust-enthusiasts like Allen Grunes, who has written:

The FTC, of course, has Section 5 authority. But there is well-developed case law on monopolization under Section 2 of the Sherman Act. There are no doctrinal “gaps” that need to be filled. For that reason it would be inappropriate, in my view, to use Section 5 as a crutch if the evidence is insufficient to support a case under Section 2.

As Geoff has said:

Modern antitrust analysis, both in scholarship and in the courts, quite properly rejects the reductive and unsupported sort of theories that would undergird a Section 5 case against Google. That the FTC might have a better chance of winning a Section 5 case, unmoored from the economically sound limitations of Section 2 jurisprudence, is no reason for it to pursue such a case. Quite the opposite: When consumer welfare is disregarded for the sake of the agency’s power, it ceases to further its mandate. . . . But economic substance, not self-aggrandizement by rhetoric, should guide the agency. Competition and consumers are dramatically ill-served by the latter.

Conclusion: What To Do About Unfairness?

So, what should the FTC do with Section 5? The right answer may be “nothing” (and probably is, in our opinion). But even those who think something should be done to apply the Act more broadly to allegedly anticompetitive conduct should be able to agree that the FTC ought not bring a case under Section 5’s UDAP language without first defining with analytical rigor what its limiting principles are.

Rather than attempting to do this in the course of a single litigation, the agency ought to heed Kovacic and Winerman’s advice and do more to “inform judicial thinking” such as by “issu[ing] guidelines or policy statements that spell out its own view about the appropriate analytical framework.” The best way to start that process would be for whoever succeeds Leibowitz as chairman to convene a workshop on the topic. (As one of us (Berin) has previously suggested, the FTC is long overdue on issuing guidelines to explain how it has applied its Unfairness and Deception Policy Statements in UDAP consumer protection cases. Such a workshop would dovetail nicely with this.)

The question posed should not presume that Section 5’s UDAP language ought to be used to reach conduct actionable under the antitrust statutes at all. Rather, the fundamental question to be asked is whether the use of Section 5 in antitrust cases is a relic of a bygone era before antitrust law was given analytical rigor by economics. If the FTC cannot rigorously define an interpretation of Section 5 that will actually serve consumer welfare — which the Supreme Court has defined as the proper aim of antitrust law — Congress should explicitly circumscribe it once and for all, limiting Section 5 to protecting consumers against unfair and deceptive acts and practices and, narrowly, prohibiting unfair competition in the form of invitations to collude. The FTC (along with the DOJ and the states) would still regulate competition through the existing antitrust laws. This might be the best outcome of all.


In the past weeks, the chatter surrounding a possible FTC antitrust case against Google has risen in volume, thanks largely to the FTC’s hiring of litigator Beth Wilkinson.  The question remains, however, what this aggressive move portends and, more importantly, why the FTC is taking it.

It is worth noting at the outset that, as far as I know, Wilkinson has no antitrust experience; she is a litigator.  Now, there’s nothing wrong with an agency enlisting a hired gun to help litigate its cases, but when the hired gun is not hired for her substantive expertise but rather her ability to persuade, it perhaps suggests something about the strength of the agency’s case.

It’s reading tea leaves (a time-honored, if flawed, DC practice), but Wilkinson’s hiring suggests to me that the FTC views its case as one that will require some serious rhetorical handling in order to win.  While on its Sherman Act Section 2 merits that would be true anyway, it also suggests to me that the FTC intends to use the case as an opportunity to push – and seek court approval for – the ambitious plans of some of the Commissioners to expand the agency’s powers under Section 5 of the FTC Act.  This would be a costly mistake for consumers.

Last year, in an interview with Global Competition Review, FTC Chairman Leibowitz was asked whether the agency was “investigating the online search market.”  He declined to answer directly but instead offered this suggestive comment:

What I can say is that one of the commission’s priorities is to find a pure Section Five case under unfair methods of competition.  Everyone acknowledges that Congress gave us much more jurisdiction than just antitrust.  And I go back to this because at some point if and when, say, a large technology company acknowledges an investigation by the FTC, we can use both our unfair or deceptive acts or practice authority and our unfair methods of competition authority to investigate the same or similar unfair competitive behavior . . . .

Commissioner Rosch has likewise suggested that Section 5 could and should be expanded, precisely to reach activity that would be unreachable under current Section 2 standards.  The effort to expand the FTC’s antitrust enforcement under Section 5, and to write out the jurisprudential standards of Section 2, is a troubling one.

Following Sherman Act jurisprudence, traditionally the FTC has understood (and courts have demanded) that antitrust enforcement under Section 5 (as a technical matter, the FTC does not directly enforce Section 2 of the Sherman Act but instead enforces the Act via its Section 5 authority) requires demonstrable consumer harm to apply.  But this latest effort reveals an agency pursuing an interpretation of Section 5 that would give it unprecedented and largely-unchecked authority.  In particular, the definition of “unfair” competition wouldn’t be confined to the traditional antitrust measures—reduction in output or an output-reducing increase in price—but could expand to, well, just about whatever the agency deems improper.

Most problematically, Commissioner Rosch has suggested that Section Five could address conduct that has the effect of “reducing consumer choice” without requiring any evidence that the conduct actually reduces consumer welfare—a theory supported by only a vanishingly small number of commentators (essentially one law professor and one FTC lawyer have written the entire body of scholarship on the topic).  Troublingly, “reducing consumer choice” seems to be a euphemism for “harm to competitors, not competition,” where the reduction in choice is simply the disappearance of competitors who may be put out of business by a rival’s conduct.

Under Section 2 standards, the FTC would have a tough time winning its case.  This is because the agency doesn’t seem to have a theory of harm that reaches consumers—and none of Google’s competitors that have been stoking the flames has offered one.  Instead, all of the propounded theories turn on harm to competitors.  But the U.S. has a long tradition of resisting enforcement based on harm to competitors without a showing of harm to consumers.  If all that were required were harm to competitors, then all pro-competitive conduct would be actionable under the antitrust laws; for what is the aim and effect of competition if not the besting of one’s competitors?  The competitive process is by definition one that can “reduce consumer choice.”  This is why the great economist Joseph Schumpeter famously called the competitive process one of “creative destruction.”

In fact, the theoretical case against Google depends entirely on the ways it may have harmed certain competitors rather than on any evidence of harm to consumer welfare.  For example, Google’s implementation and placement within its organic search results of its own shopping results is alleged to make it difficult for competing product-specific search sites (like Nextag or Amazon, for example) to reach Google’s users.  Leaving aside the weakness of the factual allegation (I challenge you to perform a search for a product on Google that doesn’t offer up a mix of retailers, manufacturers, review sites and multiple product search engine results on the first page), it is hard to see how consumers are harmed here.

On the one hand, users have easy access to competing sites directly from their browser’s address bar and, increasingly importantly, to more persuasive product reviews from friends and colleagues via social media.  In this way even the basic factual predicate is faulty, and it’s not even clear that consumer choice itself is reduced if Nextag is absent from Google searches, as the site can be reached by, among other things, links from reviews, links from friends on social media, other general search engines, and every browser address bar.

On the other hand, users are by no means foreclosed from access to actual products (and there is no evidence that I know of that consumer prices or supply are in any way affected) if any particular product search engine doesn’t appear in the top results.  Placement of Google’s own product search results in fact streamlines consumers’ access, and Google’s comprehensive and effective search engine ensures that its shopping results are probably better than anyone else’s anyway.  The same is true for travel searches, maps, and the range of other complained-of results.  Flight information and reservations, location information and maps are widely available online and off through myriad sources other than Google.

The bottom line is that harm to competitors is at least as consistent with pro-competitive as with anti-competitive conduct, and simply counting the number of firms offering competing choices to consumers that happen to appear in the top few Google search results is no way to infer actual consumer harm.

One of the most important shifts in antitrust over the past 30 years has been the move away from indirect and unreliable proxies of consumer harm toward a more direct, effects-based analysis.  Like the now-archaic focus on market concentration in the structure-conduct-performance framework at the core of “old” merger analysis, the consumer choice framework substitutes an indirect and deeply flawed proxy for consumer welfare in place of an assessment of economically relevant effects.  By focusing on the number of choices, the analysis shifts attention to the wrong question.

The fundamental question from an antitrust perspective is whether a consumer-choice standard is a better predictor of consumer outcomes than the tools of current analysis.  There doesn’t appear to be anything in economic theory to suggest that it would be.  Instead, the standard reduces competitive analysis to a single attribute of market structure and appears susceptible to interpretations that would sacrifice a meaningful measure of consumer welfare (incorporating assessment of price, quality, variety, innovation and other amenities) on economically unsound grounds.  It is also not the law.

Commissioner Rosch has suggested that the Supreme Court in its 2007 Leegin decision provided a green light for consumer-choice-reducing antitrust theories without a showing of traditional (output-reducing) harm.  But as Josh pointed out, the Ninth Circuit (in last year’s Brantley v. NBC Universal decision, which Thom has also blogged about here and here) read Leegin to hold precisely the opposite; coupled with the Court’s 2006 Independent Ink decision, it seems clearly to restrict, rather than authorize, a consumer choice claim:

The Supreme Court has noted that both [reduced choice and increased prices] are “fully consistent with a free, competitive market,” [citing Independent Ink] and are therefore insufficient to establish an injury to competition. Thus even vertical agreements that prohibit retail price reductions and result in higher consumer prices . . . are not unlawful absent a further showing of anticompetitive conduct [citing Leegin].

Modern antitrust analysis, both in scholarship and in the courts, quite properly rejects the reductive and unsupported sort of theories that would undergird a Section 5 case against Google.  That the FTC might have a better chance of winning a Section 5 case, unmoored from the economically sound limitations of Section 2 jurisprudence, is no reason for it to pursue such a case.  Quite the opposite:  When consumer welfare is disregarded for the sake of the agency’s power, it ceases to further its mandate.  No doubt Beth Wilkinson could help make the rhetorical argument for a Section 5 case against Google based on a tenuous consumer choice theory.  But economic substance, not self-aggrandizement by rhetoric, should guide the agency.  Competition and consumers are dramatically ill-served by the latter.

Full disclosure: I worked briefly with Beth Wilkinson at Latham and Watkins.  Further full disclosure: The International Center for Law and Economics, of which I am the Executive Director, has received support to make research grants from Google, among many other companies and individuals.

[Cross-posted at Forbes]

The “consumer choice” approach to antitrust is increasingly discussed in a variety of settings and endorsed by regulators and in scholarship, especially but not exclusively in the Section 5 context.  The fundamental idea is that the “conventional” efficiency approach embedded in the total and/or consumer welfare standards is too cramped and does not measure the “right” things.  The consumer choice standard focuses instead on the options available to consumers and is proposed as an alternative to efficiency-based standards.  Preliminarily, I do not think the approach as I understand it is an improvement over modern antitrust methods, nor do I think that its adoption would be a good development for the coherence of antitrust jurisprudence or for consumers.

Averitt & Lande describe the consumer choice antitrust standard as follows:

It suggests that the role of antitrust should be broadly conceived to protect all the types of options that are significantly important to consumers. An antitrust violation can, therefore, be understood as an activity that unreasonably restricts the totality of price and nonprice choices that would otherwise have been available.

The “consumer choice” framework tells us, Averitt & Lande assert, that from an antitrust perspective “more consumer choice is probably good.”  The central idea is that the efficiency perspective is hampered by “only” looking at things like prices and output (including quality-adjusted prices), and occasionally innovation.  The fundamental observation of the “consumer choice” framework is that a reduction of “choice” (however defined, but let’s come back to that), even if coupled with a reduction in price or an increase in output, is a cognizable antitrust injury.

The approach is getting some traction.

For example, in a speech, Commissioner Rosch asserts that the appropriate antitrust standard “is to look at consumer welfare from the buyers’ perspective, or what Robert Lande has termed a ‘consumer choice’ perspective, which occurs when a firm’s conduct impairs the choices that free competition brings to the marketplace.”  Indeed, Commissioner Rosch apparently argues that the consumer choice standard not only should be the law, but that it is the law after Leegin, asserting that after the Supreme Court’s decision “injury to consumer choice (as well as an increase in price) is now recognized as injury to consumer welfare in the United States.”  This, I think, is a controversial and questionable interpretation of Leegin.  But holding that aside for the moment, I want to focus on some skeptical observations concerning the utility of such a framework for antitrust analysis and, more importantly, for consumers.


How should an economist interpret the fact that Microsoft appears to be “behind” recent enforcement actions against Google in the United States and, especially, in Europe?

“With skepticism!” is the answer I suspect many readers will offer at first glance.  There is a long public choice literature, and a long history in antitrust itself, suggesting that one should be wary of private enforcement of the antitrust laws against rivals, both in the form of litigation and in attempts to delegate the enforcement effort (and costs) to the government.

In a recent post, economist (and blogger) Joshua Gans suggests that this conventional economic wisdom is wrong.  Gans discusses the Microsoft-Google wars and claims that Microsoft’s involvement in the recent actions against Google in Europe, Texas, and elsewhere is a feature of an antitrust policy that is working, rather than a bug of an antitrust system that, on the margin, funnels competitive activity away from dimensions that benefit consumers (i.e., competition on the merits) and toward rent-seeking.

Gans characterizes Microsoft’s recent reported involvement in the antitrust activity launched against Google in Europe, and now Texas, as the result of some sort of epiphany at the company:

I think the narrative that is appropriate is that the antitrust action against Microsoft, while it didn’t end up breaking it up, actually worked. Microsoft has largely behaved itself since. It no longer aggressively bundles or bullies OEMs into exclusives. What is more, in its more competitive segments, it is actually a strong consumer performer. Think about video games, for one. And it is improving in its traditional monopoly areas too where it is forced to compete on products rather than with heavy handed contracting.

Let’s hold aside, for a moment, the issue of whether Microsoft is as much of an antitrust enforcement success as Gans’ characterization suggests.  There remains significant debate on this issue, but I don’t want to re-hash it here, and I don’t need to for the purposes of this post.  There is also a lot of normative judgment in Gans’ post; whether Microsoft “no longer aggressively bundles or bullies,” for example, could be a good or a bad thing from a consumer welfare perspective, depending on whether the bundling or exclusive dealing with OEMs or IAPs provided consumer benefits.  It is at least worth noting the possibility that Microsoft has been chilled from plausibly pro-competitive conduct.  But that is not the point.

The real question is what, if anything, Microsoft’s involvement tells us about how an economist should think about modern antitrust enforcement actions against Google?

Here is Gans’ answer:

Google’s new narrative in these actions is that this is a dynamic industry and they face lots of competition and potential competition — just look to Microsoft’s example! But if the correct story is that Microsoft faced real competition only because antitrust action tied its hands on anticompetitive acts, then Google’s line is incorrect and, what is worse, may lead to bad policy outcomes. Think to Google’s recent acquisitions in search in Japan that took it from a 70:30 duopoly to monopoly. This is not what we want.

In this regard, I can think of no better advocate for this narrative than Microsoft. Who better to tell the world that antitrust policy in high tech environments actually works. Yes, they are interested but it is not an argument against antitrust action to simply point to Microsoft involvement.

This answer did not move me from my initial skepticism.  At least, not in the direction of less skepticism.  There are some odd assumptions being made here.  First, I’m tempted to ask how we know that a 70:30 market structure is “not what we want.”  Second, who is this “we” anyway?  Perhaps it is consumers.  It’s unclear.  But it sounds essentially like a classic structure-conduct-performance argument.  The problems with such arguments in high-tech markets with rapid technological change (and even in more stable “brick and mortar” settings) are well known.  Is there any evidence that Google’s recent transactions in Japan generated consumer harms?  Did they generate benefits?  If they didn’t harm consumers — what is the problem?

From an economic perspective, to assume that Microsoft’s underlying conduct was clearly anticompetitive, that the costs of enforcement are less than the benefits created for consumers, and that anything around a 70:30 market-share structure in search harms competition and reduces consumer welfare is to assume away all of the interesting economic questions in order to reach a policy conclusion: Microsoft’s involvement tells us to favor antitrust enforcement against Google — or, at the very least, is neutral.
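
To see what the structural intuition actually amounts to, consider the standard Herfindahl-Hirschman Index arithmetic for the shares Gans cites; the calculation is purely illustrative and appears nowhere in his post:

$$\mathrm{HHI}_{70:30} = 70^2 + 30^2 = 5{,}800 \qquad\longrightarrow\qquad \mathrm{HHI}_{\text{monopoly}} = 100^2 = 10{,}000$$

The concentration index jumps sharply, but that is the entirety of the structure-conduct-performance inference: without evidence on prices, output, or quality, a higher concentration number tells us nothing about whether consumers were harmed or benefited.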

But what about that policy argument?  It is the bolded sentence that got my attention.  Gans claims that Microsoft is the best advocate for antitrust enforcement in high-tech sectors because it knows that enforcement “actually works.”  Somewhat more provocatively, Gans claims that the fact that Microsoft is self-interested is not an argument against antitrust action.  Au contraire.

That investment in private antitrust enforcement against one’s rivals is, ceteris paribus, a negative signal about the economic merits of an antitrust action is indeed an argument.  And it’s a good one.  And one that has been around a long time.

Posner (Antitrust Law, 2d at 281) writes about the influence of rivals on state enforcement:

I would like to see the states, which have been growing increasingly active in antitrust enforcement since the 1980s, stripped of their authority to bring antitrust suits, federal or state, except under circumstances in which a private firm would be able to sue … .  States are unwilling to devote the resources necessary to do more than free ride on federal antitrust litigation, complicating its resolution.  In addition, they are excessively influenced by interest groups that may represent a potential antitrust defendant’s competitors.  This is a particular concern when the defendant is located in one state and one of its competitors is located in another and that competitor, who is pressing his state’s attorney general to bring suit, is a major political force in that state.

Posner is not alone here in expressing concerns about the influence of rival firms on state and federal enforcement, as well as the use of private enforcement and the threat of treble damages to subvert competition.   Indeed, a classic in the antitrust economics literature is Baumol & Ordover, Use of Antitrust to Subvert Competition, in which the authors argue that courts should presumptively deny standing to competitors seeking to block mergers.  The idea that rivals can use the government agencies to do things to hinder rather than help competition is not new, and has deep roots in the public choice literature.

The argument appears in the Gavil, Kovacic and Baker Antitrust Law textbook (page 1088):

Despite their potential benefits, private enforcement schemes (including private antitrust enforcement) can have adverse consequences.  Private enforcement can generate questionable claims, and can enable firms to use the courts to impede efficient behavior by their rivals.  Although private enforcement reduces the need to enlarge public enforcement bodies, private suits can consume substantial social resources in the form of costs incurred to prosecute and defend such cases.  Perhaps recognizing these adverse possibilities, courts have established limits on the ability of private plaintiffs to obtain relief under the Clayton Act.

Of course, those limits apply to litigation in court.  No such limits apply when the rival knocks on the door at the FTC or DOJ or State AG’s office.

Fred McChesney writes, citing the Baumol & Ordover analysis and Salop & White (1986) on private antitrust litigation, that:

One of the most worrisome statistics in antitrust is that for every case brought by government, private plaintiffs bring ten. The majority of cases are filed to hinder, not help, competition. According to Steven Salop, formerly an antitrust official in the Carter administration, and Lawrence J. White, an economist at New York University, most private antitrust actions are filed by members of one of two groups. The most numerous private actions are brought by parties who are in a vertical arrangement with the defendant (e.g., dealers or franchisees) and who therefore are unlikely to have suffered from any truly anticompetitive offense. Usually, such cases are attempts to convert simple contract disputes (compensable by ordinary damages) into triple-damage payoffs under the Clayton Act.

The second most frequent private case is that brought by competitors. Because competitors are hurt only when a rival is acting procompetitively by increasing its sales and decreasing its price, the desire to hobble the defendant’s efficient practices must motivate at least some antitrust suits by competitors. Thus, case statistics suggest that the anticompetitive costs from “abuse of antitrust,” as New York University economists William Baumol and Janusz Ordover (1985) referred to it, may actually exceed any procompetitive benefits of antitrust laws.

Separately, McChesney provides an example:

Consider a case like that against Salton, Inc., for resale price maintenance of its George Foreman grills, provisionally settled in September 2002. The case is one in which the federal antitrust authorities would have no interest. Resale price maintenance is now understood to be an intrabrand practice that enhances interbrand competition. Economists almost unanimously applaud resale price maintenance as a way to enhance distributor efforts to market the product vis-à-vis competing brands in ways that almost never have any anticompetitive aspects. Resale price maintenance simply has no place in the modern, economics-based enforcement agenda.

However, resale price maintenance cases like that against Salton are a natural for the state attorneys general. First, anomalously, resale price maintenance remains per se illegal under the Sherman Act and thus is illegal under states’ antitrust acts. Therefore, victory is automatic — and cheap. All that need be shown is a contract to set resale prices, or something that a jury might so construe as such a contract.
Victory is even easier when the states sue for hundreds of millions of dollars (as in the Salton case) and then offer a settlement for cents on the dollar ($8 million in the Salton case).  No company, particularly one with public shareholders, could refuse an offer to settle for so little. To do so would invite a shareholder suit. Salton’s George Foreman grill is one of the great success stories in kitchen appliance sales. With unit sales in the millions, its high profile is guaranteed by George Foreman’s name and ability to promote it. Hanging the scalp of a brand-name retailer and a phenomenally successful product on an attorney general’s wall was not likely to discourage the two lead attorneys general in the Salton case, New York’s Eliot Spitzer and Illinois’s James Ryan. The former has shown himself not averse to publicity; the latter was running for governor at the time the suit’s settlement was announced.

The suit certainly was valuable to the attorneys general. But what was in it for consumers, the supposed beneficiaries of antitrust? Nothing, apparently. Not only is resale price maintenance generally a beneficial practice socially, but the settlement amount was laughable in terms of redressing any supposed consumer injury. The settlement amounted to just pennies per grill sold. The attorneys general did not even try to get the money to the actual sufferers of any higher prices. Instead — attorneys general are politicians and 2002 was an election year — the money was destined elsewhere, as the attorneys general announced:

“In view of the difficulty in identifying the millions of purchasers of the Salton grills covered by the settlement and relatively small alleged overcharge per grill purchased, the states propose to use the $8 million settlement in the following manner: Each state shall direct that its share of the $8 million be distributed to the state, its political subdivisions, municipalities, not-for-profit corporations, and/or charitable organizations for health or nutrition-related causes. In this manner, the purchasers covered by the lawsuits (persons who bought Salton George Foreman Grills) will benefit from the settlement.”

This statement is commendably candid. Not only will supposedly wronged consumers not get any money, but the supposed overcharge was “relatively small” to begin with. If the overcharge was “relatively small,” Salton could not have had much market power. Thus, the case flunks one of the principal filter tests that Judge Easterbrook rightly would impose to evaluate the worth of a standard antitrust case.

Judge Easterbrook himself, as McChesney notes, was one of the earliest to note the potential for consumer welfare-reducing abuse of the antitrust laws by rivals, arguing in The Limits of Antitrust:

Antitrust litigation is attractive as a method of raising rivals’ costs because of the asymmetrical structure of incentives. The plaintiff’s costs of litigation will be smaller than the defendant’s. The plaintiff need only file the complaint and serve demands for discovery. If the plaintiff wins, the defendant will bear these legal costs. The defendant, on the other hand, faces treble damages and injunction, as well as its own (and even its rival’s) costs of litigation. The principal burden of discovery falls on the defendant. The defendant is apt to be larger, with more files to search, and to have control of more pertinent documents than the plaintiff. … The books are full of suits by rivals for the purpose, or with the effect, of reducing competition and increasing price.

Of course, these points apply just as well (and sometimes doubly) to the actions of rivals that do not even require them to go to court, and instead knock on the door of the government enforcement agency.  Easterbrook proposed significant restrictions on such suits.
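
Easterbrook’s asymmetry can be put in a stylized expected-value form; the notation below is a hypothetical illustration of his point, not anything drawn from the excerpt. Let $c_p$ and $c_d$ be the plaintiff’s and defendant’s litigation costs, $D$ the claimed damages, and $q$ the probability the plaintiff prevails. With treble damages and the fee shifting Easterbrook describes (“the defendant will bear these legal costs”):

$$\text{plaintiff's expected payoff} \;\approx\; q\,(3D + c_p) - c_p, \qquad \text{defendant's expected exposure} \;\approx\; q\,(3D + c_p) + c_d$$

Because the discovery burden falls principally on the defendant, $c_d$ typically dwarfs $c_p$ and is incurred win or lose, so a strategically motivated suit can impose large expected costs on the target even when $q$ is small.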

The idea that Microsoft is an especially qualified party to “tell the world that antitrust policy in high tech environments actually works” is dubious even holding aside the debate over whether one can identify palpable consumer benefits from the enforcement action and demonstrate that they outweigh the costs.  Given the long history of abusing the private antitrust action to impose costs on rivals engaged in efficient business practices — a history central to any account of modern antitrust — and the longstanding concern about this problem in the economics literature, the argument that the identity of the plaintiff or interloper is irrelevant to the economic merits of the underlying claim in the Microsoft-Google context seems especially wrongheaded.  If anything, the proliferation of national antitrust laws and the availability of EU enforcement make the problems emphasized in that literature more important, not less.