
More than a century of bad news

Bill Gates recently tweeted the image below, commenting that he is “always amazed by the disconnect between what we see in the news and the reality of the world around us.”

https://pbs.twimg.com/media/D8zWfENUYAAvK5I.png

Of course, this chart and Gates’s observation are nothing new – there has long been an accuracy gap between what the news covers (and therefore what Americans believe is important) and what is actually important. As discussed in one academic article on the subject:

The line between journalism and entertainment is dissolving even within traditional news formats. [One] NBC executive decreed that every news story should “display the attributes of fiction, of drama. It should have structure and conflict, problem and denouement, rising action and falling action, a beginning, a middle and an end.” … This has happened both in broadcast and print journalism. … Roger Ailes … explains this phenomenon with an Orchestra Pit Theory: “If you have two guys on a stage and one guy says, ‘I have a solution to the Middle East problem,’ and the other guy falls in the orchestra pit, who do you think is going to be on the evening news?”

Matters of policy get increasingly short shrift. In 1968, the network newscasts generally showed presidential candidates speaking, and on the average a candidate was shown speaking uninterrupted for forty-two seconds. Over the next twenty years, these sound bites had shrunk to an average of less than ten seconds. This phenomenon is by no means unique to broadcast journalism; there has been a parallel decline in substance in print journalism as well. …

The fusing of news and entertainment is not accidental. “I make no bones about it—we have to be entertaining because we compete with entertainment options as well as other news stories,” says the general manager of a Florida TV station that is famous, or infamous, for boosting the ratings of local newscasts through a relentless focus on stories involving crime and calamity, all of which are presented in a hyperdramatic tone (the so-called “If It Bleeds, It Leads” format). There was a time when news programs were content to compete with other news programs, and networks did not expect news divisions to be profit centers, but those days are over.

That excerpt feels like it could have been written today. It was not: it was published in 1996. The “if it bleeds, it leads” trope is often attributed to a 1989 New York magazine article – and once introduced into the popular vernacular it quickly grew in popularity.

Of course, the idea that the media sensationalizes its reporting is not a novel observation. “If it bleeds, it leads” is just the late-20th century term for what had been “sex sells” – and the idea of yellow journalism before then. And, of course, “if it bleeds” is the precursor to our more modern equivalent of “clickbait.”

The debate about how to save the press from Google and Facebook … is the wrong debate to have

We are in the midst of a debate about how to save the press in the digital age. The House Judiciary Committee recently held a hearing on the relationship between online platforms and the press; and the Australian Competition & Consumer Commission recently released a preliminary report on the same topic.

In general, these discussions focus on concerns that advertising dollars have shifted from analog-era media in the 20th century to digital platforms in the 21st century – leaving the traditional media underfunded and unable to do its job. More specifically, competition authorities are being urged (by the press) to look at this through the lens of antitrust, arguing that Google and Facebook are the dominant two digital advertising platforms and have used their market power to harm the traditional media.

I have previously explained that this is bunk; as has John Yun, critiquing current proposals. I won’t rehash those arguments here, beyond noting that traditional media’s revenues have been falling since the advent of the Internet – not since the advent of Google or Facebook. The problem that the traditional media face is not that monopoly platforms are engaging in conduct that is harmful to them – it is that the Internet is better as both an advertising platform and an information-distribution platform, such that both advertisers and information consumers have migrated to digital platforms (and away from traditional news media).

This is not to say that digital platforms are capable of, or well-suited to, the production and distribution of the high-quality news and information content that we have historically relied on the traditional media to produce. Yet, contemporary discussions about whether traditional news media can survive in an era where ad revenue accrues primarily to large digital platforms have been surprisingly quiet on the question of the quality of content produced by the traditional media.

Actually, that’s not quite true. First, as indicated by the chart tweeted by Gates, digital platforms may be providing consumers with information that is more relevant to them.

Second, and more important, media advocates argue that without the ad revenue that has been diverted (by advertisers, not by digital platforms) to firms like Google and Facebook they lack the resources to produce high quality content. But that assumes that they would produce high quality content if they had access to those resources. As Gates’s chart – and the last century of news production – demonstrates, that is an ill-supported claim. History suggests that, left to its own devices and not constrained for resources by competition from digital platforms, the traditional media produces significant amounts of clickbait.

It’s all about the Benjamins

Among critics of the digital platforms, there is a line of argument that the advertising-based business model is the original sin of the digital economy. The ad-based business model corrupts digital platforms and turns them against their users – the user, that is, becomes the product in the surveillance capitalism state. We would all be much better off, the argument goes, if the platforms operated under subscription- or micropayment-based business models.

It is noteworthy that press advocates eschew this line of argument. Their beef with the platforms is that they have “stolen” the ad revenue that rightfully belongs to the traditional media. The ad revenue, of course, that is the driver behind clickbait, “if it bleeds it leads,” “sex sells,” and yellow journalism. The original sin of advertising-based business models is not original to digital platforms – theirs is just an evolution of the model perfected by the traditional media.

I am a believer in the importance of the press – and, for that matter, in the efficacy of ad-based business models. But more than a hundred years of experience makes clear that mixing the two into the hybrid bastard that is infotainment should prompt concern and discussion about the business model of the traditional press (and, indeed, for most of the past 30 years or so it has done so).

When it comes to “saving the press” the discussion ought not be about how to restore traditional media to its pre-Facebook glory days of the early aughts, or even its pre-modern-Internet golden age of the late 1980s. By that point, the media was well along the slippery slope to where it is today. We desperately need a strong, competitive market for news and information. We should use the crisis that market currently faces to discuss solutions for the future, not how to preserve the past.

Thomas Wollmann has a new paper — “Stealth Consolidation: Evidence from an Amendment to the Hart-Scott-Rodino Act” — in American Economic Review: Insights this month. Greg Ip included this research in an article for the WSJ in which he claims that “competition has declined and corporate concentration risen through acquisitions often too small to draw the scrutiny of antitrust watchdogs.” In other words, “stealth consolidation”.

Wollmann’s study uses a difference-in-differences approach to examine the effect on merger activity of the 2001 amendment to the Hart-Scott-Rodino (HSR) Antitrust Improvements Act of 1976 (15 U.S.C. 18a). The amendment abruptly increased the pre-merger notification threshold from $15 million to $50 million in deal size. Strictly on those terms, the paper shows that raising the pre-merger notification threshold increased merger activity.

However, claims about “stealth consolidation” are controversial because they connote nefarious intentions and anticompetitive effects. As Wollmann admits in the paper, due to data limitations, he is unable to show that the new mergers are in fact anticompetitive or that the social costs of these mergers exceed the social benefits. Therefore, more research is needed to determine the optimal threshold for pre-merger notification rules, and claiming that harmful “stealth consolidation” is occurring is currently unwarranted.

Background: The “Unscrambling the Egg” Problem

In general, it is more difficult to unwind a consummated anticompetitive merger than it is to block a prospective anticompetitive merger. As Wollmann notes, for example, “El Paso Natural Gas Co. acquired its only potential rival in a market” and “the government’s challenge lasted 17 years and involved seven trips to the Supreme Court.”

Rolling back an anticompetitive merger is so difficult that it came to be known as “unscrambling the egg.” As William J. Baer, a former director of the Bureau of Competition at the FTC, described it, “there were strong incentives for speedily and surreptitiously consummating suspect mergers and then protracting the ensuing litigation” prior to the implementation of a pre-merger notification rule. These so-called “midnight mergers” were intended to avoid drawing antitrust scrutiny.

In response to this problem, Congress passed the Hart–Scott–Rodino Antitrust Improvements Act of 1976, which required companies to notify antitrust authorities of impending mergers if they exceeded certain size thresholds.

2001 Hart–Scott–Rodino Amendment

In 2001, Congress amended the HSR Act and effectively raised the threshold for premerger notification from $15 million in acquired firm assets to $50 million. This sudden and dramatic change created an opportunity to use a difference-in-differences technique to study the relationship between filing an HSR notification and merger activity.
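The logic of the difference-in-differences design can be illustrated with a toy 2x2 calculation. Here, newly-exempt mergers ($15M–$50M) are the treated group, never-exempt mergers (>$50M) are the control, and the 2001 amendment is the treatment. All counts below are hypothetical, for illustration only – they are not drawn from Wollmann’s data.

```python
# Toy difference-in-differences estimate of the 2001 HSR amendment's
# effect on annual merger counts. The numbers are made up for
# illustration; they are NOT from Wollmann's paper.

def did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Classic 2x2 DiD: change in the treated group minus change
    in the control group, which nets out common time trends."""
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# Hypothetical average annual merger counts, before and after 2001:
newly_exempt_pre, newly_exempt_post = 400, 520   # $15M-$50M deals (treated)
never_exempt_pre, never_exempt_post = 300, 330   # >$50M deals (control)

effect = did_estimate(newly_exempt_pre, newly_exempt_post,
                      never_exempt_pre, never_exempt_post)
print(effect)  # 90: extra newly-exempt mergers per year attributed to the amendment
```

The key identifying assumption is that, absent the amendment, merger activity in both groups would have followed parallel trends; the control group’s change (here, +30) is then subtracted off as the common trend.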

According to Wollmann, here’s what notifications look like for never-exempt mergers (>$50M):

And here’s what notifications for newly-exempt ($15M < X < $50M) mergers look like:

So what does that mean for merger investigations? Here is the number of investigations into never-exempt mergers:

We see a pretty consistent relationship between number of mergers and number of investigations. More mergers means more investigations.  

How about for newly-exempt mergers?

Here, investigations go to zero while merger activity remains relatively stable. In other words, it appears that some mergers that would have been investigated had they required an HSR notification were not investigated.

Wollmann then uses four-digit SIC code industries to sort mergers into horizontal and non-horizontal categories. Here are never-exempt mergers:

He finds that almost all of the increase in merger activity (relative to the counterfactual in which the notification threshold were unchanged) is driven by horizontal mergers. And here are newly-exempt mergers:

Policy Implications & Limitations

The charts show a stark change in investigations and merger activity. The difference-in-differences methodology is solid and the author addresses some potential confounding variables (such as presidential elections). However, the paper leaves the broader implications for public policy unanswered.

Furthermore, given the limits of the data in this analysis, it’s not possible for this approach to explain competitive effects in the relevant antitrust markets, for three reasons:

Four-digit SIC code industries are not antitrust markets

Wollmann chose to classify mergers “as horizontal or non-horizontal based on whether or not the target and acquirer operate in the same four-digit SIC code industry, which is common convention.” But as Werden & Froeb (2018) note, four-digit SIC code industries are orders of magnitude too large in most cases to be useful for antitrust analysis:

The evidence from cartel cases focused on indictments from 1970–80. Because the Justice Department prosecuted many local cartels, for 52 of the 80 indictments examined, the Commerce Quotient was less than 0.01, i.e., the SIC 4-digit industry was at least 100 times the apparent scope of the affected market. Of the 80 indictments, 19 involved SIC 4-digit industries that had been thought to comport well with markets, so these were the most instructive. For 16 of the 19, the SIC 4-digit industry was at least 10 times the apparent scope of the affected market (i.e., the Commerce Quotient was less than 0.1).

Antitrust authorities do not rely on SIC 4-digit industry codes and instead establish a market definition based on the facts of each case. It is not possible to infer competitive effects from census data as Wollmann attempts to do.

The data cannot distinguish between anticompetitive mergers and procompetitive mergers

As Wollmann himself notes, the results tell us nothing about the relative costs and benefits of the new HSR policy:

Even so, these findings do not on their own advocate for one policy over another. To do so requires equating industry consolidation to a specific amount of economic harm and then comparing the resulting figure to the benefits derived from raising thresholds, which could be large. Even if the agencies ignore the reduced regulatory burden on firms, introducing exemptions can free up agency resources to pursue other cases (or reduce public spending). These and related issues require careful consideration but simply fall outside the scope of the present work.

For instance, firms could be reallocating merger activity to targets below the new threshold to avoid erroneous enforcement or they could be increasing merger activity for small targets due to reduced regulatory costs and uncertainty.

The study is likely underpowered for effects on blocked mergers

While the paper provides convincing evidence that investigations of newly-exempt mergers decreased dramatically following the change in the notification threshold, there is no equally convincing evidence of an effect on blocked mergers. As Wollmann points out, blocked mergers were exceedingly rare both before and after the Amendment (emphasis added):

Over 57,000 mergers comprise the sample, which spans eighteen years. The mean number of mergers each year is 3,180. The DOJ and FTC receive 31,464 notifications over this period, or 1,748 per year. Also, as stated above, blocked mergers are very infrequent: there are on average 13 per year pre-Amendment and 9 per-year post-Amendment.

Since blocked mergers are such a small percentage of total mergers both before and after the Amendment, we likely cannot tell from the data whether actual enforcement action changed significantly due to the change in notification threshold.

Greg Ip’s write-up for the WSJ includes some relevant charts for this issue. Ironically for a piece about the problems of lax merger review, the accompanying graphs show merger enforcement actions slightly increasing at both the FTC and the DOJ since 2001:

Source: WSJ

Overall, Wollmann’s paper does an effective job showing how changes in premerger notification rules can affect merger activity. However, due to data limitations, we cannot conclude anything about competitive effects or enforcement intensity from this study.

In an amicus brief filed last Friday, a diverse group of antitrust scholars joined the Washington Legal Foundation in urging the U.S. Court of Appeals for the Second Circuit to vacate the Federal Trade Commission’s misguided 1-800 Contacts decision. Reasoning that 1-800’s settlements of trademark disputes were “inherently suspect,” the FTC condemned the settlements under a cursory “quick look” analysis. In so doing, it improperly expanded the category of inherently suspect behavior and ignored an obvious procompetitive justification for the challenged settlements.  If allowed to stand, the Commission’s decision will impair intellectual property protections that foster innovation.

A number of 1-800’s rivals purchased online ad placements that would appear when customers searched for “1-800 Contacts.” 1-800 sued those rivals for trademark infringement, and the lawsuits settled. As part of each settlement, 1-800 and its rival agreed not to bid on each other’s trademarked terms in search-based keyword advertising. (For example, EZ Contacts could not bid on a placement tied to a search for 1-800 Contacts, and vice-versa). Each party also agreed to employ “negative keywords” to ensure that its ads would not appear in response to a consumer’s online search for the other party’s trademarks. (For example, in bidding on keywords, 1-800 would have to specify that its ad must not appear in response to a search for EZ Contacts, and vice-versa). Notably, the settlement agreements didn’t restrict the parties’ advertisements through other media such as TV, radio, print, or other forms of online advertising. Nor did they restrict paid search advertising in response to any search terms other than the parties’ trademarks.

The FTC concluded that these settlement agreements violated the antitrust laws as unreasonable restraints of trade. Although the agreements were not unreasonable per se, as naked price-fixing is, the Commission didn’t engage in the normally applicable rule of reason analysis to determine whether the settlements passed muster. Instead, the Commission condemned the settlements under the truncated analysis that applies when, in the words of the Supreme Court, “an observer with even a rudimentary understanding of economics could conclude that the arrangements in question would have an anticompetitive effect on customers and markets.” The Commission decided that no more than a quick look was required because the settlements “restrict the ability of lower cost online sellers to show their ads to consumers.”

That was a mistake. First, the restraints in 1-800’s settlements are far less extensive than other restraints that the Supreme Court has said may not be condemned under a cursory quick look analysis. In California Dental, for example, the Supreme Court reversed a Ninth Circuit decision that employed the quick look analysis to condemn a de facto ban on all price and “comfort” advertising by members of a dental association. In light of the possibility that the ban could reduce misleading ads, enhance customer trust, and thereby stimulate demand, the Court held that the restraint must be assessed under the more probing rule of reason. A narrow limit on the placement of search ads is far less restrictive than the all-out ban for which the California Dental Court prescribed full-on rule of reason review.

1-800’s settlements are also less likely to be anticompetitive than are other settlements that the Supreme Court has said must be evaluated under the rule of reason. The Court’s Actavis decision rejected quick look and mandated full rule of reason analysis for reverse payment settlements of pharmaceutical patent litigation. In a reverse payment settlement, the patent holder pays an alleged infringer to stay out of the market for some length of time. 1-800’s settlements, by contrast, did not exclude its rivals from the market, place any restrictions on the content of their advertising, or restrict the placement of their ads except on webpages responding to searches for 1-800’s own trademarks. If the restraints in California Dental and Actavis required rule of reason analysis, then those in 1-800’s settlements surely must as well.

In addition to disregarding Supreme Court precedents that limit when mere quick look is appropriate, the FTC gave short shrift to a key procompetitive benefit of the restrictions in 1-800’s settlements. 1-800 spent millions of dollars convincing people that they could save money by ordering prescribed contact lenses from a third party rather than buying them from prescribing optometrists. It essentially built the online contact lens market in which its rivals now compete. In the process, it created a strong trademark, which undoubtedly boosts its own sales. (Trademarks point buyers to a particular seller and enhance consumer confidence in the seller’s offering, since consumers know that branded sellers will not want to tarnish their brands with shoddy products or service.)

When a rival buys ad space tied to a search for 1-800 Contacts, that rival is taking a free ride on 1-800’s investments in its own brand and in the online contact lens market itself. A rival that has advertised less extensively than 1-800—primarily because 1-800 has taken the lead in convincing consumers to buy their contact lenses online—will incur lower marketing costs than 1-800 and may therefore be able to underprice it.  1-800 may thus find that it loses sales to rivals who are not more efficient than it is but have lower costs because they have relied on 1-800’s own efforts.

If market pioneers like 1-800 cannot stop this sort of free-riding, they will have less incentive to make the investments that create new markets and develop strong trade names. The restrictions in the 1-800 settlements were simply an effort to prevent inefficient free-riding while otherwise preserving the parties’ freedom to advertise. They were a narrowly tailored solution to a problem that hurt 1-800 and reduced incentives for future investments in market-developing activities that inure to the benefit of consumers.

Rule of reason analysis would have allowed the FTC to assess the full market effects of 1-800’s settlements. The Commission’s truncated assessment, which was inconsistent with Supreme Court decisions on when a quick look will suffice, condemned conduct that was likely procompetitive. The Second Circuit should vacate the FTC’s order.

The full amicus brief, primarily drafted by WLF’s Corbin Barthold and joined by Richard Epstein, Keith Hylton, Geoff Manne, Hal Singer, and me, is here.

This guest post is by Corbin K. Barthold, Litigation Counsel at Washington Legal Foundation.

Complexity need not follow size. A star is huge but mostly homogenous. “Its core is so hot,” explains Martin Rees, “that no chemicals can exist (complex molecules get torn apart); it is basically an amorphous gas of atomic nuclei and electrons.”

Nor does complexity always arise from remoteness of space or time. Celestial gyrations can be readily grasped. Thales of Miletus probably predicted a solar eclipse. Newton certainly could have done so. And we’re confident that in 4.5 billion years the Andromeda galaxy will collide with our own.

If the simple can be seen in the large and the distant, equally can the complex be found in the small and the immediate. A double pendulum is chaotic. Likewise the local weather, the fluctuations of a wildlife population, or the dispersion of the milk you pour into your coffee.

Our economy is not like a planetary orbit. It’s more like the weather or the milk. No one knows which companies will become dominant, which products will become popular, or which industries will become defunct. No one can see far ahead. Investing is inherently risky because the future of the economy, or even a single segment of it, is intractably uncertain. Do not hand your savings to any expert who says otherwise. Experts, in fact, often see the least of all.

But if a broker with a “sure thing” stock is a mountebank, what does that make an antitrust scholar with an “optimum structure” for a market? 

Not a prophet.

There is so much that we don’t know. Consider, for example, the notion that market concentration is a good measure of market competitiveness. The idea seems intuitive enough, and in many corners it remains an article of faith.

But the markets where this assumption is most plausible—hospital care and air travel come to mind—are heavily shaped by that grand monopolist we call government. Only a large institution can cope with the regulatory burden placed on the healthcare industry. As Tyler Cowen writes, “We get the level of hospital concentration that we have in essence chosen through politics and the law.”

As for air travel: the government promotes concentration by barring foreign airlines from the domestic market. In any case, the state of air travel does not support a straightforward conclusion that concentration equals power. The price of flying has fallen almost continuously since passage of the Airline Deregulation Act in 1978. The major airlines are disciplined by fringe carriers such as JetBlue and Southwest.

It is by no means clear that, aside from cases of government-imposed concentration, a consolidated market is something to fear. Technology lowers costs, lower costs enable scale, and scale tends to promote efficiency. Scale can arise naturally, therefore, from the process of creating better and cheaper products.

Say you’re a nineteenth-century cow farmer, and the railroad reaches you. Your shipping costs go down, and you start to sell to a wider market. As your farm grows, you start to spread your capital expenses over more sales. Your prices drop. Then refrigerated rail cars come along, you start slaughtering your cows on site, and your shipping costs go down again. Your prices drop further. Farms that fail to keep pace with your cost-cutting go bust. The cycle continues until beef is cheap and yours is one of the few cow farms in the area. The market improves as it consolidates.

As the decades pass, this story repeats itself on successively larger stages. The relentless march of technology has enabled the best companies to compete for regional, then national, and now global market share. We should not be surprised to see ever fewer firms offering ever better products and services.

Bear in mind, moreover, that it’s rarely the same company driving each leap forward. As Geoffrey Manne and Alec Stapp recently noted in this space, markets are not linear. Just after you adopt the next big advance in the logistics of beef production, drone delivery will disrupt your delivery network, cultured meat will displace your product, or virtual-reality flavoring will destroy your industry. Or—most likely of all—you’ll be ambushed by something you can’t imagine.

Does market concentration inhibit innovation? It’s possible. “To this day,” write Joshua Wright and Judge Douglas Ginsburg, “the complex relationship between static product market competition and the incentive to innovate is not well understood.” 

There’s that word again: complex. When will thumping company A in an antitrust lawsuit increase the net amount of innovation coming from companies A, B, C, and D? Antitrust officials have no clue. They’re as benighted as anyone. These are the people who will squash Blockbuster’s bid to purchase a rival video-rental shop less than two years before Netflix launches a streaming service.

And it’s not as if our most innovative companies are using market concentration as an excuse to relax. If its only concern were maintaining Google’s grip on the market for internet-search advertising, Alphabet would not have spent $16 billion on research and development last year. It spent that much because its long-term survival depends on building the next big market—the one that does not exist yet.

No expert can reliably make the predictions necessary to say when or how a market should look different. And if we empowered some experts to make such predictions anyway, no other experts would be any good at predicting what the empowered experts would predict. Experts trying to give us “well structured” markets will instead give us a costly, politicized, and stochastic antitrust enforcement process. 

Here’s a modest proposal. Instead of using the antitrust laws to address the curse of bigness, let’s create the Office of the Double Pendulum. We can place the whole section in a single room at the Justice Department. 

All we’ll need is some ping-pong balls, a double pendulum, and a monkey. On each ball will be the name of a major corporation. Once a quarter—or a month; reasonable minds can differ—a ball will be drawn, and the monkey prodded into throwing the pendulum. An even number of twirls saves the company on the ball. An odd number dooms it to being broken up.

This system will punish success just as haphazardly as anything our brightest neo-Brandeisian scholars can devise, while avoiding the ruinously expensive lobbying, rent-seeking, and litigation that arise when scholars succeed in replacing the rule of law with the rule of experts.

All hail the chaos monkey. Unutterably complex. Ineffably simple.

It might surprise some readers to learn that we think the Court’s decision today in Apple v. Pepper reaches — superficially — the correct result. But, we hasten to add, the Court’s reasoning (and, for that matter, the dissent’s) is completely wrongheaded. It would be an understatement to say that the Court reached the right result for the wrong reason; in fact, the Court’s analysis wasn’t even in the same universe as the correct reasoning.

Below we lay out our assessment, in a post drawn from an article forthcoming in the Nebraska Law Review.

Did the Court forget that, just last year, it decided Amex, the most significant U.S. antitrust case in ages?

What is most remarkable about the decision (and the dissent) is that neither mentions Ohio v. Amex, nor even the two-sided market context in which the transactions at issue take place.

If the decision in Apple v. Pepper hewed to the precedent established by Ohio v. Amex it would start with the observation that the relevant market analysis for the provision of app services is an integrated one, in which the overall effect of Apple’s conduct on both app users and app developers must be evaluated. A crucial implication of the Amex decision is that participants on both sides of a transactional platform are part of the same relevant market, and the terms of their relationship to the platform are inextricably intertwined.

Under this conception of the market, it’s difficult to maintain that either side does not have standing to sue the platform for the terms of its overall pricing structure, whether the specific terms at issue apply directly to that side or not. Both end users and app developers are “direct” purchasers from Apple — of different products, but in a single, inextricably interrelated market. Both groups should have standing.

More controversially, the logic of Amex also dictates that both groups should be able to establish antitrust injury — harm to competition — by showing harm to either group, as long as it establishes the requisite interrelatedness of the two sides of the market.

We believe that the Court was correct to decide in Amex that effects falling on the “other” side of a tightly integrated, two-sided market from challenged conduct must be addressed by the plaintiff in making its prima facie case. But that outcome entails a market definition that places both sides of such a market in the same relevant market for antitrust analysis.

As a result, the Court’s holding in Amex should also have required a finding in Apple v. Pepper that an app user on one side of the platform who transacts with an app developer on the other side of the market, in a transaction made possible and directly intermediated by Apple’s App Store, should similarly be deemed in the same market for standing purposes.

Relative to a strict construction of the traditional baseline, requiring plaintiffs to address effects on both sides of the market entails imposing an additional burden on two-sided market plaintiffs, while extending standing to participants on both sides entails a lessening of that burden. Whether the net effect is more or fewer successful cases in two-sided markets is unclear, of course. But from the perspective of aligning evidentiary and substantive doctrine with economic reality such an approach would be a clear improvement.

Critics accuse the Court of making antitrust cases unwinnable against two-sided market platforms thanks to Amex’s requirement that a prima facie showing of anticompetitive effect must assess the effects on both sides of a two-sided market and prove a net anticompetitive outcome. A proper decision in Apple v. Pepper should have chastened those critics. As it is, the holding (although not the reasoning) may still serve to allay their fears.

But critics should have recognized that a necessary corollary of Amex’s “expanded” market definition is that, relative to previous standing doctrine, a greater number of prospective parties should have standing to sue.

More important, the Court in Apple v. Pepper should have recognized this. Although nominally limited to the indirect purchaser doctrine, the case presented the Court with an opportunity to grapple with this logical implication of its Amex decision. It failed to do so.

On the merits, it looks like Apple should win. But, for much the same reason, the Respondents in Apple v. Pepper should have standing

This does not, of course, mean that either party should win on the merits. Indeed, on the merits of the case, the Petitioner in Apple v. Pepper appears to have the stronger argument, particularly in light of Amex which (assuming the App Store is construed as some species of a two-sided “transaction” market) directs that Respondent has the burden of considering harms and efficiencies across both sides of the market.

At least on the basis of the limited facts as presented in the case thus far, Respondents have not remotely met their burden of proving anticompetitive effects in the relevant market.

The actual question presented in Apple v. Pepper concerns standing, not whether the plaintiffs have made out a viable case on the merits. Thus it may seem premature to consider aspects of the latter in addressing the former. But the structure of the market considered by the court should be consistent throughout its analysis.

Adjustments to standing in the context of two-sided markets must be made in concert with the nature of the substantive rule of reason analysis that will be performed in a case. The two doctrines are connected not only by the demands of consistency, but by the error-cost framework of the overall analysis, which runs through every stage of an antitrust case.

Here, the two-sided markets approach in Amex properly understands that conduct by a platform has relevant effects on both sides of its interrelated two-sided market. But that stems from the actual economics of the platform; it is not merely a function of a judicial construct. It thus holds true at all stages of the analysis.

The implication for standing is that users on both sides of a two-sided platform may suffer similarly direct (or indirect) injury as a result of the platform’s conduct, regardless of the side to which that conduct is nominally addressed.

The consequence, then, of Amex’s understanding of the market is that more potential plaintiffs — specifically, plaintiffs on both sides of a two-sided market — may claim to suffer antitrust injury.

Why the myopic focus of the holding (and dissent) on Illinois Brick is improper: It’s about the market definition, stupid!

Moreover, because of the Amex understanding, the problem of analyzing the pass-through of damages at issue in Illinois Brick (with which the Court entirely occupies itself in Apple v. Pepper) is either mitigated or inevitable.

In other words, either the users on the different sides of a two-sided market suffer direct injury without pass-through under a proper definition of the relevant market, or else their interrelatedness is so strong that, complicated as it may be, the needs of substantive accuracy trump the administrative costs in sorting out the incidence of the costs, and courts cannot avoid them.

Illinois Brick’s indirect purchaser doctrine was designed for an environment in which the relationship between producers and consumers is mediated by a distributor in a direct, linear supply chain; it was not designed for platforms. Although the question presented in Apple v. Pepper is explicitly about whether the Illinois Brick “indirect purchaser” doctrine applies to the Apple App Store, that determination is contingent on the underlying product market definition (whether the product market is in fact well-specified by the parties and the court or not).

Particularly where intermediaries exist precisely to address transaction costs between “producers” and “consumers,” the platform services they provide may be central to the underlying claim in a way that the traditional direct/indirect filters — and their implied relevant markets — miss.

Further, the Illinois Brick doctrine was itself based not on the substantive necessity of cutting off liability evaluations at a particular level of distribution, but on administrability concerns. In particular, the Court was concerned with preventing duplicative recovery when there were many potential groups of plaintiffs, as well as preventing injustices that would occur if unknown groups of plaintiffs inadvertently failed to have their rights adequately adjudicated in absentia. It was also concerned with avoiding needlessly complicated damages calculations.

But, almost by definition, the tightly coupled nature of the two sides of a two-sided platform should mitigate the concerns about duplicative recovery and unknown parties. Moreover, much of the presumed complexity of damages calculations in a platform setting arises from the nature of the platform itself. Assessing and apportioning damages may be complicated, but such is the nature of complex commercial relationships — the same would be true, for example, of damages calculations between vertically integrated companies that transact simultaneously at multiple levels, or between cross-licensing patent holders and implementers. In fact, if anything, the judicial efficiency concerns in Illinois Brick point toward the increased importance of properly assessing the nature of the platform’s product or service in order to ensure that it accurately encompasses the entire relevant transaction.

Put differently, under a proper, more-accurate market definition, the “direct” and “indirect” labels don’t necessarily reflect either business or antitrust realities.

Where the Court in Apple v. Pepper really misses the boat is in its overly formalistic claim that the business model (and thus the product) underlying the complained-of conduct doesn’t matter:

[W]e fail to see why the form of the upstream arrangement between the manufacturer or supplier and the retailer should determine whether a monopolistic retailer can be sued by a downstream consumer who has purchased a good or service directly from the retailer and has paid a higher-than-competitive price because of the retailer’s unlawful monopolistic conduct.

But Amex held virtually the opposite:

Because “[l]egal presumptions that rest on formalistic distinctions rather than actual market realities are generally disfavored in antitrust law,” courts usually cannot properly apply the rule of reason without an accurate definition of the relevant market.

* * *

Price increases on one side of the platform likewise do not suggest anticompetitive effects without some evidence that they have increased the overall cost of the platform’s services. Thus, courts must include both sides of the platform—merchants and cardholders—when defining the credit-card market.

In the face of novel business conduct, novel business models, and novel economic circumstances, the degree of substantive certainty may be eroded, as may the reasonableness of the expectation that typical evidentiary burdens accurately reflect competitive harm. Modern technology — and particularly the platform business model endemic to many modern technology firms — presents a need for courts to adjust their doctrines in the face of such novel issues, even if doing so adds additional complexity to the analysis.

The unlearned market-definition lesson of the Eighth Circuit’s Campos v. Ticketmaster dissent

The Eighth Circuit’s Campos v. Ticketmaster case demonstrates the way market definition shapes the application of the indirect purchaser doctrine. Indeed, the dissent in that case looms large in the Ninth Circuit’s decision in Apple v. Pepper. [Full disclosure: One of us (Geoff) worked on the dissent in Campos v. Ticketmaster as a clerk to Eighth Circuit judge Morris S. Arnold.]

In Ticketmaster, the plaintiffs alleged that Ticketmaster abused its monopoly in ticket distribution services to force supracompetitive charges on concert venues — a practice that led to anticompetitive prices for concert tickets. Although not prosecuted as a two-sided market case, the business model is strikingly similar to the App Store model, with Ticketmaster charging fees to venues and then facilitating ticket purchases between venues and concert-goers.

As the dissent noted, however:

The monopoly product at issue in this case is ticket distribution services, not tickets.

Ticketmaster supplies the product directly to concert-goers; it does not supply it first to venue operators who in turn supply it to concert-goers. It is immaterial that Ticketmaster would not be supplying the service but for its antecedent agreement with the venues.

But it is quite relevant that the antecedent agreement was not one in which the venues bought some product from Ticketmaster in order to resell it to concert-goers.

More important, and more telling, is the fact that the entirety of the monopoly overcharge, if any, is borne by concert-goers.

In contrast to the situations described in Illinois Brick and the literature that the court cites, the venues do not pay the alleged monopoly overcharge — in fact, they receive a portion of that overcharge from Ticketmaster. (Emphasis added).

Thus, if there was a monopoly overcharge, it was borne entirely by concert-goers. As a result, apportionment — the complexity of which gave rise to the standard in Illinois Brick — was not a significant issue. And the antecedent transaction that allegedly put concert-goers in an indirect relationship with Ticketmaster was one in which Ticketmaster and the concert venues divvied up the alleged monopoly spoils, not one in which the venues absorbed their share of a monopoly overcharge.

The analogy to Apple v. Pepper is nearly perfect. Apple sits between developers on one side and consumers on the other, charges a fee to developers for app distribution services, and facilitates app sales between developers and users. It is possible to try to twist the market definition exercise to construe the separate contracts between developers and Apple on one hand, and developers and consumers on the other, as some sort of complicated version of the classical manufacturing and distribution chain. But the better course is to inquire into the relevant factual differences that underpin Apple’s business model and to adapt how courts approach market definition for two-sided platforms.

Indeed, Hanover Shoe and Illinois Brick were born out of a particular business reality in which businesses structured themselves in what are now classical production and distribution chains. The Supreme Court adopted the indirect purchaser rule as a prudential limitation on antitrust law in order to optimize the judicial oversight of such cases. It seems strangely nostalgic to reflexively try to fit new business methods into old legal analyses, when prudence and reality dictate otherwise.

The dissent in Ticketmaster was ahead of its time insofar as it recognized that the majority’s formal description of the ticket market was an artifact of viewing what was actually something much more like a ticket-services platform operated by Ticketmaster through the poor lens of the categories established decades earlier.

The Ticketmaster dissent’s observations demonstrate that market definition and antitrust standing are interrelated. It makes no sense to adhere to a restrictive reading of the latter if it connotes an economically improper understanding of the former. Ticketmaster provided an intermediary service — perhaps not quite a two-sided market, but something close — that stands outside a traditional manufacturing supply chain. Had it been offered by the venues themselves and bundled into the price of concert tickets there would be no question of injury and of standing (nor would market definition matter much, as both tickets and distribution services would be offered as a joint product by the same parties, in fixed proportions).

What antitrust standing doctrine should look like after Amex

There are some clear implications for antitrust doctrine that (should) follow from the preceding discussion.

At the pleading stage, a plaintiff may allege that a defendant operates either as a two-sided market or as part of a more traditional, linear chain. If the plaintiff alleges a two-sided market, then, to demonstrate standing, it need only show that injury occurred to some subset of platform users with which the plaintiff is inextricably interrelated. The plaintiff would not need to demonstrate injury to itself, nor allege net harm, nor show directness.

In response, a defendant can contest standing by challenging the claimed interrelatedness between the plaintiff and the relevant group of platform users. But if the defendant does not challenge the allegation that it operates a two-sided market, it could not contest standing by showing indirectness, by arguing that the plaintiff has not alleged personal injury, or by arguing that the plaintiff has not alleged a net harm.

Once past a determination of standing, however, a plaintiff who pleads a two-sided market would not be able to later withdraw this allegation in order to lessen the attendant legal burdens.

If the court accepts that the defendant is operating a two-sided market, both parties would be required to frame their allegations and defenses in accordance with the nature of the two-sided market and thus the holding in Amex. This is critical because, whereas alleging a two-sided market may make it easier for plaintiffs to demonstrate standing, Amex’s requirement that net harm be demonstrated across interrelated sets of users makes it more difficult for plaintiffs to present a viable prima facie case. Further, defendants would not be barred from presenting efficiencies defenses based on benefits that interrelated users enjoy.

Conclusion: The Court in Apple v. Pepper should have acknowledged the implications of its holding in Amex

After Amex, claims against two-sided platforms might require more evidence to establish anticompetitive harm, but that business model also opens firms up to a larger pool of potential plaintiffs. The legal principles still apply, but the relative importance of those principles to judicial outcomes shifts (or should shift) in line with the unique economic position of potential plaintiffs and defendants in a platform environment.

Whether a priori the net result is more or fewer cases and more or fewer victories for plaintiffs is not the issue; what matters is matching the legal and economic theory to the relevant facts in play. Moreover, decrying Amex as the end of antitrust was premature: the actual effect on injured parties can’t be known until other changes (like standing for a greater number of plaintiffs) are factored into the analysis. The Court’s holding in Apple v. Pepper sidesteps this issue entirely, and thus fails to properly move antitrust doctrine forward in line with its holding in Amex.

Of course, it’s entirely possible that platforms and courts might be inundated with expensive and difficult-to-manage lawsuits. There may be reasons of administrability for limiting standing (as Illinois Brick perhaps prematurely did for fear of the costs of courts’ managing suits). But then that should have been the focus of the Court’s decision.

Allowing standing in Apple v. Pepper permits exactly the kind of legal experimentation needed to enable the evolution of antitrust doctrine along with new business realities. But in some ways the Court reached the worst possible outcome. It announced a rule that permits more plaintiffs to establish standing, but it did not direct lower courts to assess standing within the proper analytical frame. Instead, it just expands standing in a manner unmoored from the economic — and, indeed, judicial — context. That’s not a recipe for the successful evolution of antitrust doctrine.

In 2014, Benedict Evans, a venture capitalist at Andreessen Horowitz, wrote “Why Amazon Has No Profits (And Why It Works),” a blog post in which he tried to explain Amazon’s business model. He began with a chart of Amazon’s revenue and net income that has now become (in)famous:

Source: Benedict Evans

A question inevitably followed in antitrust circles: How can a company that makes so little profit on so much revenue be worth so much money? It must be predatory pricing!

Predatory pricing is a rather rare anticompetitive practice because the “predator” runs the risk of bankrupting itself in the process of trying to drive rivals out of business with below-cost pricing. Furthermore, even if a predator successfully clears the field of competition, keeping out new entrants is extremely unlikely in developed economies with deep capital markets.

Nonetheless, in those rare cases where plaintiffs can demonstrate that a firm actually has a viable scheme to drive competitors from the market with prices that are “too low” and has the ability to recoup its losses once it has cleared the market of those competitors, plaintiffs (including the DOJ) can prevail in court.

In other words, whoa if true.

Khan’s Predatory Pricing Accusation

In 2017, Lina Khan, then a law student at Yale, published “Amazon’s Antitrust Paradox,” a note in the Yale Law Journal, and used Evans’ chart as supporting evidence that Amazon was guilty of predatory pricing. In the abstract, she says, “Although Amazon has clocked staggering growth, it generates meager profits, choosing to price below-cost and expand widely instead.”

But if Amazon is selling below cost, where does the money come from to finance those losses?

In her article, Khan hinted at two potential explanations: (1) Amazon is using profits from the cloud computing division (AWS) to cross-subsidize losses in the retail division or (2) Amazon is using money from investors to subsidize short-term losses:

Recently, Amazon has started reporting consistent profits, largely due to the success of Amazon Web Services, its cloud computing business. Its North America retail business runs on much thinner margins, and its international retail business still runs at a loss. But for the vast majority of its twenty years in business, losses—not profits—were the norm. Through 2013, Amazon had generated a positive net income in just over half of its financial reporting quarters. Even in quarters in which it did enter the black, its margins were razor-thin, despite astounding growth.

Just as striking as Amazon’s lack of interest in generating profit has been investors’ willingness to back the company. With the exception of a few quarters in 2014, Amazon’s shareholders have poured money in despite the company’s penchant for losses.

Revising predatory pricing doctrine to reflect the economics of platform markets, where firms can sink money for years given unlimited investor backing, would require abandoning the recoupment requirement in cases of below-cost pricing by dominant platforms.

Below-Cost Pricing Not Subsidized by Investors

But neither explanation withstands scrutiny. First, the money is not from investors. Amazon has not raised equity financing since 2003. Nor is it debt financing: The company’s net debt position has been near-zero or negative for its entire history (excluding the Whole Foods acquisition):

Source: Benedict Evans

Amazon does not require new outside financing because it has had positive operating cash flow since 2002.

Notably for a piece of analysis attempting to explain Amazon’s business practices, the text of Khan’s 93-page law review article does not include the word “cash” even once.

Below-Cost Pricing Not Cross-Subsidized by AWS

Source: The Information

As Priya Anand observed in a recent piece for The Information, since Amazon started breaking out AWS in its financials, operating income for the North America retail business has been significantly positive:

But [Khan] underplays its retail profits in the U.S., where the antitrust debate is focused. As the above chart shows, its North America operation has been profitable for years, and its operating income has been on the rise in recent quarters. While its North America retail operation has thinner margins than AWS, it still generated $2.84 billion in operating income last year, which isn’t exactly a rounding error compared to its $4.33 billion in AWS operating income.

Below-Cost Pricing in Retail Also Known as “Loss Leader” Pricing

Okay, so maybe Amazon isn’t using below-cost pricing in aggregate in its retail division. But it still could be using profits from some retail products to cross-subsidize below-cost pricing for other retail products (e.g., diapers), with the intention of driving competitors out of business to capture monopoly profits. This is essentially what Khan claims happened in the Diapers.com (Quidsi) case. But in the retail industry, diapers are explicitly cited as a loss leader that helps retailers develop a customer relationship with mothers in the hopes of selling them a higher volume of products over time. This is exactly what the founders of Diapers.com told Inc. magazine in a 2012 interview (emphasis added):

We saw brick-and-mortar stores, the Wal-Marts and Targets of the world, using these products to build relationships with mom and the end consumer, bringing them into the store and selling them everything else. So we thought that was an interesting model and maybe we could replicate that online. And so we started with selling the loss leader product to basically build a relationship with mom. And once they had the passion for the brand and they were shopping with us on a weekly or a monthly basis that they’d start to fall in love with that brand. We were losing money on every box of diapers that we sold. We weren’t able to buy direct from the manufacturers.

An anticompetitive scheme could be built into such bundling, but in many, if not most, of these cases consumers are the beneficiaries of the lower prices and expanded output these arrangements produce. It’s hard to say definitively whether any given firm that discounts its products is actually pricing below average variable cost (“AVC”) without far more granular accounting ledgers than are typically maintained. This is part of the reason these cases can be so hard to prove.

A successful predatory pricing strategy also requires blocking market entry when the predator eventually raises prices. But the Diapers.com case is an explicit example of repeated entry that would defeat recoupment. In an article for the American Enterprise Institute, Jeffrey Eisenach shares the rest of the story following Amazon’s acquisition of Diapers.com:

Amazon’s conduct did not result in a diaper-retailing monopoly. Far from it. According to Khan, Amazon had about 43 percent of online sales in 2016 — compared with Walmart at 23 percent and Target with 18 percent — and since many people still buy diapers at the grocery store, real shares are far lower.

In the end, Quidsi proved to be a bad investment for Amazon: After spending $545 million to buy the firm and operating it as a stand-alone business for more than six years, it announced in April 2017 it was shutting down all of Quidsi’s operations, Diapers.com included. In the meantime, Quidsi’s founders poured the proceeds of the Amazon sale into a new online retailer — Jet.com — which was purchased by Walmart in 2016 for $3.3 billion. Jet.com cofounder Marc Lore now runs Walmart’s e-commerce operations and has said publicly that his goal is to surpass Amazon as the top online retailer.

Sussman’s Predatory Pricing Accusation

Earlier this year, Shaoul Sussman, a law student at Fordham University, published “Prime Predator: Amazon and the Rationale of Below Average Variable Cost Pricing Strategies Among Negative-Cash Flow Firms” in the Journal of Antitrust Enforcement. The article, which was written up by David Dayen for In These Times, presents a novel two-part argument for how Amazon might be profitably engaging in predatory pricing without raising prices:

1. Amazon’s “True” Cash Flow Is Negative

Sussman argues that the company has been inflating its free cash flow numbers by excluding “capital leases.” According to Sussman, “If all of those expenses as detailed in its statements are accounted for, Amazon experienced a negative cash outflow of $1.461 billion in 2017.” Even though it is not dispositive of predatory pricing on its own, Sussman believes that negative free cash flow implies the company has been selling below cost to gain market share.

2. Amazon Recoups Losses By Lowering AVC, Not By Raising Prices

Instead of raising prices to recoup losses from pricing below-cost, Sussman argues that Amazon flies under the antitrust radar by keeping consumer prices low and progressively decreasing AVC, ostensibly through using its monopsony power to offload costs on suppliers and partners (although this point is not fully explored in his piece).

But Sussman’s argument contains errors in both its legal reasoning and its underlying empirical assumptions.

Below-cost pricing?

While there are many different ways to calculate the “cost” of a product or service, generally speaking, “below-cost pricing” means a price less than marginal cost or AVC. Courts typically rely on AVC in predatory pricing cases. And as Herbert Hovenkamp has noted, proving that a price falls below AVC is exceedingly difficult, particularly when dealing with firms in dynamic markets that sell a number of differentiated but complementary goods or services. Amazon, the focus of Sussman’s article, is a useful example here.

When products are complements, or can otherwise be bundled, firms may also be able to offer discounts that would be unprofitable when selling single items. In business this is known as the “razor and blades model” (i.e., sell the razor handle below cost once and recoup the losses on future sales of blades — although it’s not clear this ever actually happens). Printer manufacturers are another oft-cited example: printers are often sold below AVC in the expectation that the profits will be realized on the ongoing sale of ink. Amazon’s Kindle functions similarly: Amazon sells the Kindle at around its AVC, ostensibly on the belief that it will realize a profit on e-book sales in the Kindle store.
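A stylized sketch of this bundling arithmetic (all numbers are hypothetical, not Amazon’s actual figures) shows why a single-product below-AVC test can mislead: a device priced below its AVC can still be profitable per customer once expected complement sales are counted.

```python
# Hypothetical razor-and-blades arithmetic (all numbers invented for
# illustration): a device sold below its average variable cost (AVC)
# can still be profitable per customer once sales of the complementary
# good are included.

device_price = 80.0        # price charged per device
device_avc = 100.0         # average variable cost of producing a device
content_margin = 3.0       # contribution margin per complement sold (e.g., per e-book)
expected_complements = 12  # expected complement purchases per device owner

loss_per_device = device_price - device_avc                # "below cost" viewed alone
complement_profit = content_margin * expected_complements  # recouped on complements
net_per_customer = loss_per_device + complement_profit     # positive overall

print(loss_per_device, complement_profit, net_per_customer)
```

Viewed in isolation, the device sells at a loss; viewed over the whole customer relationship, the pricing is profitable without any subsequent price increase.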

Yet, even ignoring this common and broadly inoffensive practice, Sussman’s argument is odd. In essence, he claims that Amazon is concealing some of its costs in the form of capital leases in an effort to conceal its below-AVC pricing while it works simultaneously to lower its real AVC below the prices it charges consumers. At the end of this process, once its real AVC is sufficiently below consumer prices, it will (so the argument goes) be in the position of a monopolist reaping monopoly profits.

The problem with this argument should be immediately apparent. For the moment, let’s ignore the classic recoupment problem, in which new entrants will be drawn into the market to capture some of those monopoly profits based on the new AVC that is possible. The real problem with this logic is that Sussman essentially suggests that if Amazon sharply lowers AVC — that is, makes production massively more efficient — and then does not drop prices, it is a “predator.” But by pricing below its AVC in the first place, Amazon in essence gave consumers a loan — they were able to enjoy what Sussman believes are radically low prices while Amazon worked to actually make those prices possible by creating production efficiencies. It seems rather strange to punish a firm for loaning consumers a large measure of wealth. It’s doubly odd when you then factor the recoupment problem back in: as soon as other firms figure out that a lower AVC is possible, they will enter the market and bid away any monopoly profits from Amazon.

Sussman’s Technical Analysis Is Flawed

While there are issues with Sussman’s general theory of harm, there are also some specific problems with his technical analysis of Amazon’s financial statements.

Capital Leases Are a Fixed Cost

First, capital leases should not be included in cost calculations for a predatory pricing case because they are fixed — not variable — costs. Again, “below-cost” claims in predatory pricing cases generally use AVC (and sometimes marginal cost) as the relevant cost measure.
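A toy illustration of why the distinction matters (all figures hypothetical): folding a fixed cost like capital leases into the calculation pushes average total cost above price even when price comfortably exceeds AVC, flipping the apparent answer to the “below cost” question.

```python
# Hypothetical cost figures (invented for illustration) showing the
# difference between average variable cost (AVC) and average total cost
# (ATC). Fixed costs such as capital leases do not vary with units sold,
# so they enter ATC but not AVC -- and AVC is the measure courts use.

units = 1_000_000
variable_cost_per_unit = 8.0   # cost of goods, shipping, fulfillment labor
fixed_costs = 5_000_000.0      # e.g., capital leases on warehouses and servers

avc = variable_cost_per_unit                        # per-unit variable cost
atc = variable_cost_per_unit + fixed_costs / units  # adds 5.0 of fixed cost per unit

price = 10.0
print(price > avc)  # passes the usual below-AVC screen
print(price < atc)  # looks "below cost" only if fixed costs are folded in
```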

Capital Leases Are Mostly for Server Farms

Second, the usual story is that Amazon uses its wildly profitable Amazon Web Services (AWS) division to subsidize predatory pricing in its retail division. But Amazon’s “capital leases” — Sussman’s hidden costs in the free cash flow calculations — are mostly for AWS capital expenditures (i.e., server farms).

According to the most recent annual report: “Property and equipment acquired under capital leases was $5.7 billion, $9.6 billion, and $10.6 billion in 2016, 2017, and 2018, with the increase reflecting investments in support of continued business growth primarily due to investments in technology infrastructure for AWS, which investments we expect to continue over time.”

In other words, any adjustments to the free cash flow numbers for capital leases would make Amazon Web Services appear less profitable, and would not have a large effect on the accounting for Amazon’s retail operation (the only division thus far accused of predatory pricing).

Look at Operating Cash Flow Instead of Free Cash Flow

Again, while cash flow measures cannot prove or disprove the existence of predatory pricing, a positive cash flow measure should make us more skeptical of such accusations. In the retail sector, operating cash flow is the appropriate metric to consider. As shown above, Amazon has had positive (and increasing) operating cash flow since 2002.
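The difference between the two measures can be sketched with a stylized cash flow statement (hypothetical $bn figures, not Amazon’s reported numbers): heavy investment can turn positive operating cash flow into negative free cash flow without any below-cost selling.

```python
# Stylized cash flow sketch (hypothetical figures, in $bn): free cash
# flow nets out investment outlays, so a firm that invests aggressively
# can show negative free cash flow even while its operations generate
# substantial cash.

operating_cash_flow = 18.0  # cash generated by operations
capex = 12.0                # purchases of property and equipment
capital_leases = 9.0        # assets acquired under capital leases

fcf_conventional = operating_cash_flow - capex               # positive
fcf_adjusted = operating_cash_flow - capex - capital_leases  # negative

print(fcf_conventional, fcf_adjusted)
```

The negative adjusted figure here reflects investment, not operating losses — which is why a positive operating cash flow is the more telling metric for the retail division.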

Your Theory of Harm Is Also Known as “Investment”

Third, in general, Sussman’s novel predatory pricing theory is indistinguishable from pro-competitive behavior in an industry with high fixed costs. From the abstract (emphasis added):

[N]egative cash flow firm[s] … can achieve greater market share through predatory pricing strategies that involve long-term below average variable cost prices … By charging prices in the present reflecting future lower costs based on prospective technological and scale efficiencies, these firms are able to rationalize their predatory pricing practices to investors and shareholders.

“Charging prices in the present reflecting future lower costs based on prospective technological and scale efficiencies” is literally what it means to invest in capex and R&D.

Sussman’s paper presents a clever attempt to work around the doctrinal limitations on predatory pricing. But, if courts seriously adopt an approach like this, they will be putting in place a legal apparatus that quite explicitly focuses on discouraging investment. This is one of the last things we should want antitrust law to be doing.

The once-mighty Blockbuster video chain is now down to a single store, in Bend, Oregon. It appears to be the only video rental store in Bend, aside from those offering “adult” features. Does that make Blockbuster a monopoly?

It seems almost silly to ask if the last firm in a dying industry is a monopolist. But it’s just as silly to ask if the first firm in an emerging industry is a monopolist. They’re silly questions because they focus on the monopoly itself, rather than on the alternative: what if the firm, and therefore the industry, did not exist at all?

A recent post on CEPR’s Vox blog points out something very obvious, but often forgotten: “The deadweight loss from a monopolist’s not producing at all can be much greater than from charging too high a price.”

The figure below is from the post, by Michael Kremer, Christopher Snyder, and Albert Chen. With monopoly pricing (and no price discrimination), consumer surplus is given by CS, profit by ∏, and deadweight loss by H.

The authors point out that if fixed costs (or entry costs) are so high that the firm does not enter the market, the deadweight loss is equal to CS + H.
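The comparison can be made concrete with a back-of-the-envelope calculation. The sketch below assumes linear demand (P = a − bQ) and constant marginal cost c; the parameter values are illustrative only, not drawn from the Kremer, Snyder, and Chen post.

```python
def monopoly_surplus(a, b, c, fixed_cost):
    """Surplus under single-price monopoly with linear demand P = a - b*Q."""
    q_m = (a - c) / (2 * b)                # monopoly quantity (where MR = MC)
    p_m = (a + c) / 2                      # monopoly price
    cs = 0.5 * (a - p_m) * q_m             # consumer surplus (CS)
    profit = (p_m - c) * q_m - fixed_cost  # profit (∏) net of the fixed/entry cost
    q_c = (a - c) / b                      # quantity where price equals marginal cost
    dwl = 0.5 * (p_m - c) * (q_c - q_m)    # deadweight loss from monopoly pricing (H)
    return cs, profit, dwl

# Fixed cost exactly exhausts gross monopoly profit: the firm just breaks even.
cs, profit, h = monopoly_surplus(a=10.0, b=1.0, c=2.0, fixed_cost=16.0)
print(cs, profit, h)  # 8.0 0.0 8.0
```

With these numbers, monopoly pricing costs society H = 8. But if the fixed cost were any higher, the firm would not enter at all, and society would lose CS + H = 16 — twice the loss from monopoly pricing alone, which is the post’s point.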

Too often, competition authorities fall for the Nirvana Fallacy, a tendency to compare messy, real-world economic circumstances today to idealized potential alternatives and to justify policies on the basis of the discrepancy between the real world and some alternative perfect (or near-perfect) world.

In 2005, Blockbuster dropped its bid to acquire competing Hollywood Entertainment Corporation, the then-second-largest video rental chain. Blockbuster said it expected the Federal Trade Commission would reject the deal on antitrust grounds. The merged companies would have made up more than 50 percent of the home video rental market.

Five years later Blockbuster, Hollywood, and third-place Movie Gallery had all filed for bankruptcy.

Blockbuster’s then-CEO, John Antioco, has been ridiculed for passing up an opportunity to buy Netflix for $50 million in 2000. But Blockbuster knew its retail world was changing and thought a consolidation might help it survive that change.

But just as Antioco can be chided for undervaluing Netflix, so should the FTC. The regulators were so focused on Blockbuster-Hollywood market share that they undervalued the competitive pressure Netflix and other services were bringing. With hindsight, it seems obvious that Blockbuster’s post-merger market share would not have conveyed any significant power over price. What’s not known is whether the merger would have put off the bankruptcy of the three largest video rental retailers.

Nor is it known whether consumers are better or worse off following the exit of Blockbuster, Hollywood, and Movie Gallery.

Nevertheless, the video rental business highlights a key point in an earlier TOTM post: A great deal of competition comes from the flanks, rather than head-on. Head-on competition from rental kiosks, such as Redbox, nibbled at the sales and margins of Blockbuster, Hollywood, and Movie Gallery. But, the real killer of the bricks-and-mortar stores came from a wide range of streaming services.

The lesson for regulators is that competition is nearly always and everywhere present, even if it’s standing on the sidelines.

Zoom, one of Silicon Valley’s lesser-known unicorns, has just gone public. At the time of writing, its shares are trading at about $65.70, placing the company’s value at $16.84 billion. There are good reasons for this success. According to its Form S-1, Zoom’s revenue rose from about $60 million in 2017 to a projected $330 million in 2019, and the company has already surpassed break-even. This growth was notably fueled by a thriving community of users who collectively spend approximately 5 billion minutes per month in Zoom meetings.

To get to where it is today, Zoom had to compete against long-established firms with vast client bases and far deeper pockets. These include the likes of Microsoft, Cisco, and Google. Further complicating matters, the video communications market exhibits some prima facie traits that are typically associated with the existence of network effects. For instance, the value of Skype to one user depends – at least to some extent – on the number of other people that might be willing to use the network. In these settings, it is often said that positive feedback loops may cause the market to tip in favor of a single firm that is then left with an unassailable market position. Although Zoom still faces significant competitive challenges, it has nonetheless established a strong position in a market previously dominated by powerful incumbents who could theoretically count on network effects to stymie its growth.

Further complicating matters, Zoom chose to compete head-on with these incumbents. It did not create a new market or a highly differentiated product. Zoom’s Form S-1 is quite revealing. The company cites the quality of its product as its most important competitive strength. Similarly, when listing the main benefits of its platform, Zoom emphasizes that its software is “easy to use”, “easy to deploy and manage”, “reliable”, etc. In its own words, Zoom has thus gained a foothold by offering an existing service that works better than that of its competitors.

And yet, this is precisely the type of story that a literal reading of the network effects literature would suggest is impossible, or at least highly unlikely. For instance, the foundational papers on network effects often cite the example of the DVORAK keyboard (David, 1985; and Farrell & Saloner, 1985). These early scholars argued that, despite it being the superior standard, the DVORAK layout failed to gain traction because of the network effects protecting the QWERTY standard. In other words, consumers failed to adopt the superior DVORAK layout because they were unable to coordinate on their preferred option. It must be noted, however, that the conventional telling of this story was forcefully criticized by Liebowitz & Margolis in their classic 1995 article, The Fable of the Keys.

Despite Liebowitz & Margolis’ critique, the network effects story retains its dominance in many quarters. And in that respect, the emergence of Zoom is something of a cautionary tale. As influential as it may be, the network effects literature has tended to overlook a number of factors that may mitigate, or even eliminate, the likelihood of problematic outcomes. Zoom is yet another illustration that policymakers should be careful when they make normative inferences from positive economics.

A Coasian perspective

It is now widely accepted that multi-homing and the absence of switching costs can significantly curtail the potentially undesirable outcomes that are sometimes associated with network effects. But other possibilities are often overlooked. For instance, almost none of the foundational network effects papers pay any notice to the application of the Coase theorem (though it has been well-recognized in the two-sided markets literature).

Take a purported market failure that is commonly associated with network effects: an installed base of users prevents the market from switching towards a new standard, even if it is superior (this is broadly referred to as “excess inertia,” while the opposite scenario is referred to as “excess momentum”). DVORAK’s failure is often cited as an example.

Astute readers will quickly recognize that this externality problem is not fundamentally different from those discussed in Ronald Coase’s masterpiece, “The Problem of Social Cost,” or Steven Cheung’s “The Fable of the Bees” (to which Liebowitz & Margolis paid homage in their article’s title). In the case at hand, there are at least two sets of externalities at play. First, early adopters of the new technology impose a negative externality on the old network’s installed base (by reducing its network effects), and a positive externality on other early adopters (by growing the new network). Conversely, installed base users impose a negative externality on early adopters and a positive externality on other remaining users.

Describing these situations (with a haughty confidence reminiscent of Paul Samuelson and Arthur Cecil Pigou), Joseph Farrell and Garth Saloner conclude that:

In general, he or she [i.e. the user exerting these externalities] does not appropriately take this into account.

Similarly, Michael Katz and Carl Shapiro assert that:

In terms of the Coase theorem, it is very difficult to design a contract where, say, the (potential) future users of HDTV agree to subsidize today’s buyers of television sets to stop buying NTSC sets and start buying HDTV sets, thereby stimulating the supply of HDTV programming.

And yet it is far from clear that consumers and firms can never come up with solutions that mitigate these problems. As Daniel Spulber has suggested, referral programs offer a case in point. These programs usually allow early adopters to receive rewards in exchange for bringing new users to a network. One salient feature of these programs is that they do not simply charge a lower price to early adopters; instead, in order to obtain a referral fee, there must be some agreement between the early adopter and the user who is referred to the platform. This leaves ample room for the reallocation of rewards. Users might, for instance, choose to split the referral fee. Alternatively, the early adopter might invest time to familiarize the switching user with the new platform, hoping to earn money when the user jumps ship. Both of these arrangements may reduce switching costs and mitigate externalities.

Daniel Spulber also argues that users may coordinate spontaneously. For instance, social groups often decide upon the medium they will use to communicate. Families might choose to stay on the same mobile phone network. And larger groups (such as an incoming class of students) may agree upon a social network to share necessary information. In these contexts, there is at least some room to pressure peers into adopting a new platform.

Finally, firms and other forms of governance may also play a significant role. For instance, employees are routinely required to use a series of networked goods. Common examples include office suites, email clients, social media platforms (such as Slack), or video communications applications (Zoom, Skype, Google Hangouts, etc.). In doing so, firms presumably act as islands of top-down decision-making and impose those products that maximize the collective preferences of employers and employees. Similarly, a single firm choosing to join a network (notably by adopting a standard) may generate enough momentum for a network to gain critical mass. Apple’s decisions to adopt USB-C connectors on its laptops and to ditch headphone jacks on its iPhones both spring to mind. Likewise, it has been suggested that distributed ledger technology and initial coin offerings may facilitate the creation of new networks. The intuition is that so-called “utility tokens” may incentivize early adopters to join a platform, despite initially weak network effects, because they expect these tokens to increase in value as the network expands.

A combination of these arrangements might explain how Zoom managed to grow so rapidly, despite the presence of powerful incumbents. In its own words:

Our rapid adoption is driven by a virtuous cycle of positive user experiences. Individuals typically begin using our platform when a colleague or associate invites them to a Zoom meeting. When attendees experience our platform and realize the benefits, they often become paying customers to unlock additional functionality.

All of this is not to say that network effects will always be internalized through private arrangements, but rather that it is equally wrong to assume that transaction costs systematically prevent efficient coordination among users.

Misguided regulatory responses

Over the past couple of months, several antitrust authorities around the globe have released reports concerning competition in digital markets (UK, EU, Australia), or held hearings on this topic (US). A recurring theme throughout their published reports is that network effects almost inevitably weaken competition in digital markets.

For instance, the report commissioned by the European Commission mentions that:

Because of very strong network externalities (especially in multi-sided platforms), incumbency advantage is important and strict scrutiny is appropriate. We believe that any practice aimed at protecting the investment of a dominant platform should be minimal and well targeted.

The Australian Competition & Consumer Commission concludes that:

There are considerable barriers to entry and expansion for search platforms and social media platforms that reinforce and entrench Google and Facebook’s market power. These include barriers arising from same-side and cross-side network effects, branding, consumer inertia and switching costs, economies of scale and sunk costs.

Finally, a panel of experts in the United Kingdom found that:

Today, network effects and returns to scale of data appear to be even more entrenched and the market seems to have stabilised quickly compared to the much larger degree of churn in the early days of the World Wide Web.

To address these issues, these reports suggest far-reaching policy changes. These include shifting the burden of proof in competition cases from authorities to defendants, establishing specialized units to oversee digital markets, and imposing special obligations upon digital platforms.

The story of Zoom’s emergence and the important insights that can be derived from the Coase theorem both suggest that these fears may be somewhat overblown.

Rivals do indeed find ways to overthrow entrenched incumbents with some regularity, even when these incumbents are shielded by network effects. Of course, critics may retort that this is not enough, that competition may sometimes arrive too late (excess inertia, i.e., “a socially excessive reluctance to switch to a superior new standard”) or too fast (excess momentum, i.e., “the inefficient adoption of a new technology”), and that the problem is not just one of network effects, but also one of economies of scale, information asymmetry, etc. But this comes dangerously close to the Nirvana fallacy. To begin, it assumes that regulators are able to reliably navigate markets toward these optimal outcomes — which is questionable, at best. Moreover, the regulatory cost of imposing perfect competition in every digital market (even if it were possible) may well outweigh the benefits that this achieves. Mandating far-reaching policy changes in order to address sporadic and heterogeneous problems is thus unlikely to be the best solution.

Instead, the optimal policy notably depends on whether, in a given case, users and firms can coordinate their decisions without intervention in order to avoid problematic outcomes. A case-by-case approach thus seems by far the best solution.

And competition authorities need look no further than their own decisional practice. The European Commission’s decision in the Facebook/WhatsApp merger offers a good example (this was before Margrethe Vestager’s appointment at DG Competition). In its decision, the Commission concluded that the fast-moving nature of the social network industry, widespread multi-homing, and the fact that neither Facebook nor WhatsApp controlled any essential infrastructure, prevented network effects from acting as a barrier to entry. Regardless of its ultimate position, this seems like a vastly superior approach to competition issues in digital markets. The Commission adopted a similar reasoning in the Microsoft/Skype merger. Unfortunately, the Commission seems to have departed from this measured attitude in more recent decisions. In the Google Search case, for example, the Commission assumes that the mere existence of network effects necessarily increases barriers to entry:

The existence of positive feedback effects on both sides of the two-sided platform formed by general search services and online search advertising creates an additional barrier to entry.

A better way forward

Although the positive economics of network effects are generally correct and most definitely useful, some of the normative implications that have been derived from them are deeply flawed. Too often, policymakers and commentators conclude that these potential externalities inevitably lead to stagnant markets where competition is unable to flourish. But this does not have to be the case. The emergence of Zoom shows that superior products may prosper despite the presence of strong incumbents and network effects.

Basing antitrust policies on sweeping presumptions about digital competition – such as the idea that network effects are rampant or the suggestion that online platforms necessarily imply “extreme returns to scale” – is thus likely to do more harm than good. Instead, antitrust authorities should take a leaf out of Ronald Coase’s book, and avoid blackboard economics in favor of a more granular approach.

In a recent NY Times opinion piece, Tim Wu, like Elizabeth Holmes, lionizes Steve Jobs. Like Jobs with the iPod and iPhone, and Holmes with the Theranos Edison machine, Wu tells us we must simplify the public’s experience of complex policy into a simple box with an intuitive interface. In this spirit he argues that “what the public wants from government is help with complexity,” such that “[t]his generation of progressives … must accept that simplicity and popularity are not a dumbing-down of policy.”

This argument provides remarkable insight into the complexity problems of progressive thought. Three of these are taken up below: the mismatch in comparing the work of government to the success of Jobs; the mismatch between Wu’s telling of Jobs’s success and the reality of that success; and the latent hypocrisy in Wu’s “simplicity for me, complexity for thee” argument.

Contra Wu’s argument, we need politicians that embrace and lay bare the complexity of policy issues. Too much of our political moment is dominated by demagogues on every side of policy debates offering simple solutions to simplified accounts of complex policy issues. We need public intellectuals, and hopefully politicians as well, to make the case for complexity. Our problems are complex and solutions to them hard (and sometimes unavailing). Without leaders willing to steer into complexity, we can never have a polity able to address complexity.

I. “Good enough for government work” isn’t good enough for Jobs

As an initial matter, there is a great deal of wisdom in Wu’s recognition that the public doesn’t want complexity. As I said at the annual Silicon Flatirons conference in February, consumers don’t want a VCR with lots of dials and knobs that let them control lots of specific features—they just want the damn thing to work. And as that example is meant to highlight, once it does work, most consumers are happy to leave well enough alone (as demonstrated by millions of clocks that would continue to blink 12:00 if VCRs weren’t so 1990s).

Where Wu goes wrong, though, is that he fails to recognize that despite this desire for simplicity, for two decades VCR manufacturers designed and sold VCRs with clocks that were never set—a persistent blinking to constantly remind consumers of their own inadequacies. Had the manufacturers had any insight into the consumer desire for simplicity, all those clocks would have been used for something—anything—other than a reminder that consumers didn’t know how to set them. (Though, to their credit, these devices were designed to operate as most consumers desired without imposing any need to set the clock upon them—a model of simplicity in basic operation that allows consumers to opt-in to a more complex experience.)

If the government were populated by visionaries like Jobs, Wu’s prescription would be wise. But Jobs was a once-in-a-generation thinker. No one in a generation of VCR designers had the insight to design a VCR without a clock (or at least a clock that didn’t blink in a constant reminder of the owner’s inability to set it). And similarly few among the ranks of policy designers are likely to have his abilities, either. On the other hand, the public loves the promise of easy solutions to complex problems. Charlatans and demagogues who would cast themselves in his image, like Holmes did with Theranos, can find government posts in abundance.

Of course, in his paean to offering the public less choice, Wu, himself a frequent designer of government policy, compares the art of policy design to the work of Jobs—not of Holmes. But where he promises a government run in the manner of Apple, he would more likely give us one more in the mold of Theranos.

There is a more pernicious side to Wu’s argument. He speaks of respect for the public, arguing that “Real respect for the public involves appreciating what the public actually wants and needs,” and that “They would prefer that the government solve problems for them.” Another aspect of respect for the public is recognizing their fundamental competence—that progressive policy experts are not the only ones who are able to understand and address complexity. Most people never set their VCRs’ clocks because they felt no need to, not because they were unable to figure out how to do so. Most people choose not to master the intricacies of public policy. But this is not because the progressive expert class is uniquely able to do so. It is, as Wu himself notes, that most people do not have the unlimited time and attention that would be needed to do so—time and attention that is afforded to Wu by his social class.

Wu’s assertion that the public “would prefer that the government solve problems for them” carries echoes of Louis Brandeis, who famously said of consumers that they were “servile, self-indulgent, indolent, ignorant.” Such a view naturally gives rise to Wu’s assumption that the public wants the government to solve problems for them. It assumes that they are unable to solve those problems on their own.

But what Brandeis and progressives cast in his mold attribute to servile indolence is more often a reflection that hoi polloi simply do not have the same concerns as Wu’s progressive expert class. If they had the time to care about the issues Wu would devote his government to, they could likely address them on their own. The fact that they don’t is less a reflection of the public’s ability than of its priorities.

II. Jobs had no monopoly on simplicity

There is another aspect to Wu’s appeal to simplicity in design that is, again, captured well in his invocation of Steve Jobs. Jobs was exceptionally successful with his minimalist, simple designs. He made a fortune for himself and more for Apple. His ideas made Apple one of the most successful companies, with one of the largest user bases, in the history of the world.

Yet many people hate Apple products. Some of these users prefer to have more complex, customizable devices—perhaps because they have particularized needs or perhaps simply because they enjoy having that additional control over how their devices operate and the feeling of ownership that that brings. Some users might dislike Apple products because the interface that is “intuitive” to millions of others is not at all intuitive to them. As trivial as it sounds, most PC users are accustomed to two-button mice—transitioning to Apple’s one-button mouse is exceptionally discomfiting for many of these users. (In fairness, the one-button mouse design used by Apple products is not attributable to Steve Jobs.) And other users still might prefer devices that are simple in other ways, so are drawn to other products that better cater to their precise needs.

Apple has, perhaps, experienced periods of market dominance with specific products. But this has never been durable—Apple has always faced competition. And this has ensured that those parts of the public that were not well-served by Jobs’s design choices were not bound to use them—they always had alternatives.

Indeed, that is the redeeming aspect of the Theranos story: the market did what it was supposed to. While too many consumers may have been harmed by Holmes’ charlatan business practices, the reality is that once she was forced to bring the company’s product to market it was quickly outed as a failure.

This is how the market works. Companies that design good products, like Apple, are rewarded; other companies then step in to compete by offering yet better products or by addressing other segments of the market. Some of those companies succeed; most, like Theranos, fail.

This dynamic simply does not exist with government. Government is a policy monopolist. A simplified, streamlined policy that effectively serves half the population does not effectively serve the other half. There is no alternative government that will offer competing policy designs. And to the extent that a given policy serves part of the public better than others, it creates winners and losers.

Of course, the right response to the inadequacy of Wu’s call for more, less complex policy is not that we need more, more complex policy. Rather, it’s that we need less policy—at least policy being dictated and implemented by the government. This is one of the stalwart arguments we free market and classical liberal types offer in favor of market economies: they are able to offer a wider range of goods and services that better cater to a wider range of needs of a wider range of people than the government. The reason policy grows complex is because it is trying to address complex problems; and when it fails to address those problems on a first cut, the solution is more often than not to build “patch” fixes on top of the failed policies. The result is an ever-growing book of rules bound together with voluminous “kludges” that is forever out-of-step with the changing realities of a complex, dynamic world.

The solution to so much complexity is not to sweep it under the carpet in the interest of offering simpler, but only partial, solutions catered to the needs of an anointed subset of the public. The solution is to find better ways to address those complex problems—and oftentimes it’s simply the case that the market is better suited to such solutions.

III. A complexity: What does Wu think of consumer protection?

There is a final, and perhaps most troubling, aspect to Wu’s argument. He argues that respect for the public does not require “offering complete transparency and a multiplicity of choices.” Yet that is what he demands of business. As an academic and government official, Wu has been a loud and consistent consumer protection advocate, arguing that consumers are harmed when firms fail to provide transparency and choice—and that the government must use its coercive power to ensure that they do so.

Wu derives his insight that simpler-design-can-be-better-design from the success of Jobs—and recognizes more broadly that the consumer experience of products of the technological revolution (perhaps one could even call it the tech industry) is much better today because of this simplicity than it was in earlier times. Consumers, in other words, can be better off with firms that offer less transparency and choice. This, of course, is intuitive when one recognizes (as Wu has) that time and attention are among the scarcest of resources.

Steve Jobs and Elizabeth Holmes both understood that the avoidance of complexity and minimizing of choices are hallmarks of good design. Jobs built an empire around this; Holmes cost investors hundreds of millions of dollars in her failed pursuit. But while Holmes failed where Jobs succeeded, her failure was not tragic: Theranos was never the only medical testing laboratory in the market and, indeed, was never more than a bit player in that market. For every Apple that thrives, the marketplace erases a hundred Theranoses. But we do not have a market of governments. Wu’s call for policy to be more like Apple is a call for most government policy to fail like Theranos. Perhaps where the challenge is to do more complex policy simply, the simpler solution is to do less, but simpler, policy well.

Conclusion

We need less dumbing down of complex policy in the interest of simplicity; and we need leaders who are able to make citizens comfortable with and understanding of complexity. Wu is right that good policy need not be complex. But the lesson from that is not that complex policy should be made simple. Rather, the lesson is that policy that cannot be made simple may not be good policy after all.

Last month, the European Commission slapped another fine upon Google for infringing European competition rules (€1.49 billion this time). This brings Google’s contribution to the EU budget to a dizzying total of €8.25 billion (to put this into perspective, the total EU budget for 2019 is €165.8 billion). Given this massive number, and the geographic location of Google’s headquarters, it is perhaps not surprising that some high-profile commentators, including former President Obama and President Trump, have raised concerns about potential protectionism on the Commission’s part.

In a new ICLE Issue Brief, we question whether there is any merit to these claims of protectionism. We show that, since the entry into force of Regulation 1/2003 (the main piece of legislation that implements the competition provisions of the EU treaties), US firms have borne the lion’s share of monetary penalties imposed by the Commission for breaches of competition law.

For instance, US companies have been fined a total of €10.91 billion by the European Commission, compared to €1.17 billion for their European counterparts:

Although this discrepancy seems to point towards protectionism, we believe that the case is not so clear-cut. The large fines paid by US firms are notably driven by a small subset of decisions in the tech sector, where the complainants were also American companies. Tech markets also exhibit various features which tend to inflate the amount of fines.
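For a sense of scale, the headline discrepancy can be reduced to a single share figure (a back-of-the-envelope calculation on the two totals cited above, nothing more):

```python
# Fine totals since Regulation 1/2003 entered into force, in € billions,
# as cited in the issue brief.
us_fines = 10.91
eu_fines = 1.17

us_share = us_fines / (us_fines + eu_fines)
print(f"{us_share:.0%}")  # 90%
```

In other words, US firms account for roughly nine-tenths of the monetary penalties in this comparison — which is precisely why the protectionism question gets asked, and why the composition of that total (a handful of large tech-sector decisions) matters so much.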

Despite the plausibility of these potential alternative explanations, there may still be some legitimacy to the allegations of protectionism. The European Commission is, by design, a political body. One may thus question the extent to which Europe’s paucity of tech sector giants is driving the Commission’s ideological preference for tech-sector intervention and the protection of the industry’s small competitors.

Click here to read the full article.

(The following is adapted from a recent ICLE Issue Brief on the flawed essential facilities arguments undergirding the EU competition investigations into Amazon’s marketplace that I wrote with Geoffrey Manne. The full brief is available here.)

Amazon has largely avoided the crosshairs of antitrust enforcers to date. The reasons seem obvious: in the US it handles a mere 5% of all retail sales (with lower shares worldwide), and it consistently provides access to a wide array of affordable goods. Yet, even with Amazon’s obvious lack of dominance in the general retail market, the EU and some of its member states are opening investigations.

Commissioner Margrethe Vestager’s probe into Amazon, which came to light in September, centers on whether Amazon is illegally using its dominant position vis-à-vis third-party merchants on its platforms in order to obtain data that it then uses either to promote its own direct sales, or else to develop competing products under its private label brands. More recently, Austria and Germany have launched separate investigations of Amazon rooted in many of the same concerns as those of the European Commission. The German investigation also focuses on whether the contractual relationships that third-party sellers enter into with Amazon are unfair because these sellers are “dependent” on the platform.

One of the fundamental, erroneous assumptions upon which these cases are built is the alleged “essentiality” of the underlying platform or input. In truth, these sorts of cases are more often based on stories of firms that chose to build their businesses in a way that relies on a specific platform. In other words, their own decisions — from which they substantially benefited, of course — made their investments highly “asset specific” and thus vulnerable to otherwise avoidable risks. When a platform on which these businesses rely makes a disruptive move, the third parties cry foul, even though the platform was not — nor should have been — under any obligation to preserve the status quo on behalf of third parties.

Essential or not, that is the question

All three investigations are effectively premised on a version of an “essential facilities” theory — the claim that Amazon is essential to these companies’ ability to do business.

There are good reasons that the US has tightly circumscribed the scope of permissible claims invoking the essential facilities doctrine. Such “duty to deal” claims are “at or near the outer boundary” of US antitrust law. And there are good reasons why the EU and its member states should be similarly skeptical.

Characterizing one firm as essential to the operation of other firms is tricky because “[c]ompelling [innovative] firms to share the source of their advantage… may lessen the incentive for the monopolist, the rival, or both to invest in those economically beneficial facilities.” Further, the classification requires “courts to act as central planners, identifying the proper price, quantity, and other terms of dealing—a role for which they are ill-suited.”

The key difficulty is that alleged “essentiality” actually falls on a spectrum. On one end is something like a true monopoly utility that is actually essential to all firms that use its service as a necessary input; on the other is a firm that offers highly convenient services that make it much easier for firms to operate. This latter definition of “essentiality” describes firms like Google and Amazon, but it is not accurate to characterize such highly efficient and effective firms as truly “essential.” Instead, companies that choose to take advantage of the benefits such platforms offer, and to tailor their business models around them, suffer from an asset specificity problem.

Geoffrey Manne noted this problem in the context of the EU’s Google Shopping case:

A content provider that makes itself dependent upon another company for distribution (or vice versa, of course) takes a significant risk. Although it may benefit from greater access to users, it places itself at the mercy of the other — or at least faces great difficulty (and great cost) adapting to unanticipated, crucial changes in distribution over which it has no control.

Third-party sellers that rely upon Amazon without a contingency plan are taking a calculated risk that, as business owners, they would typically be expected to manage. The investigations by European authorities are based on the notion that antitrust law might require Amazon to remove that risk by prohibiting it from undertaking certain conduct that might raise costs for its third-party sellers.

Implications and extensions

In the full issue brief, we consider the tensions in EU law between seeking to promote innovation and protect the competitive process, on the one hand, and the propensity of EU enforcers to rely on essential facilities-style arguments on the other. One of the fundamental errors that leads EU enforcers in this direction is that they confuse the distribution channel of the Internet with an antitrust-relevant market definition.

A claim based on some flavor of Amazon-as-essential-facility should be untenable given today’s market realities because Amazon is, in fact, just one mode of distribution among many. Commerce on the Internet is still just commerce. The only thing preventing a merchant from operating a viable business using any of a number of different mechanisms is the transaction costs it would incur adjusting to a different mode of doing business. Casting Amazon’s marketplace as an essential facility insulates third-party firms from the consequences of their own decisions — from business model selection to marketing and distribution choices. Commerce is nothing new and offline distribution channels and retail outlets — which compete perfectly capably with online — are well developed. Granting retailers access to Amazon’s platform on artificially favorable terms is no more justifiable than granting them access to a supermarket end cap, or a particular unit at a shopping mall. There is, in other words, no business or economic justification for granting retailers in the time-tested and massive retail market an entitlement to use a particular mode of marketing and distribution just because they find it more convenient.

Source: Benedict Evans

[N]ew combinations are, as a rule, embodied, as it were, in new firms which generally do not arise out of the old ones but start producing beside them; … in general it is not the owner of stagecoaches who builds railways. – Joseph Schumpeter, January 1934

Elizabeth Warren wants to break up the tech giants — Facebook, Google, Amazon, and Apple — claiming they have too much power and represent a danger to our democracy. As part of our response to her proposal, we shared a couple of headlines from 2007 claiming that MySpace had an unassailable monopoly in the social media market.

Tommaso Valletti, the chief economist of the Directorate-General for Competition (DG COMP) of the European Commission, said, in what we assume was a reference to our posts, “they go on and on with that single example to claim that [Facebook] and [Google] are not a problem 15 years later … That’s not what I would call an empirical regularity.”

We appreciate the invitation to show that prematurely dubbing companies “unassailable monopolies” is indeed an empirical regularity.

It’s Tough to Make Predictions, Especially About the Future of Competition in Tech

No one is immune to this phenomenon. Antitrust regulators often take a static view of competition, failing to anticipate dynamic technological forces that will upend market structure and competition.

Scientists and academics make a different kind of error. They are driven by the need to satisfy their curiosity rather than shareholders. Upon inventing a new technology or discovering a new scientific truth, academics often fail to see the commercial implications of their findings.

Maybe the titans of industry don’t make these kinds of mistakes because they have skin in the game? The profit and loss statement is certainly a merciless master. But it does not give CEOs the power of premonition. Corporate executives hailed as visionaries in one era often become blinded by their success, failing to see impending threats to their company’s core value propositions.

Furthermore, it’s often hard as outside observers to tell after the fact whether business leaders simply didn’t see a tidal wave of disruption coming or, worse, saw it coming but were unable to steer their bureaucratic, slow-moving ships to safety. Either way, the outcome is the same.

Here’s the pattern we observe over and over: extreme success in one context makes it difficult to predict how and when the next paradigm shift will occur in the market. Incumbents become less innovative as they get lulled into stagnation by high profit margins in established lines of business. (This is essentially the thesis of Clay Christensen’s The Innovator’s Dilemma).

Even if the anti-tech populists are powerless to make predictions, history does offer us some guidance about the future. We have seen time and again that apparently unassailable monopolists are quite effectively assailed by technological forces beyond their control.

PCs

Source: Horace Dediu

Jan 1977: Commodore PET released

Jun 1977: Apple II released

Aug 1977: TRS-80 released

Feb 1978: “I.B.M. Says F.T.C. Has Ended Its Typewriter Monopoly Study” (NYT)

Mobile

Source: Comscore

Mar 2000: Palm IPOs at $53 billion

Sep 2006: “Everyone’s always asking me when Apple will come out with a cellphone. My answer is, ‘Probably never.’” – David Pogue (NYT)

Apr 2007: “There’s no chance that the iPhone is going to get any significant market share.” Ballmer (USA TODAY)

Jun 2007: iPhone released

Nov 2007: “Nokia: One Billion Customers—Can Anyone Catch the Cell Phone King?” (Forbes)

Sep 2013: “Microsoft CEO Ballmer Bids Emotional Farewell to Wall Street” (Reuters)

If there’s one thing I regret, there was a period in the early 2000s when we were so focused on what we had to do around Windows that we weren’t able to redeploy talent to the new device form factor called the phone.

Search

Source: Distilled

Mar 1998: “How Yahoo! Won the Search Wars” (Fortune)

Once upon a time, Yahoo! was an Internet search site with mediocre technology. Now it has a market cap of $2.8 billion. Some people say it’s the next America Online.

Sep 1998: Google founded

Instant Messaging

Sep 2000: “AOL Quietly Linking AIM, ICQ” (ZDNet)

AOL’s dominance of instant messaging technology, the kind of real-time e-mail that also lets users know when others are online, has emerged as a major concern of regulators scrutinizing the company’s planned merger with Time Warner Inc. (twx). Competitors to Instant Messenger, such as Microsoft Corp. (msft) and Yahoo! Inc. (yhoo), have been pressing the Federal Communications Commission to force AOL to make its services compatible with competitors’.

Dec 2000: “AOL’s Instant Messaging Monopoly?” (Wired)

Dec 2015: Report for the European Parliament

There have been isolated examples, as in the case of obligations of the merged AOL / Time Warner to make AOL Instant Messenger interoperable with competing messaging services. These obligations on AOL are widely viewed as having been a dismal failure.

Oct 2017: AOL shuts down AIM

Jan 2019: “Zuckerberg Plans to Integrate WhatsApp, Instagram and Facebook Messenger” (NYT)

Retail

Source: Seeking Alpha

May 1997: Amazon IPO

Mar 1998: American Booksellers Association files antitrust suit against Borders, B&N

Feb 2005: Amazon Prime launches

Jul 2006: “Breaking the Chain: The Antitrust Case Against Wal-Mart” (Harper’s)

Feb 2011: “Borders Files for Bankruptcy” (NYT)

Social

Feb 2004: Facebook founded

Jan 2007: “MySpace Is a Natural Monopoly” (TechNewsWorld)

Seventy percent of Yahoo 360 users, for example, also use other social networking sites — MySpace in particular. Ditto for Facebook, Windows Live Spaces and Friendster … This presents an obvious, long-term business challenge to the competitors. If they cannot build up a large base of unique users, they will always be on MySpace’s periphery.

Feb 2007: “Will Myspace Ever Lose Its Monopoly?” (Guardian)

Jun 2011: “Myspace Sold for $35m in Spectacular Fall from $12bn Heyday” (Guardian)

Music

Source: RIAA

Dec 2003: “The subscription model of buying music is bankrupt. I think you could make available the Second Coming in a subscription model, and it might not be successful.” – Steve Jobs (Rolling Stone)

Apr 2006: Spotify founded

Jul 2009: “Apple’s iPhone and iPod Monopolies Must Go” (PC World)

Jun 2015: Apple Music announced

Video

Source: OnlineMBAPrograms

Apr 2003: Netflix reaches one million subscribers for its DVD-by-mail service

Mar 2005: FTC blocks Blockbuster/Hollywood Video merger

Sep 2006: Amazon launches Prime Video

Jan 2007: Netflix streaming launches

Oct 2007: Hulu launches

May 2010: Hollywood Video’s parent company files for bankruptcy

Sep 2010: Blockbuster files for bankruptcy

The Only Winning Move Is Not to Play

Predicting the future of competition in the tech industry is such a fraught endeavor that even articles about how hard it is to make predictions include incorrect predictions. The authors just cannot help themselves. A March 2012 BBC article “The Future of Technology… Who Knows?” derided the naysayers who predicted doom for Apple’s retail store strategy. Its kicker?

And that is why when you read that the Blackberry is doomed, or that Microsoft will never make an impression on mobile phones, or that Apple will soon dominate the connected TV market, you need to take it all with a pinch of salt.

But Blackberry was doomed and Microsoft never made an impression on mobile phones. (Half credit for Apple TV, which currently has a 15% market share.)

Nobel Prize-winning economist Paul Krugman wrote a piece for Red Herring magazine (seriously) in June 1998 with the title “Why most economists’ predictions are wrong.” Headline be damned, near the end of the article he made the following prediction:

The growth of the Internet will slow drastically, as the flaw in “Metcalfe’s law”—which states that the number of potential connections in a network is proportional to the square of the number of participants—becomes apparent: most people have nothing to say to each other! By 2005 or so, it will become clear that the Internet’s impact on the economy has been no greater than the fax machine’s.

Robert Metcalfe himself predicted in a 1995 column that the Internet would “go spectacularly supernova and in 1996 catastrophically collapse.” After pledging to “eat his words” if the prediction did not come true, “in front of an audience, he put that particular column into a blender, poured in some water, and proceeded to eat the resulting frappe with a spoon.”
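For readers curious about the arithmetic behind the “law” Krugman invoked: the number of potential connections among n participants is simply the number of distinct pairs, n(n−1)/2, which grows roughly as the square of n. A minimal sketch (function name ours, purely illustrative):

```python
def potential_connections(n: int) -> int:
    """Distinct pairwise links among n network participants: n*(n-1)/2."""
    return n * (n - 1) // 2

# Doubling the number of participants roughly quadruples the link count,
# which is the quadratic growth Metcalfe's law refers to.
print(potential_connections(10))    # 45
print(potential_connections(100))   # 4950
print(potential_connections(1000))  # 499500
```

Krugman’s point was not that the formula is wrong, but that most of those potential connections go unused; Metcalfe’s own failed collapse prediction shows the uncertainty cuts both ways.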

A Change Is Gonna Come

Benedict Evans, a venture capitalist at Andreessen Horowitz, has the best summary of why competition in tech is especially difficult to predict:

IBM, Microsoft and Nokia were not beaten by companies doing what they did, but better. They were beaten by companies that moved the playing field and made their core competitive assets irrelevant. The same will apply to Facebook (and Google, Amazon and Apple).

Elsewhere, Evans tried to reassure his audience that we will not be stuck with the current crop of tech giants forever:

With each cycle in tech, companies find ways to build a moat and make a monopoly. Then people look at the moat and think it’s invulnerable. They’re generally right. IBM still dominates mainframes and Microsoft still dominates PC operating systems and productivity software. But… It’s not that someone works out how to cross the moat. It’s that the castle becomes irrelevant. IBM didn’t lose mainframes and Microsoft didn’t lose PC operating systems. Instead, those stopped being ways to dominate tech. PCs made IBM just another big tech company. Mobile and the web made Microsoft just another big tech company. This will happen to Google or Amazon as well. Unless you think tech progress is over and there’ll be no more cycles … It is deeply counter-intuitive to say ‘something we cannot predict is certain to happen’. But this is nonetheless what’s happened to overturn pretty much every tech monopoly so far.

If this time is different — or if there are more false negatives than false positives in the monopoly prediction game — then the advocates for breaking up Big Tech should try to make that argument instead of falling back on “big is bad” rhetoric. As for us, we’ll bet that we have not yet reached the end of history — tech progress is far from over.