In 2014, Benedict Evans, a venture capitalist at Andreessen Horowitz, wrote “Why Amazon Has No Profits (And Why It Works),” a blog post in which he tried to explain Amazon’s business model. He began with a chart of Amazon’s revenue and net income that has now become (in)famous:

Source: Benedict Evans

A question inevitably followed in antitrust circles: How can a company that makes so little profit on so much revenue be worth so much money? It must be predatory pricing!

Predatory pricing is a rather rare anticompetitive practice because the “predator” runs the risk of bankrupting itself in the process of trying to drive rivals out of business with below-cost pricing. Furthermore, even if a predator successfully clears the field of competition, keeping out new entrants is extremely unlikely in developed economies with deep capital markets.

Nonetheless, in those rare cases where plaintiffs can demonstrate that a firm actually has a viable scheme to drive competitors from the market with prices that are “too low” and has the ability to recoup its losses once it has cleared the market of those competitors, plaintiffs (including the DOJ) can prevail in court.

In other words, whoa if true.

Khan’s Predatory Pricing Accusation

In 2017, Lina Khan, then a law student at Yale, published “Amazon’s Antitrust Paradox” as a note in the Yale Law Journal and used Evans’ chart as supporting evidence that Amazon was guilty of predatory pricing. In the abstract, she says, “Although Amazon has clocked staggering growth, it generates meager profits, choosing to price below-cost and expand widely instead.”

But if Amazon is selling below-cost, where does the money come from to finance those losses?

In her article, Khan hinted at two potential explanations: (1) Amazon is using profits from the cloud computing division (AWS) to cross-subsidize losses in the retail division or (2) Amazon is using money from investors to subsidize short-term losses:

Recently, Amazon has started reporting consistent profits, largely due to the success of Amazon Web Services, its cloud computing business. Its North America retail business runs on much thinner margins, and its international retail business still runs at a loss. But for the vast majority of its twenty years in business, losses—not profits—were the norm. Through 2013, Amazon had generated a positive net income in just over half of its financial reporting quarters. Even in quarters in which it did enter the black, its margins were razor-thin, despite astounding growth.

Just as striking as Amazon’s lack of interest in generating profit has been investors’ willingness to back the company. With the exception of a few quarters in 2014, Amazon’s shareholders have poured money in despite the company’s penchant for losses.

Revising predatory pricing doctrine to reflect the economics of platform markets, where firms can sink money for years given unlimited investor backing, would require abandoning the recoupment requirement in cases of below-cost pricing by dominant platforms.

Below-Cost Pricing Not Subsidized by Investors

But neither explanation withstands scrutiny. First, the money is not from investors. Amazon has not raised equity financing since 2003. Nor is it debt financing: The company’s net debt position has been near-zero or negative for its entire history (excluding the Whole Foods acquisition):

Source: Benedict Evans

Amazon does not require new outside financing because it has had positive operating cash flow since 2002:

Notably for a piece of analysis attempting to explain Amazon’s business practices, the text of Khan’s 93-page law review article does not include the word “cash” even once.

Below-Cost Pricing Not Cross-Subsidized by AWS

Source: The Information

As Priya Anand observed in a recent piece for The Information, since Amazon started breaking out AWS in its financials, operating income for the North America retail business has been significantly positive:

But [Khan] underplays its retail profits in the U.S., where the antitrust debate is focused. As the above chart shows, its North America operation has been profitable for years, and its operating income has been on the rise in recent quarters. While its North America retail operation has thinner margins than AWS, it still generated $2.84 billion in operating income last year, which isn’t exactly a rounding error compared to its $4.33 billion in AWS operating income.

Below-Cost Pricing in Retail Also Known as “Loss Leader” Pricing

Okay, so maybe Amazon isn’t using below-cost pricing in aggregate in its retail division. But it still could be using profits from some retail products to cross-subsidize below-cost pricing for other retail products (e.g., diapers), with the intention of driving competitors out of business to capture monopoly profits. This is essentially what Khan claims happened in the Diapers.com (Quidsi) case. But in the retail industry, diapers are explicitly cited as a loss leader that helps retailers develop a customer relationship with mothers in the hopes of selling them a higher volume of products over time. This is exactly what the founders of Diapers.com told Inc. magazine in a 2012 interview (emphasis added):

We saw brick-and-mortar stores, the Wal-Marts and Targets of the world, using these products to build relationships with mom and the end consumer, bringing them into the store and selling them everything else. So we thought that was an interesting model and maybe we could replicate that online. And so we started with selling the loss leader product to basically build a relationship with mom. And once they had the passion for the brand and they were shopping with us on a weekly or a monthly basis that they’d start to fall in love with that brand. We were losing money on every box of diapers that we sold. We weren’t able to buy direct from the manufacturers.

An anticompetitive scheme could be built into such bundling, but in many, if not the overwhelming majority of, these cases, consumers are the beneficiaries of the lower prices and expanded output these arrangements produce. It’s hard to say definitively whether any given firm that discounts its products is actually pricing below average variable cost (“AVC”) without far more granular accounting ledgers than are typically maintained. This is part of the reason why these cases can be so hard to prove.

A successful predatory pricing strategy also requires blocking market entry when the predator eventually raises prices. But the Diapers.com case is an explicit example of repeated entry that would defeat recoupment. In an article for the American Enterprise Institute, Jeffrey Eisenach shares the rest of the story following Amazon’s acquisition of Diapers.com:

Amazon’s conduct did not result in a diaper-retailing monopoly. Far from it. According to Khan, Amazon had about 43 percent of online sales in 2016 — compared with Walmart at 23 percent and Target with 18 percent — and since many people still buy diapers at the grocery store, real shares are far lower.

In the end, Quidsi proved to be a bad investment for Amazon: After spending $545 million to buy the firm and operating it as a stand-alone business for more than six years, it announced in April 2017 it was shutting down all of Quidsi’s operations, Diapers.com included. In the meantime, Quidsi’s founders poured the proceeds of the Amazon sale into a new online retailer — Jet.com — which was purchased by Walmart in 2016 for $3.3 billion. Jet.com cofounder Marc Lore now runs Walmart’s e-commerce operations and has said publicly that his goal is to surpass Amazon as the top online retailer.

Sussman’s Predatory Pricing Accusation

Earlier this year, Shaoul Sussman, a law student at Fordham University, published “Prime Predator: Amazon and the Rationale of Below Average Variable Cost Pricing Strategies Among Negative-Cash Flow Firms” in the Journal of Antitrust Enforcement. The article, which was written up by David Dayen for In These Times, presents a novel two-part argument for how Amazon might be profitably engaging in predatory pricing without raising prices:

1. Amazon’s “True” Cash Flow Is Negative

Sussman argues that the company has been inflating its free cash flow numbers by excluding “capital leases.” According to Sussman, “If all of those expenses as detailed in its statements are accounted for, Amazon experienced a negative cash outflow of $1.461 billion in 2017.” Even though it’s not dispositive of predatory pricing on its own, Sussman believes that a negative free cash flow implies the company has been selling below-cost to gain market share.
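To make the accounting mechanics concrete, here is a minimal sketch (with purely hypothetical figures, not Amazon’s actual statements) of how including capital-lease spending in a free cash flow calculation can flip a positive number negative:

```python
# Illustrative sketch of the free cash flow dispute. All figures are
# hypothetical and chosen only to show the direction of the adjustment.

def free_cash_flow(operating_cf, capex, capital_lease_additions=0.0):
    """FCF = operating cash flow - purchases of property/equipment
    - (optionally) assets acquired under capital leases."""
    return operating_cf - capex - capital_lease_additions

# Hypothetical numbers, in $ billions
ocf, purchases, leases = 18.0, 12.0, 9.0

reported = free_cash_flow(ocf, purchases)          # capital leases excluded
adjusted = free_cash_flow(ocf, purchases, leases)  # Sussman-style adjustment

print(reported)  # 6.0  -> looks comfortably positive
print(adjusted)  # -3.0 -> negative once lease-financed capex is counted
```

The sign flip, not the particular magnitudes, is the point of Sussman’s adjustment.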

2. Amazon Recoups Losses By Lowering AVC, Not By Raising Prices

Instead of raising prices to recoup losses from pricing below-cost, Sussman argues that Amazon flies under the antitrust radar by keeping consumer prices low and progressively decreasing AVC, ostensibly through using its monopsony power to offload costs on suppliers and partners (although this point is not fully explored in his piece).

But Sussman’s argument contains errors in both its legal reasoning and its underlying empirical assumptions.

Below-cost pricing?

While there are many different ways to calculate the “cost” of a product or service, generally speaking, “below-cost pricing” means the price is less than marginal cost or AVC. Typically, courts tend to rely on AVC when dealing with predatory pricing cases. And as Herbert Hovenkamp has noted, proving that a price falls below the AVC is exceedingly difficult, particularly when dealing with firms in dynamic markets that sell a number of differentiated but complementary goods or services. Amazon, the focus of Sussman’s article, is a useful example here.

When products are complements, or can otherwise be bundled, firms may also be able to offer discounts that are unprofitable when selling single items. In business this is known as the “razor and blades model” (i.e., sell the razor handle below-cost one time and recoup losses on future sales of blades — although it’s not clear if this ever actually happens). Printer manufacturers are also an oft-cited example here, where printers are often sold below AVC in the expectation that the profits will be realized on the ongoing sale of ink. Amazon’s Kindle functions similarly: Amazon sells the Kindle around its AVC, ostensibly on the belief that it will realize a profit on selling e-books in the Kindle store.
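The arithmetic of the razor-and-blades model is simple enough to sketch. The figures below are hypothetical, chosen only to illustrate how a below-AVC device sale can be profitable over the customer relationship:

```python
# Hypothetical razor-and-blades arithmetic: a device sold below its
# average variable cost can still be profitable once sales of the
# complementary good (e-books, blades, ink) are counted.

def lifetime_profit(device_price, device_avc, n_complements, complement_margin):
    device_margin = device_price - device_avc  # negative when sold below AVC
    return device_margin + n_complements * complement_margin

# Device sold $20 below its AVC; buyer purchases 30 e-books at a $2 margin each.
profit = lifetime_profit(device_price=80, device_avc=100,
                         n_complements=30, complement_margin=2.0)
print(profit)  # 40.0: the below-cost device sale is recouped on e-books
```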

Yet, even ignoring this common and broadly inoffensive practice, Sussman’s argument is odd. In essence, he claims that Amazon is hiding some of its costs in the form of capital leases in an effort to conceal its below-AVC pricing, all while working simultaneously to lower its real AVC below the prices it charges consumers. At the end of this process, once its real AVC is actually sufficiently below consumer prices, it will (so the argument goes) be in the position of a monopolist reaping monopoly profits.

The problem with this argument should be immediately apparent. For the moment, let’s set aside the classic recoupment problem, in which new entrants are drawn into the market to capture some of those monopoly profits once the new, lower AVC proves possible. The real problem with Sussman’s logic is that it effectively labels Amazon a “predator” for sharply lowering its AVC (that is, for making production massively more efficient) and then not dropping prices. But by pricing below its AVC in the first place, Amazon in essence extended consumers a loan: they enjoyed what Sussman believes are radically low prices while Amazon worked to make those prices sustainable by creating production efficiencies. It seems rather strange to punish a firm for loaning consumers a large measure of wealth. It’s doubly odd once you factor the recoupment problem back in: as soon as other firms figure out that a lower AVC is possible, they will enter the market and bid away any monopoly profits from Amazon.

Sussman’s Technical Analysis Is Flawed

While there are issues with Sussman’s general theory of harm, there are also some specific problems with his technical analysis of Amazon’s financial statements.

Capital Leases Are a Fixed Cost

First, capital leases should not be included in cost calculations for a predatory pricing case because they are fixed — not variable — costs. Again, “below-cost” claims in predatory pricing cases generally use AVC (and sometimes marginal cost) as the relevant cost measures.
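A toy example (hypothetical figures) makes the point: because the screen compares price to average variable cost, fixed costs such as capital-lease payments never enter the calculation:

```python
# Sketch of the cost distinction, with hypothetical figures: the usual
# predatory pricing screen compares price to average VARIABLE cost, so
# fixed costs (e.g., capital-lease payments) drop out of the test entirely.

def avg_variable_cost(total_variable_cost, quantity):
    return total_variable_cost / quantity

def fails_avc_screen(price, total_variable_cost, quantity):
    """True when price is below AVC."""
    return price < avg_variable_cost(total_variable_cost, quantity)

quantity = 1_000_000
total_variable_cost = 9.0 * quantity   # $9 of variable cost per unit
fixed_lease_cost = 5_000_000           # fixed: never enters the AVC screen

print(fails_avc_screen(10.0, total_variable_cost, quantity))  # False
print(fails_avc_screen(8.0, total_variable_cost, quantity))   # True
```

However large `fixed_lease_cost` grows, the screen’s answer does not change; only price and variable cost matter.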

Capital Leases Are Mostly for Server Farms

Second, the usual story is that Amazon uses its wildly profitable Amazon Web Services (AWS) division to subsidize predatory pricing in its retail division. But Amazon’s “capital leases” — Sussman’s hidden costs in the free cash flow calculations — are mostly for AWS capital expenditures (i.e., server farms).

According to the most recent annual report: “Property and equipment acquired under capital leases was $5.7 billion, $9.6 billion, and $10.6 billion in 2016, 2017, and 2018, with the increase reflecting investments in support of continued business growth primarily due to investments in technology infrastructure for AWS, which investments we expect to continue over time.”

In other words, any adjustments to the free cash flow numbers for capital leases would make Amazon Web Services appear less profitable, and would not have a large effect on the accounting for Amazon’s retail operation (the only division thus far accused of predatory pricing).

Look at Operating Cash Flow Instead of Free Cash Flow

Again, while cash flow measures cannot prove or disprove the existence of predatory pricing, a positive cash flow measure should make us more skeptical of such accusations. In the retail sector, operating cash flow is the appropriate metric to consider. As shown above, Amazon has had positive (and increasing) operating cash flow since 2002.

Your Theory of Harm Is Also Known as “Investment”

Third, in general, Sussman’s novel predatory pricing theory is indistinguishable from pro-competitive behavior in an industry with high fixed costs. From the abstract (emphasis added):

[N]egative cash flow firm[s] … can achieve greater market share through predatory pricing strategies that involve long-term below average variable cost prices … By charging prices in the present reflecting future lower costs based on prospective technological and scale efficiencies, these firms are able to rationalize their predatory pricing practices to investors and shareholders.

“Charging prices in the present reflecting future lower costs based on prospective technological and scale efficiencies” is literally what it means to invest in capex and R&D.

Sussman’s paper presents a clever attempt to work around the doctrinal limitations on predatory pricing. But, if courts seriously adopt an approach like this, they will be putting in place a legal apparatus that quite explicitly focuses on discouraging investment. This is one of the last things we should want antitrust law to be doing.

One of the main concerns I had during the IANA transition was the extent to which the newly independent organization would be able to behave impartially, implementing its own policies and bylaws in an objective and non-discriminatory manner, and not be unduly influenced by specific  “stakeholders”. Chief among my concerns at the time was the extent to which an independent ICANN would be able to resist the influence of governments: when a powerful government leaned on ICANN’s board, would it be able to adhere to its own policies and follow the process the larger multistakeholder community put in place?

It seems my concern was not unfounded. Amazon, Inc. has been in a long-running struggle with the countries of the Amazonian Basin in South America over the use of the generic top-level domain (gTLD) .amazon. In 2014, the ICANN board (which was still nominally under the control of the US’s NTIA) uncritically accepted the nonbinding advice of the Government Advisory Committee (“GAC”) and denied Amazon, Inc.’s application for .amazon. In 2017, an Independent Review Process panel reversed the board’s decision, because

[the board] failed in its duty to explain and give adequate reasons for its decision, beyond merely citing to its reliance on the GAC advice and the presumption, albeit a strong presumption, that it was based on valid and legitimate public policy concerns.  

Accordingly the board was directed to reconsider the .amazon petition and

make an objective and independent judgment regarding whether there are, in fact, well-founded, merits-based public policy reasons for denying Amazon’s applications.

In the two years since that decision, a number of proposals were discussed between Amazon, Inc. and the Amazonian countries as they sought a mutually agreeable resolution to the dispute; none was successful. In March of this year, the board acknowledged the failed negotiations and announced that the parties had four more weeks to try again; if no agreement were reached in that time, Amazon, Inc. would be permitted to submit a proposal addressing the Amazonian countries’ cultural protection concerns.

Predictably, that time elapsed and Amazon, Inc. submitted its proposal, which includes a public interest commitment that would allow the Amazonian countries access to certain second level domains under .amazon for cultural and noncommercial use. For example, Brazil could use a domain such as www.br.amazon to showcase the culturally relevant features of the portion of the Amazonian river that flows through its borders.

Prima facie, this seems like a reasonable way to ensure that the cultural interests of those living in the Amazonian region are adequately protected. Moreover, in its first stated objection to Amazon, Inc. having control of the gTLD, the GAC indicated that this was its concern:

[g]ranting exclusive rights to this specific gTLD to a private company would prevent the use of  this domain for purposes of public interest related to the protection, promotion and awareness raising on issues related to the Amazon biome. It would also hinder the possibility of use of this domain to congregate web pages related to the population inhabiting that geographical region.

Yet Amazon, Inc.’s proposal to protect just these interests was rejected by the Amazonian countries’ governments. The counteroffer from those governments was that they be permitted to co-own and administer the gTLD, that their governance interest be constituted in a steering committee on which Amazon, Inc. be given only a 1/9th vote, that they be permitted a much broader use of the gTLD generally and, judging by the conspicuous lack of language limiting use to noncommercial purposes, that they have the ability to use the gTLD for commercial purposes.

This last point certainly must be a nonstarter. Amazon, Inc.’s use of .amazon is naturally going to be commercial in nature. If eight other “co-owners” were permitted a backdoor to using the ‘.amazon’ name in commerce, trademark dilution seems like a predictable, if not inevitable, result. Moreover, the entire point of allowing brand gTLDs is to help responsible brand managers ensure that consumers are receiving the goods and services they expect on the Internet. Commercial use by the Amazonian countries could easily lead to a situation where merchants selling goods of unknown quality are able to mislead consumers by free riding on Amazon, Inc.’s name recognition.

This is a big moment for Internet governance

Theoretically, the ICANN board could decide this matter as early as this week — but apparently it has opted to treat this meeting as merely an opportunity for more discussion. That the board would consider not following through on its statement in March that it would finally put this issue to rest is not an auspicious sign that the board intends to take its independence seriously.

An independent ICANN must be able to stand up to powerful special interests when it comes to following its own rules and bylaws. This is the very concern that most troubled me before the NTIA cut the organization loose. Introducing more delay suggests that the board lacks the courage of its convictions. The Amazonian countries may end up irritated with the ICANN board, but ICANN is either an independent organization or it’s not.

Amazon, Inc. followed the prescribed procedures from the beginning; there is simply no good reason to draw out this process any further. The real fear here, I suspect, is that the board knows that this is a straightforward trademark case and is holding out hope that the Amazonian countries will make the necessary concessions that will satisfy Amazon, Inc. After seven years of this process, somehow I suspect that this is not likely and the board simply needs to make a decision on the proposals as submitted.

The truth is that these countries never even applied for use of the gTLD in the first place; they only became interested in the use of the domain once Amazon, Inc. expressed interest. All along, these countries maintained that they merely wanted to protect the cultural heritage of the region — surely a fine goal. Yet, when pressed to the edge of the timeline on the process, they produce a proposal that would theoretically permit them to operate commercial domains.

This is a test for ICANN’s board. If it doesn’t want to risk offending powerful parties, it shouldn’t open up the DNS to gTLDs because, inevitably, there will exist aggrieved parties that cannot be satisfied. Amazon, Inc. has submitted a solid proposal that allows it to protect both its own valid trademark interests in its brand as well as the cultural interests of the Amazonian countries. The board should vote on the proposal this week and stop delaying this process any further.

The once-mighty Blockbuster video chain is now down to a single store, in Bend, Oregon. It appears to be the only video rental store in Bend, aside from those offering “adult” features. Does that make Blockbuster a monopoly?

It seems almost silly to ask if the last firm in a dying industry is a monopolist. But, it’s just as silly to ask if the first firm in an emerging industry is a monopolist. They’re silly questions because they focus on the monopoly itself rather than on the alternative: What if the firm, and therefore the industry, did not exist at all?

A recent post on CEPR’s Vox blog points out something very obvious, but often forgotten: “The deadweight loss from a monopolist’s not producing at all can be much greater than from charging too high a price.”

The figure below is from the post, by Michael Kremer, Christopher Snyder, and Albert Chen. With monopoly pricing (and no price discrimination), consumer surplus is given by CS, profit is given by Π, and deadweight loss is given by H.

The authors point out if fixed costs (or entry costs) are so high that the firm does not enter the market, the deadweight loss is equal to CS + H.
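The figure’s logic can be made concrete with a standard linear-demand example (hypothetical parameters; this is textbook algebra, not the authors’ own model):

```python
# Worked example with linear demand P = a - b*Q and constant marginal cost c.
# Compares the deadweight loss of monopoly PRICING (H) with the loss if high
# fixed costs keep the firm from entering at all (CS + H). Numbers hypothetical.

def monopoly_surplus(a, b, c):
    q_m = (a - c) / (2 * b)             # monopoly quantity
    p_m = (a + c) / 2                   # monopoly price
    cs = 0.5 * q_m * (a - p_m)          # consumer surplus under monopoly
    profit = (p_m - c) * q_m            # monopoly profit (gross of fixed costs)
    q_c = (a - c) / b                   # competitive (P = MC) quantity
    h = 0.5 * (p_m - c) * (q_c - q_m)   # deadweight-loss triangle
    return cs, profit, h

cs, profit, h = monopoly_surplus(a=10, b=1, c=2)
print(cs, profit, h)  # 8.0 16.0 8.0
print(h)              # loss from monopoly pricing: 8.0
print(cs + h)         # loss if the monopolist never enters: 16.0
```

When fixed costs roughly exhaust monopoly profit, the no-entry loss (CS + H) is twice the pricing loss (H) in this example, which is exactly the authors’ point.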

Too often, competition authorities fall for the Nirvana Fallacy, a tendency to compare messy, real-world economic circumstances today to idealized potential alternatives and to justify policies on the basis of the discrepancy between the real world and some alternative perfect (or near-perfect) world.

In 2005, Blockbuster dropped its bid to acquire competing Hollywood Entertainment Corporation, the then-second-largest video rental chain. Blockbuster said it expected the Federal Trade Commission would reject the deal on antitrust grounds. The merged companies would have made up more than 50 percent of the home video rental market.

Five years later Blockbuster, Hollywood, and third-place Movie Gallery had all filed for bankruptcy.

Blockbuster’s then-CEO, John Antioco, has been ridiculed for passing up an opportunity to buy Netflix for $50 million in 2000. But, Blockbuster knew its retail world was changing and had thought a consolidation might help it survive that change.

But, just as Antioco can be chided for undervaluing Netflix, so should the FTC. The regulators were so focused on Blockbuster-Hollywood market share that they undervalued the competitive pressure Netflix and other services were bringing. With hindsight, it seems obvious that Blockbuster’s post-merger market share would not have conveyed any significant power over price. What’s not known is whether the merger would have put off the bankruptcy of the three largest video rental retailers.

Also, what’s not known is the extent to which consumers are better or worse off with the exit of Blockbuster, Hollywood, and Movie Gallery.

Nevertheless, the video rental business highlights a key point in an earlier TOTM post: A great deal of competition comes from the flanks, rather than head-on. Head-on competition from rental kiosks, such as Redbox, nibbled at the sales and margins of Blockbuster, Hollywood, and Movie Gallery. But, the real killer of the bricks-and-mortar stores came from a wide range of streaming services.

The lesson for regulators is that competition is nearly always and everywhere present, even if it’s standing on the sidelines.

Zoom, one of Silicon Valley’s lesser-known unicorns, has just gone public. At the time of writing, its shares are trading at about $65.70, placing the company’s value at $16.84 billion. There are good reasons for this success. According to its Form S-1, Zoom’s revenue rose from about $60 million in 2017 to a projected $330 million in 2019, and the company has already surpassed break-even. This growth was notably fueled by a thriving community of users who collectively spend approximately 5 billion minutes per month in Zoom meetings.

To get to where it is today, Zoom had to compete against long-established firms with vast client bases and far deeper pockets. These include the likes of Microsoft, Cisco, and Google. Further complicating matters, the video communications market exhibits some prima facie traits that are typically associated with the existence of network effects. For instance, the value of Skype to one user depends – at least to some extent – on the number of other people that might be willing to use the network. In these settings, it is often said that positive feedback loops may cause the market to tip in favor of a single firm that is then left with an unassailable market position. Although Zoom still faces significant competitive challenges, it has nonetheless established a strong position in a market previously dominated by powerful incumbents who could theoretically count on network effects to stymie its growth.

Further complicating matters, Zoom chose to compete head-on with these incumbents. It did not create a new market or a highly differentiated product. Zoom’s Form S-1 is quite revealing. The company cites the quality of its product as its most important competitive strength. Similarly, when listing the main benefits of its platform, Zoom emphasizes that its software is “easy to use”, “easy to deploy and manage”, “reliable”, etc. In its own words, Zoom has thus gained a foothold by offering an existing service that works better than that of its competitors.

And yet, this is precisely the type of story that a literal reading of the network effects literature would suggest is impossible, or at least highly unlikely. For instance, the foundational papers on network effects often cite the example of the DVORAK keyboard (David, 1985; and Farrell & Saloner, 1985). These early scholars argued that, despite it being the superior standard, the DVORAK layout failed to gain traction because of the network effects protecting the QWERTY standard. In other words, consumers failed to adopt the superior DVORAK layout because they were unable to coordinate on their preferred option. It must be noted, however, that the conventional telling of this story was forcefully criticized by Liebowitz & Margolis in their classic 1995 article, The Fable of the Keys.

Despite Liebowitz & Margolis’ critique, the dominance of the underlying network effects story persists in many respects. And in that respect, the emergence of Zoom is something of a cautionary tale. As influential as it may be, the network effects literature has tended to overlook a number of factors that may mitigate, or even eliminate, the likelihood of problematic outcomes. Zoom is yet another illustration that policymakers should be careful when they make normative inferences from positive economics.

A Coasian perspective

It is now widely accepted that multi-homing and the absence of switching costs can significantly curtail the potentially undesirable outcomes that are sometimes associated with network effects. But other possibilities are often overlooked. For instance, almost none of the foundational network effects papers pay any notice to the application of the Coase theorem (though it has been well-recognized in the two-sided markets literature).

Take a purported market failure that is commonly associated with network effects: an installed base of users prevents the market from switching towards a new standard, even if it is superior (this is broadly referred to as “excess inertia,” while the opposite scenario is referred to as “excess momentum”). DVORAK’s failure is often cited as an example.

Astute readers will quickly recognize that this externality problem is not fundamentally different from those discussed in Ronald Coase’s masterpiece, “The Problem of Social Cost,” or Steven Cheung’s “The Fable of the Bees” (to which Liebowitz & Margolis paid homage in their article’s title). In the case at hand, there are at least two sets of externalities at play. First, early adopters of the new technology impose a negative externality on the old network’s installed base (by reducing its network effects), and a positive externality on other early adopters (by growing the new network). Conversely, installed base users impose a negative externality on early adopters and a positive externality on other remaining users.

Describing these situations (with a haughty confidence reminiscent of Paul Samuelson and Arthur Cecil Pigou), Joseph Farrell and Garth Saloner conclude that:

In general, he or she [i.e. the user exerting these externalities] does not appropriately take this into account.

Similarly, Michael Katz and Carl Shapiro assert that:

In terms of the Coase theorem, it is very difficult to design a contract where, say, the (potential) future users of HDTV agree to subsidize today’s buyers of television sets to stop buying NTSC sets and start buying HDTV sets, thereby stimulating the supply of HDTV programming.

And yet it is far from clear that consumers and firms can never come up with solutions that mitigate these problems. As Daniel Spulber has suggested, referral programs offer a case in point. These programs usually allow early adopters to receive rewards in exchange for bringing new users to a network. One salient feature of these programs is that they do not simply charge a lower price to early adopters; instead, in order to obtain a referral fee, there must be some agreement between the early adopter and the user who is referred to the platform. This leaves ample room for the reallocation of rewards. Users might, for instance, choose to split the referral fee. Alternatively, the early adopter might invest time to familiarize the switching user with the new platform, hoping to earn money when the user jumps ship. Both of these arrangements may reduce switching costs and mitigate externalities.
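A stylized bit of arithmetic (all numbers hypothetical) shows how such a transfer can internalize the externality: the referred user switches only when their private gain plus their share of the referral fee covers the switching cost:

```python
# Hypothetical sketch of Spulber's referral-program point: splitting the
# referral fee lets the early adopter compensate the referred user for
# switching costs, internalizing the adoption externality.

def switch_is_worthwhile(referral_fee, share_to_new_user,
                         switching_cost, value_gain):
    """True when the referred user's gain plus the transferred share
    of the referral fee covers the switching cost."""
    transfer = referral_fee * share_to_new_user
    return value_gain + transfer >= switching_cost

# New platform is worth $5 more per user, but switching costs $12.
print(switch_is_worthwhile(20.0, 0.5, 12.0, 5.0))  # True: a $10 transfer closes the gap
print(switch_is_worthwhile(20.0, 0.0, 12.0, 5.0))  # False: no split, no switch
```

The same accounting applies if the early adopter’s “transfer” takes the form of time spent onboarding the new user rather than cash.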

Daniel Spulber also argues that users may coordinate spontaneously. For instance, social groups often decide upon the medium they will use to communicate. Families might choose to stay on the same mobile phone network. And larger groups (such as an incoming class of students) may agree upon a social network to share necessary information, etc. In these contexts, there is at least some room to pressure peers into adopting a new platform.

Finally, firms and other forms of governance may also play a significant role. For instance, employees are routinely required to use a series of networked goods. Common examples include office suites, email clients, workplace messaging platforms (such as Slack), or video communications applications (Zoom, Skype, Google Hangouts, etc.). In doing so, firms presumably act as islands of top-down decision-making and impose those products that maximize the collective preferences of employers and employees. Similarly, a single firm choosing to join a network (notably by adopting a standard) may generate enough momentum for a network to gain critical mass. Apple’s decisions to adopt USB-C connectors on its laptops and to ditch headphone jacks on its iPhones both spring to mind. Likewise, it has been suggested that distributed ledger technology and initial coin offerings may facilitate the creation of new networks. The intuition is that so-called “utility tokens” may incentivize early adopters to join a platform, despite initially weak network effects, because they expect these tokens to increase in value as the network expands.

A combination of these arrangements might explain how Zoom managed to grow so rapidly, despite the presence of powerful incumbents. In its own words:

Our rapid adoption is driven by a virtuous cycle of positive user experiences. Individuals typically begin using our platform when a colleague or associate invites them to a Zoom meeting. When attendees experience our platform and realize the benefits, they often become paying customers to unlock additional functionality.

All of this is not to say that network effects will always be internalized through private arrangements, but rather that it is equally wrong to assume that transaction costs systematically prevent efficient coordination among users.

Misguided regulatory responses

Over the past couple of months, several antitrust authorities around the globe have released reports concerning competition in digital markets (UK, EU, Australia), or held hearings on this topic (US). A recurring theme throughout their published reports is that network effects almost inevitably weaken competition in digital markets.

For instance, the report commissioned by the European Commission mentions that:

Because of very strong network externalities (especially in multi-sided platforms), incumbency advantage is important and strict scrutiny is appropriate. We believe that any practice aimed at protecting the investment of a dominant platform should be minimal and well targeted.

The Australian Competition & Consumer Commission concludes that:

There are considerable barriers to entry and expansion for search platforms and social media platforms that reinforce and entrench Google and Facebook’s market power. These include barriers arising from same-side and cross-side network effects, branding, consumer inertia and switching costs, economies of scale and sunk costs.

Finally, a panel of experts in the United Kingdom found that:

Today, network effects and returns to scale of data appear to be even more entrenched and the market seems to have stabilised quickly compared to the much larger degree of churn in the early days of the World Wide Web.

To address these issues, these reports suggest far-reaching policy changes. These include shifting the burden of proof in competition cases from authorities to defendants, establishing specialized units to oversee digital markets, and imposing special obligations upon digital platforms.

The story of Zoom’s emergence and the important insights that can be derived from the Coase theorem both suggest that these fears may be somewhat overblown.

Rivals do indeed find ways to overthrow entrenched incumbents with some regularity, even when these incumbents are shielded by network effects. Of course, critics may retort that this is not enough, that competition may sometimes arrive too late (excess inertia, i.e., “a socially excessive reluctance to switch to a superior new standard”) or too fast (excess momentum, i.e., “the inefficient adoption of a new technology”), and that the problem is not just one of network effects, but also one of economies of scale, information asymmetry, etc. But this comes dangerously close to the Nirvana fallacy. To begin, it assumes that regulators are able to reliably navigate markets toward these optimal outcomes — which is questionable, at best. Moreover, the regulatory cost of imposing perfect competition in every digital market (even if it were possible) may well outweigh the benefits that this achieves. Mandating far-reaching policy changes in order to address sporadic and heterogeneous problems is thus unlikely to be the best solution.

Instead, the optimal policy notably depends on whether, in a given case, users and firms can coordinate their decisions without intervention in order to avoid problematic outcomes. A case-by-case approach thus seems by far the best solution.

And competition authorities need look no further than their own decisional practice. The European Commission’s decision in the Facebook/WhatsApp merger offers a good example (this was before Margrethe Vestager’s appointment at DG Competition). In its decision, the Commission concluded that the fast-moving nature of the social network industry, widespread multi-homing, and the fact that neither Facebook nor WhatsApp controlled any essential infrastructure, prevented network effects from acting as a barrier to entry. Regardless of its ultimate position, this seems like a vastly superior approach to competition issues in digital markets. The Commission adopted similar reasoning in the Microsoft/Skype merger. Unfortunately, the Commission seems to have departed from this measured attitude in more recent decisions. In the Google Search case, for example, the Commission assumes that the mere existence of network effects necessarily increases barriers to entry:

The existence of positive feedback effects on both sides of the two-sided platform formed by general search services and online search advertising creates an additional barrier to entry.

A better way forward

Although the positive economics of network effects are generally correct and most definitely useful, some of the normative implications that have been derived from them are deeply flawed. Too often, policymakers and commentators conclude that these potential externalities inevitably lead to stagnant markets where competition is unable to flourish. But this does not have to be the case. The emergence of Zoom shows that superior products may prosper despite the presence of strong incumbents and network effects.

Basing antitrust policies on sweeping presumptions about digital competition – such as the idea that network effects are rampant or the suggestion that online platforms necessarily imply “extreme returns to scale” – is thus likely to do more harm than good. Instead, antitrust authorities should take a leaf out of Ronald Coase’s book, and avoid blackboard economics in favor of a more granular approach.

In a recent NY Times opinion piece, Tim Wu, like Elizabeth Holmes, lionizes Steve Jobs. Like Jobs with the iPod and iPhone, and Holmes with the Theranos Edison machine, Wu tells us we must simplify the public’s experience of complex policy into a simple box with an intuitive interface. In this spirit he argues that “what the public wants from government is help with complexity,” such that “[t]his generation of progressives … must accept that simplicity and popularity are not a dumbing-down of policy.”

This argument provides remarkable insight into the complexity problems of progressive thought. Three of these are taken up below: the mismatch of comparing the work of the government to the success of Jobs; the mismatch between Wu’s telling of and Jobs’s actual success; and the latent hypocrisy in Wu’s “simplicity for me, complexity for thee” argument.

Contra Wu’s argument, we need politicians that embrace and lay bare the complexity of policy issues. Too much of our political moment is dominated by demagogues on every side of policy debates offering simple solutions to simplified accounts of complex policy issues. We need public intellectuals, and hopefully politicians as well, to make the case for complexity. Our problems are complex and solutions to them hard (and sometimes unavailing). Without leaders willing to steer into complexity, we can never have a polity able to address complexity.

I. “Good enough for government work” isn’t good enough for Jobs

As an initial matter, there is a great deal of wisdom in Wu’s recognition that the public doesn’t want complexity. As I said at the annual Silicon Flatirons conference in February, consumers don’t want a VCR with lots of dials and knobs that let them control lots of specific features—they just want the damn thing to work. And as that example is meant to highlight, once it does work, most consumers are happy to leave well enough alone (as demonstrated by millions of clocks that would continue to blink 12:00 if VCRs weren’t so 1990s).

Where Wu goes wrong, though, is that he fails to recognize that despite this desire for simplicity, for two decades VCR manufacturers designed and sold VCRs with clocks that were never set—a persistent blinking to constantly remind consumers of their own inadequacies. Had the manufacturers had any insight into the consumer desire for simplicity, all those clocks would have been used for something—anything—other than a reminder that consumers didn’t know how to set them. (Though, to their credit, these devices were designed to operate as most consumers desired without imposing any need to set the clock upon them—a model of simplicity in basic operation that allows consumers to opt-in to a more complex experience.)

If the government were populated by visionaries like Jobs, Wu’s prescription would be wise. But Jobs was a once-in-a-generation thinker. No one in a generation of VCR designers had the insight to design a VCR without a clock (or at least a clock that didn’t blink in a constant reminder of the owner’s inability to set it). And similarly few among the ranks of policy designers are likely to have his abilities, either. On the other hand, the public loves the promise of easy solutions to complex problems. Charlatans and demagogues who would cast themselves in his image, like Holmes did with Theranos, can find government posts in abundance.

Of course, in his paean to offering the public less choice, Wu, himself a sometime designer of government policy, compares the art of policy design to the work of Jobs—not of Holmes. But where he promises a government run in the manner of Apple, he would more likely give us one more in the mold of Theranos.

There is a more pernicious side to Wu’s argument. He speaks of respect for the public, arguing that “Real respect for the public involves appreciating what the public actually wants and needs,” and that “They would prefer that the government solve problems for them.” Another aspect of respect for the public is recognizing their fundamental competence—that progressive policy experts are not the only ones who are able to understand and address complexity. Most people never set their VCRs’ clocks because they felt no need to, not because they were unable to figure out how to do so. Most people choose not to master the intricacies of public policy. But this is not because the progressive expert class is uniquely able to do so. It is, rather, that—as Wu notes—most people do not have the unlimited time or attention that mastery would require: time and attention that Wu’s social class affords him.

Wu’s assertion that the public “would prefer that the government solve problems for them” carries echoes of Louis Brandeis, who famously said of consumers that they were “servile, self-indulgent, indolent, ignorant.” Such a view naturally gives rise to Wu’s assumption that the public wants the government to solve problems for them. It assumes that they are unable to solve those problems on their own.

But what Brandeis and progressives cast in his mold attribute to servile indolence is more often a reflection that hoi polloi simply do not have the same concerns as Wu’s progressive expert class. If they had the time to care about the issues Wu would devote his government to, they could likely address them on their own. The fact that they don’t is less a reflection of the public’s ability than of its priorities.

II. Jobs had no monopoly on simplicity

There is another aspect to Wu’s appeal to simplicity in design that is, again, captured well in his invocation of Steve Jobs. Jobs was exceptionally successful with his minimalist, simple designs. He made a fortune for himself and more for Apple. His ideas made Apple one of the most successful companies, with one of the largest user bases, in the history of the world.

Yet many people hate Apple products. Some of these users prefer to have more complex, customizable devices—perhaps because they have particularized needs or perhaps simply because they enjoy having that additional control over how their devices operate and the feeling of ownership that that brings. Some users might dislike Apple products because the interface that is “intuitive” to millions of others is not at all intuitive to them. As trivial as it sounds, most PC users are accustomed to two-button mice—transitioning to Apple’s one-button mouse is exceptionally discomfiting for many of these users. (In fairness, the one-button mouse design used by Apple products is not attributable to Steve Jobs.) And other users still might prefer devices that are simple in other ways, and so are drawn to other products that better cater to their precise needs.

Apple has, perhaps, experienced periods of market dominance with specific products. But this has never been durable—Apple has always faced competition. And this has ensured that those parts of the public that were not well-served by Jobs’s design choices were not bound to use them—they always had alternatives.

Indeed, that is the redeeming aspect of the Theranos story: the market did what it was supposed to. While too many consumers may have been harmed by Holmes’ charlatan business practices, the reality is that once she was forced to bring the company’s product to market it was quickly outed as a failure.

This is how the market works. Companies that design good products, like Apple, are rewarded; other companies then step in to compete by offering yet better products or by addressing other segments of the market. Some of those companies succeed; most, like Theranos, fail.

This dynamic simply does not exist with government. Government is a policy monopolist. A simplified, streamlined policy that effectively serves half the population does not effectively serve the other half. There is no alternative government that will offer competing policy designs. And to the extent that a given policy serves part of the public better than others, it creates winners and losers.

Of course, the right response to the inadequacy of Wu’s call for more, less complex policy is not that we need more, more complex policy. Rather, it’s that we need less policy—at least policy being dictated and implemented by the government. This is one of the stalwart arguments we free market and classical liberal types offer in favor of market economies: they are able to offer a wider range of goods and services that better cater to a wider range of needs of a wider range of people than the government. The reason policy grows complex is because it is trying to address complex problems; and when it fails to address those problems on a first cut, the solution is more often than not to build “patch” fixes on top of the failed policies. The result is an ever-growing book of rules bound together with voluminous “kludges” that is forever out-of-step with the changing realities of a complex, dynamic world.

The solution to so much complexity is not to sweep it under the carpet in the interest of offering simpler, but only partial, solutions catered to the needs of an anointed subset of the public. The solution is to find better ways to address those complex problems—and often it’s simply the case that the market is better suited to such solutions.

III. A complexity: What does Wu think of consumer protection?

There is a final, and perhaps most troubling, aspect to Wu’s argument. He argues that respect for the public does not require “offering complete transparency and a multiplicity of choices.” Yet that is what he demands of business. As an academic and government official, Wu has been a loud and consistent consumer protection advocate, arguing that consumers are harmed when firms fail to provide transparency and choice—and that the government must use its coercive power to ensure that they do so.

Wu derives his insight that simpler-design-can-be-better-design from the success of Jobs—and recognizes more broadly that the consumer experience of products of the technological revolution (perhaps one could even call it the tech industry) is much better today because of this simplicity than it was in earlier times. Consumers, in other words, can be better off with firms that offer less transparency and choice. This, of course, is intuitive when one recognizes (as Wu has) that time and attention are among the scarcest of resources.

Steve Jobs and Elizabeth Holmes both understood that the avoidance of complexity and minimizing of choices are hallmarks of good design. Jobs built an empire around this; Holmes cost investors hundreds of millions of dollars in her failed pursuit. But while Holmes failed where Jobs succeeded, her failure was not tragic: Theranos was never the only medical testing laboratory in the market and, indeed, was never more than a bit player in that market. For every Apple that thrives, the marketplace erases a hundred Theranoses. But we do not have a market of governments. Wu’s call for policy to be more like Apple is a call for most government policy to fail like Theranos. Perhaps where the challenge is to do more complex policy simply, the simpler solution is to do less, but simpler, policy well.

Conclusion

We need less dumbing down of complex policy in the interest of simplicity; and we need leaders who are able to make citizens comfortable with and understanding of complexity. Wu is right that good policy need not be complex. But the lesson from that is not that complex policy should be made simple. Rather, the lesson is that policy that cannot be made simple may not be good policy after all.

Last month, the European Commission slapped another fine upon Google for infringing European competition rules (€1.49 billion this time). This brings Google’s contribution to the EU budget to a dizzying total of €8.25 billion (to put this into perspective, the total EU budget for 2019 is €165.8 billion). Given this massive number, and the geographic location of Google’s headquarters, it is perhaps not surprising that some high-profile commentators, including former President Obama and President Trump, have raised concerns about potential protectionism on the Commission’s part.

In a new ICLE Issue Brief, we question whether there is any merit to these claims of protectionism. We show that, since the entry into force of Regulation 1/2003 (the main piece of legislation that implements the competition provisions of the EU treaties), US firms have borne the lion’s share of monetary penalties imposed by the Commission for breaches of competition law.

For instance, US companies have been fined a total of €10.91 billion by the European Commission, compared to €1.17 billion for their European counterparts:

Although this discrepancy seems to point towards protectionism, we believe that the case is not so clear-cut. The large fines paid by US firms are notably driven by a small subset of decisions in the tech sector, where the plaintiffs were also American companies. Tech markets also exhibit various features which tend to inflate the amount of fines.

Despite the plausibility of these potential alternative explanations, there may still be some legitimacy to the allegations of protectionism. The European Commission is, by design, a political body. One may thus question the extent to which Europe’s paucity of tech sector giants is driving the Commission’s ideological preference for tech-sector intervention and the protection of the industry’s small competitors.

Click here to read the full article.

(The following is adapted from a recent ICLE Issue Brief on the flawed essential facilities arguments undergirding the EU competition investigations into Amazon’s marketplace that I wrote with Geoffrey Manne. The full brief is available here.)

Amazon has largely avoided the crosshairs of antitrust enforcers to date. The reasons seem obvious: in the US it handles a mere 5% of all retail sales (with lower shares worldwide), and it consistently provides access to a wide array of affordable goods. Yet, even with Amazon’s obvious lack of dominance in the general retail market, the EU and some of its member states are opening investigations.

Commissioner Margrethe Vestager’s probe into Amazon, which came to light in September, centers on whether Amazon is illegally using its dominant position vis-à-vis third-party merchants on its platforms in order to obtain data that it then uses either to promote its own direct sales, or else to develop competing products under its private label brands. More recently, Austria and Germany have launched separate investigations of Amazon rooted in many of the same concerns as those of the European Commission. The German investigation also focuses on whether the contractual relationships that third-party sellers enter into with Amazon are unfair because these sellers are “dependent” on the platform.

One of the fundamental, erroneous assumptions upon which these cases are built is the alleged “essentiality” of the underlying platform or input. In truth, these sorts of cases are more often based on stories of firms that chose to build their businesses in a way that relies on a specific platform. In other words, their own decisions — from which they substantially benefited, of course — made their investments highly “asset specific” and thus vulnerable to otherwise avoidable risks. When a platform on which these businesses rely makes a disruptive move, the third parties cry foul, even though the platform was not — nor should have been — under any obligation to preserve the status quo on behalf of third parties.

Essential or not, that is the question

All three investigations are effectively premised on a version of an “essential facilities” theory — the claim that Amazon is essential to these companies’ ability to do business.

There are good reasons that the US has tightly circumscribed the scope of permissible claims invoking the essential facilities doctrine. Such “duty to deal” claims are “at or near the outer boundary” of US antitrust law. And there are good reasons why the EU and its member states should be similarly skeptical.

Characterizing one firm as essential to the operation of other firms is tricky because “[c]ompelling [innovative] firms to share the source of their advantage… may lessen the incentive for the monopolist, the rival, or both to invest in those economically beneficial facilities.” Further, the classification requires “courts to act as central planners, identifying the proper price, quantity, and other terms of dealing—a role for which they are ill-suited.”

The key difficulty is that alleged “essentiality” actually falls on a spectrum. On one end is something like a true monopoly utility that is actually essential to all firms that use its service as a necessary input; on the other is a firm that offers highly convenient services that make it much easier for firms to operate. This latter definition of “essentiality” describes firms like Google and Amazon, but it is not accurate to characterize such highly efficient and effective firms as truly “essential.” Instead, companies that choose to take advantage of the benefits such platforms offer, and to tailor their business models around them, suffer from an asset specificity problem.

Geoffrey Manne noted this problem in the context of the EU’s Google Shopping case:

A content provider that makes itself dependent upon another company for distribution (or vice versa, of course) takes a significant risk. Although it may benefit from greater access to users, it places itself at the mercy of the other — or at least faces great difficulty (and great cost) adapting to unanticipated, crucial changes in distribution over which it has no control.

Third-party sellers that rely upon Amazon without a contingency plan are taking a calculated risk that, as business owners, they would typically be expected to manage. The investigations by European authorities are based on the notion that antitrust law might require Amazon to remove that risk by prohibiting it from undertaking certain conduct that might raise costs for its third-party sellers.

Implications and extensions

In the full issue brief, we consider the tensions in EU law between seeking to promote innovation and protect the competitive process, on the one hand, and the propensity of EU enforcers to rely on essential facilities-style arguments on the other. One of the fundamental errors that leads EU enforcers in this direction is that they confuse the distribution channel of the Internet with an antitrust-relevant market definition.

A claim based on some flavor of Amazon-as-essential-facility should be untenable given today’s market realities because Amazon is, in fact, just one mode of distribution among many. Commerce on the Internet is still just commerce. The only thing preventing a merchant from operating a viable business using any of a number of different mechanisms is the transaction costs it would incur adjusting to a different mode of doing business. Casting Amazon’s marketplace as an essential facility insulates third-party firms from the consequences of their own decisions — from business model selection to marketing and distribution choices. Commerce is nothing new and offline distribution channels and retail outlets — which compete perfectly capably with online — are well developed. Granting retailers access to Amazon’s platform on artificially favorable terms is no more justifiable than granting them access to a supermarket end cap, or a particular unit at a shopping mall. There is, in other words, no business or economic justification for granting retailers in the time-tested and massive retail market an entitlement to use a particular mode of marketing and distribution just because they find it more convenient.

Source: Benedict Evans

[N]ew combinations are, as a rule, embodied, as it were, in new firms which generally do not arise out of the old ones but start producing beside them; … in general it is not the owner of stagecoaches who builds railways. – Joseph Schumpeter, January 1934

Elizabeth Warren wants to break up the tech giants — Facebook, Google, Amazon, and Apple — claiming they have too much power and represent a danger to our democracy. As part of our response to her proposal, we shared a couple of headlines from 2007 claiming that MySpace had an unassailable monopoly in the social media market.

Tommaso Valletti, the chief economist of the Directorate-General for Competition (DG COMP) of the European Commission, said, in what we assume was a reference to our posts, “they go on and on with that single example to claim that [Facebook] and [Google] are not a problem 15 years later … That’s not what I would call an empirical regularity.”

We appreciate the invitation to show that prematurely dubbing companies “unassailable monopolies” is indeed an empirical regularity.

It’s Tough to Make Predictions, Especially About the Future of Competition in Tech

No one is immune to this phenomenon. Antitrust regulators often take a static view of competition, failing to anticipate dynamic technological forces that will upend market structure and competition.

Scientists and academics make a different kind of error. They are driven by the need to satisfy their curiosity rather than their shareholders. Upon inventing a new technology or discovering a new scientific truth, academics often fail to see the commercial implications of their findings.

Maybe the titans of industry don’t make these kinds of mistakes because they have skin in the game? The profit and loss statement is certainly a merciless master. But it does not give CEOs the power of premonition. Corporate executives hailed as visionaries in one era often become blinded by their success, failing to see impending threats to their company’s core value propositions.

Furthermore, it’s often hard as outside observers to tell after the fact whether business leaders just didn’t see a tidal wave of disruption coming or, worse, they did see it coming and were unable to steer their bureaucratic, slow-moving ships to safety. Either way, the outcome is the same.

Here’s the pattern we observe over and over: extreme success in one context makes it difficult to predict how and when the next paradigm shift will occur in the market. Incumbents become less innovative as they get lulled into stagnation by high profit margins in established lines of business. (This is essentially the thesis of Clay Christensen’s The Innovator’s Dilemma).

Even if the anti-tech populists are powerless to make predictions, history does offer us some guidance about the future. We have seen time and again that apparently unassailable monopolists are quite effectively assailed by technological forces beyond their control.

PCs

Source: Horace Dediu

Jan 1977: Commodore PET released

Jun 1977: Apple II released

Aug 1977: TRS-80 released

Feb 1978: “I.B.M. Says F.T.C. Has Ended Its Typewriter Monopoly Study” (NYT)

Mobile

Source: Comscore

Mar 2000: Palm IPOs at a $53 billion valuation

Sep 2006: “Everyone’s always asking me when Apple will come out with a cellphone. My answer is, ‘Probably never.’” – David Pogue (NYT)

Apr 2007: “There’s no chance that the iPhone is going to get any significant market share.” Ballmer (USA TODAY)

Jun 2007: iPhone released

Nov 2007: “Nokia: One Billion Customers—Can Anyone Catch the Cell Phone King?” (Forbes)

Sep 2013: “Microsoft CEO Ballmer Bids Emotional Farewell to Wall Street” (Reuters)

If there’s one thing I regret, there was a period in the early 2000s when we were so focused on what we had to do around Windows that we weren’t able to redeploy talent to the new device form factor called the phone.

Search

Source: Distilled

Mar 1998: “How Yahoo! Won the Search Wars” (Fortune)

Once upon a time, Yahoo! was an Internet search site with mediocre technology. Now it has a market cap of $2.8 billion. Some people say it’s the next America Online.

Sep 1998: Google founded

Instant Messaging

Sep 2000: “AOL Quietly Linking AIM, ICQ” (ZDNet)

AOL’s dominance of instant messaging technology, the kind of real-time e-mail that also lets users know when others are online, has emerged as a major concern of regulators scrutinizing the company’s planned merger with Time Warner Inc. (twx). Competitors to Instant Messenger, such as Microsoft Corp. (msft) and Yahoo! Inc. (yhoo), have been pressing the Federal Communications Commission to force AOL to make its services compatible with competitors’.

Dec 2000: “AOL’s Instant Messaging Monopoly?” (Wired)

Dec 2015: Report for the European Parliament

There have been isolated examples, as in the case of obligations of the merged AOL / Time Warner to make AOL Instant Messenger interoperable with competing messaging services. These obligations on AOL are widely viewed as having been a dismal failure.

Oct 2017: AOL shuts down AIM

Jan 2019: “Zuckerberg Plans to Integrate WhatsApp, Instagram and Facebook Messenger” (NYT)

Retail

Source: Seeking Alpha

May 1997: Amazon IPO

Mar 1998: American Booksellers Association files antitrust suit against Borders, B&N

Feb 2005: Amazon Prime launches

Jul 2006: “Breaking the Chain: The Antitrust Case Against Wal-Mart” (Harper’s)

Feb 2011: “Borders Files for Bankruptcy” (NYT)

Social

Feb 2004: Facebook founded

Jan 2007: “MySpace Is a Natural Monopoly” (TechNewsWorld)

Seventy percent of Yahoo 360 users, for example, also use other social networking sites — MySpace in particular. Ditto for Facebook, Windows Live Spaces and Friendster … This presents an obvious, long-term business challenge to the competitors. If they cannot build up a large base of unique users, they will always be on MySpace’s periphery.

Feb 2007: “Will Myspace Ever Lose Its Monopoly?” (Guardian)

Jun 2011: “Myspace Sold for $35m in Spectacular Fall from $12bn Heyday” (Guardian)

Music

Source: RIAA

Dec 2003: “The subscription model of buying music is bankrupt. I think you could make available the Second Coming in a subscription model, and it might not be successful.” – Steve Jobs (Rolling Stone)

Apr 2006: Spotify founded

Jul 2009: “Apple’s iPhone and iPod Monopolies Must Go” (PC World)

Jun 2015: Apple Music announced

Video

Source: OnlineMBAPrograms

Apr 2003: Netflix reaches one million subscribers for its DVD-by-mail service

Mar 2005: FTC blocks Blockbuster/Hollywood Video merger

Sep 2006: Amazon launches Prime Video

Jan 2007: Netflix streaming launches

Oct 2007: Hulu launches

May 2010: Hollywood Video’s parent company files for bankruptcy

Sep 2010: Blockbuster files for bankruptcy

The Only Winning Move Is Not to Play

Predicting the future of competition in the tech industry is such a fraught endeavor that even articles about how hard it is to make predictions include incorrect predictions. The authors just cannot help themselves. A March 2012 BBC article “The Future of Technology… Who Knows?” derided the naysayers who predicted doom for Apple’s retail store strategy. Its kicker?

And that is why when you read that the Blackberry is doomed, or that Microsoft will never make an impression on mobile phones, or that Apple will soon dominate the connected TV market, you need to take it all with a pinch of salt.

But BlackBerry was doomed and Microsoft never made an impression on mobile phones. (Half credit for Apple TV, which currently has a 15% market share.)

Nobel Prize-winning economist Paul Krugman wrote a piece for Red Herring magazine (seriously) in June 1998 with the title “Why most economists’ predictions are wrong.” Headline-be-damned, near the end of the article he made the following prediction:

The growth of the Internet will slow drastically, as the flaw in “Metcalfe’s law”—which states that the number of potential connections in a network is proportional to the square of the number of participants—becomes apparent: most people have nothing to say to each other! By 2005 or so, it will become clear that the Internet’s impact on the economy has been no greater than the fax machine’s.

Robert Metcalfe himself predicted in a 1995 column that the Internet would “go spectacularly supernova and in 1996 catastrophically collapse.” After pledging to “eat his words” if the prediction did not come true, “in front of an audience, he put that particular column into a blender, poured in some water, and proceeded to eat the resulting frappe with a spoon.”

A Change Is Gonna Come

Benedict Evans, a venture capitalist at Andreessen Horowitz, has the best summary of why competition in tech is especially difficult to predict:

IBM, Microsoft and Nokia were not beaten by companies doing what they did, but better. They were beaten by companies that moved the playing field and made their core competitive assets irrelevant. The same will apply to Facebook (and Google, Amazon and Apple).

Elsewhere, Evans tried to reassure his audience that we will not be stuck with the current crop of tech giants forever:

With each cycle in tech, companies find ways to build a moat and make a monopoly. Then people look at the moat and think it’s invulnerable. They’re generally right. IBM still dominates mainframes and Microsoft still dominates PC operating systems and productivity software. But… It’s not that someone works out how to cross the moat. It’s that the castle becomes irrelevant. IBM didn’t lose mainframes and Microsoft didn’t lose PC operating systems. Instead, those stopped being ways to dominate tech. PCs made IBM just another big tech company. Mobile and the web made Microsoft just another big tech company. This will happen to Google or Amazon as well. Unless you think tech progress is over and there’ll be no more cycles … It is deeply counter-intuitive to say ‘something we cannot predict is certain to happen’. But this is nonetheless what’s happened to overturn pretty much every tech monopoly so far.

If this time is different — or if there are more false negatives than false positives in the monopoly prediction game — then the advocates for breaking up Big Tech should try to make that argument instead of falling back on “big is bad” rhetoric. As for us, we’ll bet that we have not yet reached the end of history — tech progress is far from over.


Will the merger between T-Mobile and Sprint make consumers better or worse off? A central question in the review of this merger—as it is in all merger reviews—is the likely effects that the transaction will have on consumers. In this post, we look at one study that opponents of the merger have been using to support their claim that the merger will harm consumers.

Along with my earlier posts on data problems and public policy (1, 2, 3, 4, 5), this provides an opportunity to explore why seemingly compelling studies can be used to muddy the discussion and fool observers into seeing something that isn’t there.

This merger—between the third and fourth largest mobile wireless providers in the United States—has been characterized as a “4-to-3” merger, on the grounds that it will reduce the number of large, ostensibly national carriers from four to three. This, in turn, has led to concerns that further concentration in the wireless telecommunications industry will harm consumers. Specifically, some opponents of the merger claim that “it’s going to be hard for someone to make a persuasive case that reducing four firms to three is actually going to improve competition for the benefit of American consumers.”

A number of previous mergers around the world can be, or have been, characterized as 4-to-3 mergers in the wireless telecommunications industry. Several econometric studies have attempted to evaluate the welfare effects of 4-to-3 mergers in other countries, as well as the effects of market concentration in the wireless industry more generally. These studies have been used by both proponents and opponents of the proposed merger of T-Mobile and Sprint to support their respective contentions that the merger will benefit or harm consumer welfare.

One particular study has risen to prominence among opponents of 4-to-3 mergers in telecom generally and of the T-Mobile/Sprint merger specifically. This is worrying, because the study has several fundamental flaws.

This study, by Finnish consultancy Rewheel, has been cited by, among others, Phillip Berenbroick of Public Knowledge, who, in Senate testimony, asserted that “Rewheel found that consumers in markets with three facilities-based providers paid twice as much per gigabyte as consumers in four firm markets.”

The Rewheel report upon which Mr. Berenbroick relied is, however, marred by a number of significant flaws that undermine its usefulness.

The Rewheel report

Rewheel’s report purports to analyze the state of 4G pricing across 41 countries that are either members of the EU or the OECD or both. The report’s conclusions are based mainly on two measures:

  1. Estimates of the maximum number of gigabytes available under each plan for a specific hypothetical monthly price, ranging from €5 to €80 a month. In other words, for each plan, Rewheel asks, “How many 4G gigabytes would X euros buy?” Rewheel then ranks countries by the median amount of gigabytes available at each hypothetical price for all the plans surveyed in each country.
  2. Estimates of what Rewheel describes as “fully allocated gigabyte prices.” This is the monthly retail price (including VAT) divided by the number of gigabytes included in each plan. Rewheel then ranks countries by the median price per gigabyte across all the plans surveyed in each country.
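A minimal sketch of how these two measures can be computed, using hypothetical plan data (not Rewheel's survey data); treating plans that cost more than the hypothetical budget as buying zero gigabytes is an assumption, since the report does not spell out that detail:

```python
from statistics import median

# Hypothetical plan data for illustration only (NOT Rewheel's survey):
# each plan is (monthly price in EUR, gigabytes included), with "unlimited"
# capped at 250 GB per Rewheel's own convention.
plans = {
    "Country A": [(10, 5), (20, 20), (40, 250)],
    "Country B": [(15, 3), (30, 10), (60, 40)],
}

def median_gb_at_budget(country_plans, budget):
    """Measure 1: median gigabytes a hypothetical monthly budget buys across
    all surveyed plans. Plans costing more than the budget are treated as
    buying 0 GB (an assumption for this sketch)."""
    return median(gb if price <= budget else 0 for price, gb in country_plans)

def median_price_per_gb(country_plans):
    """Measure 2: median 'fully allocated' price per gigabyte across plans."""
    return median(price / gb for price, gb in country_plans)

for country, cp in plans.items():
    print(country, median_gb_at_budget(cp, 30), round(median_price_per_gb(cp), 2))
```

Countries are then ranked by these per-country medians.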

Rewheel’s convoluted calculations

Rewheel’s use of the country median across all plans is problematic. In particular, it gives all plans equal weight, regardless of how many consumers actually use each plan. For example, a plan targeted at a consumer with a “high” level of usage is counted alongside a plan targeted at a consumer with a “low” level of usage. Even though a “high” user would not purchase a “low” plan (which would be relatively expensive for a “high” user), all plans count equally, thereby skewing the median estimates upward.
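A toy example (all numbers hypothetical) shows how weighting every plan equally can skew the median upward when most subscribers are actually on cheap-per-gigabyte, high-usage plans:

```python
from statistics import median

# Hypothetical per-gigabyte prices: low-usage plans cost more per GB.
price_per_gb = {"low-5GB": 4.00, "mid-20GB": 1.50, "high-100GB": 0.40}
# Hypothetical subscriber counts on each plan.
subscribers = {"low-5GB": 5, "mid-20GB": 15, "high-100GB": 80}

# The equal-weight approach: every plan counts once, however few people use it.
unweighted = median(price_per_gb.values())

# A usage-weighted alternative: repeat each plan's price once per subscriber.
weighted = median(p for plan, p in price_per_gb.items()
                  for _ in range(subscribers[plan]))

print(unweighted, weighted)  # unweighted median sits well above the weighted one
```

Here the unweighted country median is 1.50 EUR/GB even though most subscribers pay 0.40 EUR/GB.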

But even if that approach made sense as a way of measuring consumers’ willingness to pay, in execution Rewheel’s analysis contains the following key defects:

  • The Rewheel report is essentially limited to quantity effects alone (i.e., how many gigabytes are available under each plan for a given hypothetical price) or price effects alone (i.e., the price per included gigabyte for each plan). These measures can mislead the analysis by missing, among other things, innovation and quality effects.
  • Rewheel’s analysis is not based on an impartial assessment of relevant price data. Rather, it is based on hypothetical measures. Such comparisons say nothing about the plans actually chosen by consumers or the actual prices paid by consumers in those countries, rendering Rewheel’s comparisons virtually meaningless. As Affeldt & Nitsche (2014) note in their assessment of the effects of concentration in mobile telecom markets:

Such approaches are taken by Rewheel (2013) and also the Austrian regulator rtr (when tracking prices over time, see rtr (2014)). Such studies face the following problems: They may pick tariffs that are relatively meaningless in the country. They will have to assume one or more consumption baskets (voice minutes, data volume etc.) in order to compare tariffs. This may drive results. Apart from these difficulties such comparisons require very careful tracking of tariffs and their changes. Even if one assumes studying a sample of tariffs is potentially meaningful, a comparison across countries (or over time) would still require taking into account key differences across countries (or over time) like differences in demand, costs, network quality etc.

  • The Rewheel report bases its comparison on dissimilar service levels by not taking into account, for instance, relevant features like comparable network capacity, service security, and, perhaps most important, overall quality of service.

Rewheel’s unsupported conclusions

Rewheel uses its analysis to reach some strong conclusions, such as the declaration on the first page of its report that the median gigabyte price in countries with three carriers is twice as high as in countries with four carriers.

The figure below is a revised version of the figure on the first page of Rewheel’s report. The yellow blocks (gray dots) show the range of prices in countries with three carriers; the blue blocks (pink dots) show the range of prices in countries with four carriers. The darker blocks show the overlap of the two. The figure makes clear that there is substantial overlap in pricing between three- and four-carrier countries. Thus, it is not obvious that three-carrier countries have significantly higher prices (as measured by Rewheel) than four-carrier countries.

Source: Rewheel

A simple “eyeballing” of the data can lead to incorrect conclusions, in which case statistical analysis can provide some more certainty (or, at least, some measure of uncertainty). Yet, Rewheel provides no statistical analysis of its calculations, such as measures of statistical significance. However, information on page 5 of the Rewheel report can be used to perform some rudimentary statistical analysis.

I took the information from the columns for hypothetical monthly prices of €30 and €50 a month and converted the data into a price per gigabyte to generate the dependent variable. Following Rewheel’s assumption, “unlimited” is converted to 250 gigabytes per month. Greece was dropped from the analysis because Rewheel indicates that no data are available at either hypothetical price level.

My rudimentary statistical analysis includes the following independent variables:

  • Number of carriers (or mobile network operators, MNOs) reported by Rewheel in each country, ranging from three to five. Israel is the only country with five MNOs.
  • A dummy variable for EU28 countries. Rewheel performs a separate analysis for EU28 countries, suggesting it considers this an important distinction.
  • GDP per capita for each country, adjusted for purchasing power parity. Several articles in the literature suggest higher GDP countries would be expected to have higher wireless prices.
  • Population density, measured by persons per square kilometer. Several articles in the literature argue that countries with lower population density would have higher costs of providing wireless service which would, in turn, be reflected in higher prices.
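A rudimentary regression of this kind can be sketched as follows. The data below are synthetic placeholders (the post's actual numbers come from page 5 of the Rewheel report), so the point is the mechanics, not the estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40  # roughly the number of countries in Rewheel's sample

# Synthetic stand-in data -- NOT Rewheel's actual figures.
mnos = rng.integers(3, 6, n).astype(float)      # number of carriers (3-5)
eu28 = rng.integers(0, 2, n).astype(float)      # EU28 membership dummy
gdp = rng.normal(40_000, 10_000, n)             # GDP per capita, PPP-adjusted
density = rng.lognormal(4.0, 1.0, n)            # persons per square kilometre
price_per_gb = rng.lognormal(0.5, 0.8, n)       # EUR per GB at the EUR 30 level

# OLS: price_per_gb ~ const + mnos + eu28 + gdp + density
X = np.column_stack([np.ones(n), mnos, eu28, gdp, density])
beta, _, _, _ = np.linalg.lstsq(X, price_per_gb, rcond=None)

# Standard errors and the t-statistic on the carrier-count coefficient.
resid = price_per_gb - X @ beta
dof = n - X.shape[1]
sigma2 = resid @ resid / dof
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
t_mnos = beta[1] / se[1]

# Goodness of fit.
r2 = 1 - (resid @ resid) / ((price_per_gb - price_per_gb.mean()) ** 2).sum()
print(f"MNO coefficient: {beta[1]:.3f}, t = {t_mnos:.2f}, R^2 = {r2:.3f}")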

The tables below confirm what an eyeballing of the figure suggests: Rewheel’s data show that the number of MNOs in a country has no statistically significant relationship with price per gigabyte, at either the €30 a month level or the €50 a month level.

[Table: regression results at the €30/month and €50/month levels]

While the signs on the MNO coefficient are negative (i.e., more carriers in a country is associated with lower prices), they are not statistically significantly different from zero at any of the traditional levels of statistical significance.

Also, the regressions suffer from relatively low measures of goodness-of-fit. The independent variables in the regression explain approximately five percent of the variation in the price per gigabyte. This is likely because of the cockamamie way Rewheel measures price, but is also due to the known problems with performing cross-sectional analysis of wireless pricing, as noted by Csorba & Pápai (2015):

Many regulatory policies are based on a comparison of prices between European countries, but these simple cross-sectional analyses can lead to misleading conclusions because of at least two reasons. First, the price difference between countries of n and (n + 1) active mobile operators can be due to other factors, and the analyst can never be sure of having solved the omitted variable bias problem. Second and more importantly, the effect of an additional operator estimated from a cross-sectional comparison cannot be equated with the effect of an actual entry that might have a long-lasting effect on a single market.

The Rewheel report cannot be relied upon in assessing consumer benefits or harm associated with the T-Mobile/Sprint merger, or any other merger

Rewheel apparently has a rich dataset of wireless pricing plans. Nevertheless, the analyses presented in its report are fundamentally flawed. Moreover, Rewheel’s conclusions regarding three- versus four-carrier countries are not only baseless, but clearly unsupported by closer inspection of the information presented in its report. The Rewheel report cannot be relied upon to inform regulatory oversight of the T-Mobile/Sprint merger or any other. This study is not unique, and it should serve as a caution to be wary of studies that merely eyeball information.

[TOTM: The following is the second in a series of posts by TOTM guests and authors on the FTC v. Qualcomm case, currently awaiting decision by Judge Lucy Koh in the Northern District of California. The first post, by Luke Froeb, Michael Doane & Mikhael Shor is here.

This post is authored by Douglas H. Ginsburg, Professor of Law, Antonin Scalia Law School at George Mason University; Senior Judge, United States Court of Appeals for the District of Columbia Circuit; and former Assistant Attorney General in charge of the Antitrust Division of the U.S. Department of Justice; and Joshua D. Wright, University Professor, Antonin Scalia Law School at George Mason University; Executive Director, Global Antitrust Institute; former U.S. Federal Trade Commissioner from 2013-15; and one of the founding bloggers at Truth on the Market.]

[Ginsburg & Wright: Professor Wright is recused from participation in the FTC litigation against Qualcomm, but has provided counseling advice to Qualcomm concerning other regulatory and competition matters. The views expressed here are our own and neither author received financial support.]

The Department of Justice Antitrust Division (DOJ) and Federal Trade Commission (FTC) have spent a significant amount of time in federal court litigating major cases premised upon an anticompetitive foreclosure theory of harm. Bargaining models, a tool used commonly in foreclosure cases, have been essential to the government’s theory of harm in these cases. In vertical merger or conduct cases, the core theory of harm is usually a variant of the claim that the transaction (or conduct) strengthens the firm’s incentives to engage in anticompetitive strategies that depend on negotiations with input suppliers. Bargaining models are a key element of the agency’s attempt to establish those claims and to predict whether and how firm incentives will affect negotiations with input suppliers, and, ultimately, the impact on equilibrium prices and output. Application of bargaining models played a key role in evaluating the anticompetitive foreclosure theories in the DOJ’s litigation to block the proposed merger of AT&T and Time Warner. A similar model is at the center of the FTC’s antitrust claims against Qualcomm and its patent licensing business model.

Modern antitrust analysis does not condemn business practices as anticompetitive without solid economic evidence of an actual or likely harm to competition. This cautious approach was developed in the courts for two reasons. The first is that the difficulty of distinguishing between procompetitive and anticompetitive explanations for the same conduct suggests there is a high risk of error. The second is that those errors are more likely to be false positives than false negatives because empirical evidence and judicial learning have established that unilateral conduct is usually either procompetitive or competitively neutral. In other words, while the risk of anticompetitive foreclosure is real, courts have sensibly responded by requiring plaintiffs to substantiate their claims with more than just theory or scant evidence that rivals have been harmed.

An economic model can help establish the likelihood and/or magnitude of competitive harm when the model carefully captures the key institutional features of the competition it attempts to explain. Naturally, this tends to mean that the economic theories and models proffered by dueling economic experts to predict competitive effects take center stage in antitrust disputes. The persuasiveness of an economic model turns on the robustness of its assumptions about the underlying market. Model predictions that are inconsistent with actual market evidence give one serious pause before accepting the results as reliable.

For example, many industries are characterized by bargaining between providers and distributors. The Nash bargaining framework can be used to predict the outcomes of bilateral negotiations based upon each party’s bargaining leverage. The model assumes that both parties are better off if an agreement is reached, but that as the utility of one party’s outside option increases relative to the bargain, it will capture an increasing share of the surplus. Courts have had to reconcile these seemingly complicated economic models with prior case law and, in some cases, with direct evidence that is apparently inconsistent with the results of the model.
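The core comparative static can be illustrated with a minimal sketch of the symmetric Nash bargaining solution (a simplification assuming equal bargaining weights and transferable utility): each party receives its outside option plus an equal share of the remaining surplus, so improving one party's outside option shifts more of the same pie to that party.

```python
def nash_split(total_surplus, outside_a, outside_b):
    """Symmetric Nash bargaining: each side gets its disagreement payoff
    (outside option) plus half of the gains from trade."""
    gains = total_surplus - outside_a - outside_b
    assert gains >= 0, "no agreement if the pie is smaller than the outside options"
    return outside_a + gains / 2, outside_b + gains / 2

# Equal outside options split the pie evenly...
print(nash_split(100, 10, 10))  # (50.0, 50.0)
# ...but raising A's outside option from 10 to 40 shifts surplus to A.
print(nash_split(100, 40, 10))  # (65.0, 35.0)
```

This is the sense in which a merger (or a credible blackout threat) that raises one side's outside option is modeled as increasing its bargaining leverage.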

Indeed, Professor Carl Shapiro recently used bargaining models to analyze harm to competition in two prominent cases alleging anticompetitive foreclosure—one initiated by the DOJ and one by the FTC—in which he served as the government’s expert economist. In United States v. AT&T Inc., Dr. Shapiro testified that the proposed transaction between AT&T and Time Warner would give the vertically integrated company leverage to extract higher prices for content from AT&T’s rival, Dish Network. Soon after, Dr. Shapiro presented a similar bargaining model in FTC v. Qualcomm Inc. He testified that Qualcomm leveraged its monopoly power over chipsets to extract higher royalty rates from smartphone OEMs, such as Apple, wishing to license its standard essential patents (SEPs). In each case, Dr. Shapiro’s models were criticized heavily by the defendants’ expert economists for ignoring market realities that play an important role in determining whether the challenged conduct was likely to harm competition.

Judge Leon’s opinion in AT&T/Time Warner—recently upheld on appeal—concluded that Dr. Shapiro’s application of the bargaining model was significantly flawed, based upon unreliable inputs, and undermined by evidence about actual market performance presented by defendant’s expert, Dr. Dennis Carlton. Dr. Shapiro’s theory of harm posited that the combined company would increase its bargaining leverage and extract greater affiliate fees for Turner content from AT&T’s distributor rivals. The increase in bargaining leverage was made possible by the threat of a post-merger blackout of Turner content for AT&T’s rivals. This theory rested on the assumption that the combined firm would have reduced financial exposure from a long-term blackout of Turner content and would therefore have more leverage to threaten a blackout in content negotiations. The purpose of his bargaining model was to quantify how much AT&T could extract from competitors subjected to a long-term blackout of Turner content.

Judge Leon highlighted a number of reasons for rejecting the DOJ’s argument. First, Dr. Shapiro’s model failed to account for existing long-term affiliate contracts, post-litigation offers of arbitration agreements, and the increasing competitiveness of the video programming and distribution industry. Second, Dr. Carlton had demonstrated persuasively that previous vertical integration in the video programming and distribution industry did not have a significant effect on content prices. Finally, Dr. Shapiro’s model primarily relied upon three inputs: (1) the total number of subscribers the unaffiliated distributor would lose in the event of a long-term blackout of Turner content, (2) the percentage of the distributor’s lost subscribers who would switch to AT&T as a result of the blackout, and (3) the profit margin AT&T would derive from the subscribers it gained from the blackout. Many of Dr. Shapiro’s inputs necessarily relied on critical assumptions and/or third-party sources. Judge Leon considered and discredited each input in turn. 

The parties in Qualcomm are, as of the time of this posting, still awaiting a ruling. Dr. Shapiro’s model in that case attempts to predict the effect of Qualcomm’s alleged “no license, no chips” policy. He compared the gains from trade OEMs receive when they purchase a chip from Qualcomm and pay Qualcomm a FRAND royalty to license its SEPs with the gains from trade OEMs receive when they purchase a chip from a rival manufacturer and pay a “royalty surcharge” to Qualcomm to license its SEPs. In other words, the FTC’s theory of harm is based upon the premise that Qualcomm is charging a supra-FRAND rate for its SEPs (the “royalty surcharge”) that squeezes the margins of OEMs. That margin squeeze, the FTC alleges, prevents rival chipset suppliers from obtaining a sufficient return when negotiating with OEMs. The FTC predicts the end result is a reduction in competition and an increase in the price of devices to consumers.

Qualcomm, like Judge Leon in AT&T, questioned the robustness of Dr. Shapiro’s model and its predictions in light of conflicting market realities. For example, Dr. Shapiro argued that the

leverage that Qualcomm brought to bear on the chips shifted the licensing negotiations substantially in Qualcomm’s favor and led to a significantly higher royalty than Qualcomm would otherwise have been able to achieve.

Yet, on cross-examination, Dr. Shapiro declined to move from theory to empirics when asked if he had quantified the effects of Qualcomm’s practice on any other chip makers. Instead, Dr. Shapiro responded that he had not, but he had “reason to believe that the royalty surcharge was substantial” and had “inevitable consequences.” Under Dr. Shapiro’s theory, one would predict that royalty rates were higher after Qualcomm obtained market power.

As with Dr. Carlton’s testimony inviting Judge Leon to square the DOJ’s theory with conflicting historical facts in the industry, Qualcomm’s economic expert, Dr. Aviv Nevo, provided an analysis of Qualcomm’s royalty agreements from 1990-2017, confirming that there was no economically meaningful difference between the royalty rates during the time frame when Qualcomm was alleged to have market power and the royalty rates outside of that time frame. He also presented evidence that ex ante royalty rates did not increase upon implementation of the CDMA standard or the LTE standard. Moreover, Dr. Nevo testified that the industry itself was characterized by declining prices and increasing output and quality.

Dr. Shapiro’s model in Qualcomm appears to suffer from many of the same flaws that ultimately discredited his model in AT&T/Time Warner: It is based upon assumptions that are contrary to real-world evidence and it does not robustly or persuasively identify anticompetitive effects. Some observers, including our Scalia Law School colleague and former FTC Chairman, Tim Muris, would apparently find it sufficient merely to allege a theoretical “ability to manipulate the marketplace.” But antitrust cases require actual evidence of harm. We think Professor Muris instead captured the appropriate standard in his important article rejecting attempts by the FTC to shortcut its requirement of proof in monopolization cases:

This article does reject, however, the FTC’s attempt to make it easier for the government to prevail in Section 2 litigation. Although the case law is hardly a model of clarity, one point that is settled is that injury to competitors by itself is not a sufficient basis to assume injury to competition …. Inferences of competitive injury are, of course, the heart of per se condemnation under the rule of reason. Although long a staple of Section 1, such truncation has never been a part of Section 2. In an economy as dynamic as ours, now is hardly the time to short-circuit Section 2 cases. The long, and often sorry, history of monopolization in the courts reveals far too many mistakes even without truncation.

Timothy J. Muris, The FTC and the Law of Monopolization, 67 Antitrust L. J. 693 (2000)

We agree. Proof of actual anticompetitive effects, rather than speculation derived from models that are not robust to market realities, is an important safeguard to ensure that Section 2 protects competition and not merely individual competitors.

The future of bargaining models in antitrust remains to be seen. Judge Leon certainly did not question the proposition that they could play an important role in other cases. Judge Leon closely dissected the testimony and models presented by both experts in AT&T/Time Warner. His opinion serves as an important reminder. As complex economic evidence like bargaining models become more common in antitrust litigation, judges must carefully engage with the experts on both sides to determine whether there is direct evidence on the likely competitive effects of the challenged conduct. Where “real-world evidence,” as Judge Leon called it, contradicts the predictions of a bargaining model, judges should reject the model rather than the reality. Bargaining models have many potentially important antitrust applications including horizontal mergers involving a bargaining component – such as hospital mergers, vertical mergers, and licensing disputes. The analysis of those models by the Ninth and D.C. Circuits will have important implications for how they will be deployed by the agencies and parties moving forward.

Near the end of her new proposal to break up Facebook, Google, Amazon, and Apple, Senator Warren asks, “So what would the Internet look like after all these reforms?”

It’s a good question, because, as she herself notes, “Twenty-five years ago, Facebook, Google, and Amazon didn’t exist. Now they are among the most valuable and well-known companies in the world.”

To Warren, our most dynamic and innovative companies constitute a problem that needs solving.

She described the details of that solution in a blog post:

First, [my administration would restore competition to the tech sector] by passing legislation that requires large tech platforms to be designated as “Platform Utilities” and broken apart from any participant on that platform.

* * *

For smaller companies…, their platform utilities would be required to meet the same standard of fair, reasonable, and nondiscriminatory dealing with users, but would not be required to structurally separate….

* * *
Second, my administration would appoint regulators committed to reversing illegal and anti-competitive tech mergers….
I will appoint regulators who are committed to… unwind[ing] anti-competitive mergers, including:

– Amazon: Whole Foods; Zappos;
– Facebook: WhatsApp; Instagram;
– Google: Waze; Nest; DoubleClick

Elizabeth Warren’s brave new world

Let’s consider for a moment what this brave new world will look like — not the nirvana imagined by regulators and legislators who believe that decimating a company’s business model will deter only the “bad” aspects of the model while preserving the “good,” as if by magic, but the inevitable reality of antitrust populism.  

Utilities? Are you kidding? For an overview of what the future of tech would look like under Warren’s “Platform Utility” policy, take a look at your water, electricity, and sewage service. Have you noticed any improvement (or reduction in cost) in those services over the past 10 or 15 years? How about the roads? Amtrak? Platform businesses operating under a similar regulatory regime would also similarly stagnate. Enforcing platform “neutrality” necessarily requires meddling in the most minute of business decisions, inevitably creating unintended and costly consequences along the way.

Network companies, like all businesses, differentiate themselves by offering unique bundles of services to customers. By definition, this means vertically integrating with some product markets and not others. Why are digital assistants like Siri bundled into mobile operating systems? Why aren’t the vast majority of third-party apps also bundled into the OS? If you want utilities regulators instead of Google or Apple engineers and designers making these decisions on the margin, then Warren’s “Platform Utility” policy is the way to go.

Grocery Stores. To take one specific case cited by Warren, how much innovation was there in the grocery store industry before Amazon bought Whole Foods? Since the acquisition, large grocery retailers, like Walmart and Kroger, have increased their investment in online services to better compete with the e-commerce champion. Many industry analysts expect grocery stores to use computer vision technology and artificial intelligence to improve the efficiency of check-out in the near future.

Smartphones. Imagine how forced neutrality would play out in the context of iPhones. If Apple can’t sell its own apps, it also can’t pre-install its own apps. A brand new iPhone with no apps — and even more importantly, no App Store — would be, well, just a phone, out of the box. How would users even access a site or app store from which to download independent apps? Would Apple be allowed to pre-install someone else’s apps? That’s discriminatory, too. Maybe it will be forced to offer a menu of all available apps in all categories (like the famously useless browser ballot screen demanded by the European Commission in its Microsoft antitrust case)? It’s hard to see how that benefits consumers — or even app developers.

Source: Free Software Magazine

Internet Search. Or take search. Calls for “search neutrality” have been bandied about for years. But most proponents of search neutrality fail to recognize that all of Google’s search results entail bias in favor of its own offerings. As Geoff Manne and Josh Wright noted in 2011, at the height of the search neutrality debate:

[S]earch engines offer up results in the form not only of typical text results, but also maps, travel information, product pages, books, social media and more. To the extent that alleged bias turns on a search engine favoring its own maps, for example, over another firm’s, the allegation fails to appreciate that text results and maps are variants of the same thing, and efforts to restrain a search engine from offering its own maps is no different than preventing it from offering its own search results.

Never mind that a Google bound by forced non-discrimination would likely be reduced to the antiquated “ten blue links” search results page it started with in 1998, instead of the far more useful “rich” results it offers today; logically, it would also mean Google somehow offering the set of links produced by any and all other search engines’ algorithms, in lieu of its own. If you think Google will continue to invest in and maintain the wealth of services it offers today on the strength of the profits derived from those search results, well, Elizabeth Warren is probably already your favorite politician.

Source: Web Design Museum  

And regulatory oversight of algorithmic content won’t just result in an impoverished digital experience; it will inevitably lead to an authoritarian one, as well:

Any agency granted a mandate to undertake such algorithmic oversight, and override or reconfigure the product of online services, thereby controls the content consumers may access…. This sort of control is deeply problematic… [because it saddles users] with a pervasive set of speech controls promulgated by the government. The history of such state censorship is one which has demonstrated strong harms to both social welfare and rule of law, and should not be emulated.

Digital Assistants. Consider also the veritable cage match among the tech giants to offer “digital assistants” and “smart home” devices with ever-more features at ever-lower prices. Today the allegedly non-existent competition among these companies is played out most visibly in this multi-featured market, comprising advanced devices tightly integrated with artificial intelligence, voice recognition, advanced algorithms, and a host of services. Under Warren’s nondiscrimination principle this market disappears. Each device can offer only a connectivity platform (if such a service is even permitted to be bundled with a physical device…) — and nothing more.

But such a world would entail not only the end of an entire promising avenue of consumer-benefiting innovation, but also the end of a promising avenue of consumer-benefiting competition. It beggars belief that anyone thinks consumers would benefit by forcing technology companies into their own silos, ensuring that the most powerful sources of competition for each other are confined to their own fiefdoms by order of law.

Breaking business models

Beyond the product-feature dimension, Sen. Warren’s proposal would be devastating for innovative business models. Why is Amazon Prime Video bundled with free shipping? Because the marginal cost of distribution for video is close to zero and bundling it with Amazon Prime increases the value proposition for customers. Why is almost every Google service free to users? Because Google’s business model is supported by ads, not monthly subscription fees. Each of the tech giants has carefully constructed an ecosystem in which every component reinforces the others. Sen. Warren’s plan would not only break up the companies, it would prohibit their business models — the ones that both created and continue to sustain these products. Such an outcome would manifestly harm consumers.

Both of Warren’s policy “solutions” are misguided and will lead to higher prices and less innovation. Her cause for alarm is built on a multitude of mistaken assumptions, but let’s address just a few (Warren’s claims in quotation marks):

  • “Nearly half of all e-commerce goes through Amazon.” Yes, but it has only 5% of total retail in the United States. As my colleague Kristian Stout says, “the Internet is not a market; it’s a distribution channel.”
  • “Amazon has used its immense market power to force smaller competitors like Diapers.com to sell at a discounted rate.” The real story, as the founders of Diapers.com freely admitted, is that they sold diapers as what they hoped would be a loss leader, intending to build out sales of other products once they had a base of loyal customers:

And so we started with selling the loss leader product to basically build a relationship with mom. And once they had the passion for the brand and they were shopping with us on a weekly or a monthly basis that they’d start to fall in love with that brand. We were losing money on every box of diapers that we sold. We weren’t able to buy direct from the manufacturers.

Like all entrepreneurs, Diapers.com’s founders took a calculated risk that didn’t pay off as hoped. Amazon subsequently acquired the company (after it had declined a similar buyout offer from Walmart). Antitrust laws protect consumers, not inefficient competitors. And no, this was not a case of predatory pricing: after many years of trying to make the business profitable as a subsidiary, Amazon shut it down in 2017.

  • “In the 1990s, Microsoft — the tech giant of its time — was trying to parlay its dominance in computer operating systems into dominance in the new area of web browsing. The federal government sued Microsoft for violating anti-monopoly laws and eventually reached a settlement. The government’s antitrust case against Microsoft helped clear a path for Internet companies like Google and Facebook to emerge.” The government’s settlement with Microsoft is not the reason Google and Facebook were able to emerge. Neither company entered the browser market at launch. Instead, they leapfrogged the browser entirely and created new platforms for the web (only later did Google create Chrome).

    Furthermore, if the Microsoft case is responsible for “clearing a path” for Google, is it not also responsible for clearing a path for Google’s alleged depredations? If the answer is that antitrust enforcement should be consistently more aggressive in order to rein in Google, too, when it gets out of line, then how can we be sure that that same more-aggressive enforcement standard wouldn’t have curtailed the extent of the Microsoft ecosystem in which it was profitable for Google to become Google? Warren implicitly assumes that only the enforcement decision in Microsoft was relevant to Google’s rise. But Microsoft doesn’t exist in a vacuum. If Microsoft cleared a path for Google, so did every decision not to intervene, which, all combined, created the legal, business, and economic environment in which Google operates.

Warren characterizes Big Tech as a weight on the American economy. In fact, nothing could be further from the truth. These superstar companies are the drivers of productivity growth, all ranking at or near the top in spending on research and development. And while data may not be the new oil, extracting value from it may require similar levels of capital expenditure. Last year, Big Tech spent as much or more on capex as the world’s largest oil companies:

Source: WSJ

Warren also faults Big Tech for a decline in startups, saying,

The number of tech startups has slumped, there are fewer high-growth young firms typical of the tech industry, and first financing rounds for tech startups have declined 22% since 2012.

But this trend predates the existence of the companies she criticizes, as this chart from Quartz shows:

The exact causes of the decline in business dynamism are still uncertain, but recent research points to a much more mundane explanation: demographics. Labor force growth has been declining, which has increased average firm age as fewer workers strike out to start their own businesses.

Furthermore, it’s not at all clear whether this is actually a decline in business dynamism, or merely a change in business model. We would expect to see the same pattern, for example, if would-be startup founders were designing their software for acquisition and further development within larger, better-funded enterprises.

Will Rinehart recently looked at the literature to determine whether there is indeed a “kill zone” for startups around Big Tech incumbents. One paper finds that “an increase in fixed costs explains most of the decline in the aggregate entrepreneurship rate.” Another shows an inverse correlation across 50 countries between GDP and entrepreneurship rates. Robert Lucas predicted these trends back in 1978, pointing out that productivity increases would lead to wage increases, pushing marginal entrepreneurs out of startups and into big companies.

It’s notable that many in the venture capital community would rather not have Sen. Warren’s “help.”

Arguably, it is also simply getting harder to innovate. As economists Nick Bloom, Chad Jones, John Van Reenen and Michael Webb argue,

just to sustain constant growth in GDP per person, the U.S. must double the amount of research effort searching for new ideas every 13 years to offset the increased difficulty of finding new ideas.

If this assessment is correct, it may well be that coming up with productive and profitable innovations is simply becoming more expensive, and thus, at the margin, each dollar of venture capital can fund less of it. Ironically, this also implies that larger firms, which can better afford the additional resources required to sustain exponential growth, are a crucial part of the solution, not the problem.

Warren believes that Big Tech is the cause of our social ills. But Americans have more trust in Amazon, Facebook, and Google than in the political institutions that would break them up. It would be wise for her to reflect on why that might be the case. By punishing our most valuable companies for past successes, Warren would chill competition and decrease returns to innovation.

Finally, in what can only be described as tragic irony, the most prominent political figure who shares Warren’s feelings on Big Tech is President Trump. Confirming the horseshoe theory of politics, far-left populism and far-right populism seem less distinguishable by the day. As our colleague Gus Hurwitz put it, with this proposal Warren is explicitly endorsing the unitary executive theory and implicitly endorsing Trump’s authority to direct his DOJ to “investigate specific cases and reach specific outcomes.” Which cases will he want to have investigated and what outcomes will he be seeking? More good questions that Senator Warren should be asking. The notion that competition, consumer welfare, and growth are likely to increase in such an environment is farcical.

Longtime TOTM blogger Paul Rubin has a new book, now available for preorder on Amazon.

The book’s description reads:

In spite of its numerous obvious failures, many presidential candidates and voters are in favor of a socialist system for the United States. Socialism is consistent with our primitive evolved preferences, but not with a modern complex economy. One reason for the desire for socialism is the misinterpretation of capitalism.

The standard definition of free market capitalism is that it’s a system based on unbridled competition. But this oversimplification is incredibly misleading—capitalism exists because human beings have organically developed an elaborate system based on trust and collaboration that allows consumers, producers, distributors, financiers, and the rest of the players in the capitalist system to thrive.

Paul Rubin, the world’s leading expert on cooperative capitalism, explains simply and powerfully how we should think about markets, economics, and business—making this book an indispensable tool for understanding and communicating the vast benefits the free market bestows upon societies and individuals.