
In the world of video games, the process by which players train themselves or their characters in order to overcome a difficult “boss battle” is called “leveling up.” I find that the phrase also serves as a useful metaphor in the context of corporate mergers. Here, “leveling up” can be thought of as acquiring another firm in order to enter or reinforce one’s presence in an adjacent market where a larger and more successful incumbent is already active.

In video-game terminology, that incumbent would be the “boss.” Acquiring firms choose to level up when they recognize that building internal capacity to compete with the “boss” is too slow, too expensive, or simply infeasible. An acquisition thus becomes the only way “to beat the boss” (or, at least, to maximize the odds of doing so).

Alas, this behavior is often mischaracterized as a “killer acquisition” or “reverse killer acquisition.” What separates leveling up from killer acquisitions is that the former serves to turn the merged entity into a more powerful competitor, while the latter attempts to weaken competition. In the case of “reverse killer acquisitions,” the assumption is that the acquiring firm would have entered the adjacent market absent the merger, leaving even more firms competing in that market.

In other words, the distinction ultimately boils down to a simple (though hard to answer) question: could both the acquiring and target firms have effectively competed with the “boss” without a merger?

Because they are ubiquitous in the tech sector, these mergers—sometimes also referred to as acquisitions of nascent competitors—have drawn tremendous attention from antitrust authorities and policymakers. All too often, policymakers fail to adequately consider the realistic counterfactual to a merger and mistake leveling up for a killer acquisition. The most recent high-profile example is Meta’s acquisition of the virtual-reality fitness app Within. But in what may be a hopeful sign of a turning of the tide, a federal court appears set to clear that deal over objections from the Federal Trade Commission (FTC).

Some Recent ‘Boss Battles’

The canonical example of leveling up in tech markets is likely Google’s acquisition of Android back in 2005. While Apple had not yet launched the iPhone, it was already clear by 2005 that mobile would become an important way to access the internet (including Google’s search services). Rumors were swirling that Apple, following its tremendously successful iPod, had started developing a phone, and Microsoft had been working on Windows Mobile for a long time.

In short, there was a serious risk that Google would be reliant on a single mobile gatekeeper (i.e., Apple) if it did not move quickly into mobile. Purchasing Android was seen as the best way to do so. (Indeed, averting an analogous sort of threat appears to be driving Meta’s move into virtual reality today.)

The natural next question is whether Google or Android could have succeeded in the mobile market absent the merger. My guess is that the answer is no. In 2005, Google did not produce any consumer hardware. Quickly and successfully making the leap would have been daunting. As for Android:

Google had significant advantages that helped it to make demands from carriers and OEMs that Android would not have been able to make. In other words, Google was uniquely situated to solve the collective action problem stemming from OEMs’ desire to modify Android according to their own idiosyncratic preferences. It used the appeal of its app bundle as leverage to get OEMs and carriers to commit to support Android devices for longer with OS updates. The popularity of its apps meant that OEMs and carriers would have great difficulty in going it alone without them, and so had to engage in some contractual arrangements with Google to sell Android phones that customers wanted. Google was better resourced than Android likely would have been and may have been able to hold out for better terms with a more recognizable and desirable brand name than a hypothetical Google-less Android. In short, though it is of course possible that Android could have succeeded despite the deal having been blocked, it is also plausible that Android became so successful only because of its combination with Google. (citations omitted)

In short, everything suggests that Google’s purchase of Android was a good example of leveling up. Note that much the same could be said about the company’s decision to purchase Fitbit in order to compete against Apple and its Apple Watch (which quickly dominated the market after its launch in 2015).

A more recent example of leveling up is Microsoft’s planned acquisition of Activision Blizzard. In this case, the merger appears to be about improving Microsoft’s competitive position in the platform market for game consoles, rather than in the adjacent market for games.

At the time of writing, Microsoft is staring down the barrel of a gun: Sony is on the cusp of becoming the runaway winner of yet another console generation. Microsoft’s executives appear to have concluded that this is partly due to a lack of exclusive titles on the Xbox platform. Hence, they are seeking to purchase Activision Blizzard, one of the most successful game studios, known among other things for its acclaimed Call of Duty series.

Again, the question is whether Microsoft could challenge Sony by improving its internal game-publishing branch (known as Xbox Game Studios) or whether it needs to acquire a whole new division. This is obviously a hard question to answer, but a cursory glance at the titles shipped by Microsoft’s publishing studio suggests that the issues it faces could not simply be resolved by throwing more money at its existing capacities. Indeed, Xbox Game Studios seems to be plagued by organizational failings that might only be solved by creating more competition within Microsoft itself. As one gaming journalist summarized:

The current predicament of these titles goes beyond the amount of money invested or the buzzwords used to market them – it’s about Microsoft’s plan to effectively manage its studios. Encouraging independence isn’t an excuse for such a blatantly hands-off approach which allows titles to fester for years in development hell, with some fostering mistreatment to occur. On the surface, it’s just baffling how a company that’s been ranked as one of the top 10 most reputable companies eight times in 11 years (as per RepTrak) could have such problems with its gaming division.

The upshot is that Microsoft appears to have recognized that its own game-development branch is failing, and that acquiring a well-functioning rival is the only way to rapidly compete with Sony. There is thus a strong case that competition authorities and courts should tread carefully before blocking the merger, as it has at least the potential to significantly increase competition in the game-console industry.

Finally, leveling up is sometimes a way for smaller firms to try to move faster than incumbents into a burgeoning and promising segment. The best example of this is arguably Meta’s effort to acquire Within, a developer of VR fitness apps. Rather than being an attempt to thwart competition from a rival in the VR app market, the goal of the merger appears to be to compete with the likes of Google, Apple, and Sony at the platform level. As Mark Zuckerberg wrote back in 2015, when Meta’s VR/AR strategy was still in its infancy:

Our vision is that VR/AR will be the next major computing platform after mobile in about 10 years… The strategic goal is clearest. We are vulnerable on mobile to Google and Apple because they make major mobile platforms. We would like a stronger strategic position in the next wave of computing….

Over the next few years, we’re going to need to make major new investments in apps, platform services, development / graphics and AR. Some of these will be acquisitions and some can be built in house. If we try to build them all in house from scratch, then we risk that several will take too long or fail and put our overall strategy at serious risk. To derisk this, we should acquire some of these pieces from leading companies.

In short, many of the tech mergers that critics portray as killer acquisitions are just as likely to be attempts by firms to compete head-on with incumbents. This “leveling up” is precisely the sort of beneficial outcome that antitrust laws were designed to promote.

Building Products Is Hard

Critics are often quick to apply the “killer acquisition” label to any merger where a large platform is seeking to enter or reinforce its presence in an adjacent market. The preceding paragraphs demonstrate that it’s not that simple, as these mergers often enable firms to improve their competitive position in the adjacent market. For obvious reasons, antitrust authorities and policymakers should be careful not to thwart this competition.

The harder part is how to separate the wheat from the chaff. While I don’t have a definitive answer, an easy first step would be for authorities to more seriously consider the supply side of the equation.

Building a new product is incredibly hard, even for the most successful tech firms. Microsoft famously failed with its Zune music player and Windows Phone. The Google+ social network never gained any traction. Meta’s foray into the cryptocurrency industry was a sobering experience. Amazon’s Fire Phone bombed. Even Apple, which usually epitomizes Silicon Valley firms’ ability to enter new markets, has had its share of dramatic failures: Apple Maps, its Ping social network, and the first HomePod, to name a few.

To put it differently, policymakers should not assume that internal growth is always a realistic alternative to a merger. Instead, they should carefully examine whether such a strategy is timely, cost-effective, and likely to succeed.

This is obviously a daunting task. Firms will struggle to dispositively show that they need to acquire the target firm in order to effectively compete against an incumbent. The question essentially hinges on the quality of the firm’s existing management, engineers, and capabilities. All of these are difficult—perhaps even impossible—to measure. At the very least, policymakers can improve the odds of reaching a correct decision by approaching these mergers with an open mind.

Under Chair Lina Khan’s tenure, the FTC has opted for the opposite approach and taken a decidedly hostile view of tech acquisitions. The commission sued to block both Meta’s purchase of Within and Microsoft’s acquisition of Activision Blizzard. Likewise, several economists—notably Tommaso Valletti—have called for policymakers to reverse the burden of proof in merger proceedings, and opined that all mergers should be viewed with suspicion because, absent efficiencies, they always reduce competition.

Unfortunately, this skeptical approach is something of a self-fulfilling prophecy: when authorities view mergers with suspicion, they are likely to be dismissive of the benefits discussed above. Mergers will be blocked, and entry into adjacent markets will have to occur via internal growth.

Large tech companies’ many failed attempts to enter adjacent markets via internal growth suggest that such an outcome would ultimately harm the digital economy. Too many “boss battles” will needlessly be lost, depriving consumers of precious competition and destroying startup companies’ exit strategies.

With just a week to go until the U.S. midterm elections, which potentially herald a change in control of one or both houses of Congress, speculation is mounting that congressional Democrats may seek to use the lame-duck session following the election to move one or more pieces of legislation targeting the so-called “Big Tech” companies.

Gaining particular notice—on grounds that it is the least controversial of the measures—is S. 2710, the Open App Markets Act (OAMA). Introduced by Sen. Richard Blumenthal (D-Conn.), the Senate bill has garnered 14 cosponsors: exactly seven Republicans and seven Democrats. It would, among other things, force certain mobile app stores and operating systems to allow “sideloading” and open their platforms to rival in-app payment systems.

Unfortunately, even this relatively restrained legislation—at least, when compared to Sen. Amy Klobuchar’s (D-Minn.) American Innovation and Choice Online Act or the European Union’s Digital Markets Act (DMA)—is highly problematic in its own right. Here, I will offer seven major questions the legislation leaves unresolved.

1.     Are Quantitative Thresholds a Good Indicator of ‘Gatekeeper Power’?

It is no secret that OAMA has been tailor-made to regulate two specific app stores: Android’s Google Play Store and Apple’s App Store (see here, here, and, yes, even Wikipedia knows it). The text makes this clear by limiting the bill’s scope to app stores with more than 50 million users, a threshold that only Google Play and the Apple App Store currently satisfy.

However, purely quantitative thresholds are a poor indicator of a company’s potential “gatekeeper power.” An app store might have far fewer than 50 million users but cater to a relevant niche market. By the bill’s own logic, why shouldn’t that app store likewise be compelled to open itself to competing app distributors? Conversely, it may be easy for users of very large app stores to multi-home or switch seamlessly to competing stores. In either case, raw user numbers paint a distorted picture of the market’s realities.

As it stands, the bill’s thresholds appear arbitrary and pre-committed to “disciplining” just two companies: Google and Apple. In principle, good laws should be abstract and general and not intentionally crafted to apply only to a few select actors. In OAMA’s case, the law’s specific thresholds are also factually misguided, as purely quantitative criteria are not a good proxy for the sort of market power the bill purportedly seeks to curtail.

2.     Why Does the Bill Not Apply to All App Stores?

Rather than applying to app stores across the board, OAMA targets only those associated with mobile devices and “general purpose computing devices.” It’s not clear why.

For example, why doesn’t it cover app stores on gaming platforms, such as Microsoft’s Xbox or Sony’s PlayStation?

Source: Visual Capitalist

Currently, a PlayStation user can only buy digital games through the PlayStation Store, where Sony reportedly takes a 30% cut of all sales—although its pricing schedule is less transparent than that of mobile rivals such as Apple or Google.

Clearly, this bothers some developers. Much like Epic Games CEO Tim Sweeney’s ongoing crusade against the Apple App Store, indie-game publisher Iain Garner of Neon Doctrine recently took to Twitter to complain about Sony’s restrictive practices. According to Garner, “Platform X” (clearly PlayStation) charges developers up to $25,000 and 30% of subsequent earnings to give games a modicum of visibility on the platform, in addition to requiring them to jump through such hoops as making a PlayStation-specific trailer and writing a blog post. Garner further alleges that Sony severely circumscribes developers’ ability to offer discounts, “meaning that Platform X owners will always get the worst deal!” (see also here).

Microsoft’s Xbox Game Store similarly takes a 30% cut of sales. Presumably, Microsoft and Sony both have the same type of gatekeeper power in the gaming-console market that Apple and Google are said to have on their respective platforms, leading to precisely the issues that OAMA ostensibly purports to combat. Namely, consumers are not allowed to choose alternative app stores through which to buy games on their respective consoles, and developers must acquiesce to Sony’s and Microsoft’s terms if they want their games to reach those players.

More broadly, dozens of online platforms also charge commissions on the sales made by their creators. To cite but a few: OnlyFans takes a 20% cut of sales; Facebook gets 30% of the revenue that creators earn from their followers; YouTube takes 45% of ad revenue generated by users; and Twitch reportedly rakes in 50% of subscription fees.

This is not to say that all these services are monopolies that should be regulated. To the contrary, fees in the 20-30% range appear common even in highly competitive environments. Rather, it is merely to observe that dozens of online platforms demand a percentage of the revenue their creators generate and prevent those creators from bypassing the platform. And rightly so: creating and improving a platform is not free.

It is nonetheless difficult to see why legislation regulating online marketplaces should focus solely on two mobile app stores. Ultimately, the inability of OAMA’s sponsors to properly account for this carveout diminishes the law’s credibility.

3.     Should Picking Among Legitimate Business Models Be Up to Lawmakers or Consumers?

“Open” and “closed” platforms posit two different business models, each with its own advantages and disadvantages. Some consumers may prefer more open platforms because they grant them more flexibility to customize their mobile devices and operating systems. But there are also compelling reasons to prefer closed systems. As Sam Bowman observed, narrowing choice through a more curated system frees users from having to research every possible option every time they buy or use some product. Instead, they can defer to the platform’s expertise in determining whether an app or app store is trustworthy or whether it contains, say, objectionable content.

Currently, users can choose to opt for Apple’s semi-closed “walled garden” iOS or Google’s relatively more open Android OS (which OAMA wants to pry open even further). Ironically, under the pretext of giving users more “choice,” OAMA would take away the possibility of choice where it matters the most—i.e., at the platform level. As Mikolaj Barczentewicz has written:

A sideloading mandate aims to give users more choice. It can only achieve this, however, by taking away the option of choosing a device with a “walled garden” approach to privacy and security (such as is taken by Apple with iOS).

This elides the nuances between the two models and pushes Android and iOS to converge on a single model. But if consumers unequivocally preferred open platforms, Apple would have no customers, because everyone would already be on Android.

Contrary to regulators’ simplistic assumptions, “open” and “closed” are not synonyms for “good” and “bad.” Instead, as Boston University’s Andrei Hagiu has shown, there are fundamental welfare tradeoffs at play between these two perfectly valid business models that belie simplistic characterizations of one being inherently superior to the other.

It is debatable whether courts, regulators, or legislators are well-situated to resolve these complex tradeoffs by substituting businesses’ product-design decisions and consumers’ revealed preferences with their own. After all, if regulators had such perfect information, we wouldn’t need markets or competition in the first place.

4.     Does OAMA Account for the Security Risks of Sideloading?

Platforms retaining some control over the apps or app stores allowed on their operating systems bolsters security, as it allows companies to weed out bad players.

Both Apple and Google do this, albeit to varying degrees. For instance, Android already allows sideloading and third-party in-app payment systems to some extent, while Apple runs a tighter ship. However, studies have shown that it is precisely the iOS “walled garden” model which gives it an edge over Android in terms of privacy and security. Even vocal Apple critic Tim Sweeney recently acknowledged that increased safety and privacy were competitive advantages for Apple.

The problem is that far-reaching sideloading mandates—such as the ones contemplated under OAMA—are fundamentally at odds with current privacy and security capabilities (see here and here).

OAMA’s defenders might argue that the law does allow covered platforms to raise safety and security defenses, thus making the tradeoffs between openness and security unnecessary. But the bill places such stringent conditions on those defenses that platform operators will almost certainly be deterred from risking running afoul of the law’s terms. To invoke the safety and security defenses, covered companies must demonstrate that provisions are applied on a “demonstrably consistent basis”; are “narrowly tailored and could not be achieved through less discriminatory means”; and are not used as a “pretext to exclude or impose unnecessary or discriminatory terms.”

Implementing these stringent requirements will drag enforcers into a micromanagement quagmire. There are thousands of potential spyware, malware, rootkit, backdoor, and phishing (to name just a few) software-security issues—all of which pose distinct threats to an operating system. The Federal Trade Commission (FTC) and the federal courts will almost certainly struggle to police the “consistency” requirement across such varied threat types.

Likewise, OAMA’s reference to “least discriminatory means” suggests there is only one valid answer to any given security-access tradeoff. Further, depending on one’s preferred balance between security and “openness,” a claimed security risk may or may not be “pretextual,” and thus may or may not be legal.

Finally, the bill text appears to preclude the possibility of denying access to a third-party app or app store for reasons other than safety and privacy. This would undermine Apple’s and Google’s two-tiered quality-control systems, which also control for “objectionable” content such as (child) pornography and social engineering. 

5.     How Will OAMA Safeguard the Rights of Covered Platforms?

OAMA is also deeply flawed from a procedural standpoint. Most importantly, there is no meaningful way for a firm to contest its designation as a “covered company,” or the harms that designation is presumed to entail.

Once a company is “covered,” it is presumed to hold gatekeeper power, with all the associated risks for competition, innovation, and consumer choice. Remarkably, this presumption does not admit any qualitative or quantitative evidence to the contrary. The only thing a covered company can do to rebut the designation is to demonstrate that it, in fact, has fewer than 50 million users.

By preventing companies from showing that they do not hold the kind of gatekeeper power that harms competition, decreases innovation, raises prices, and reduces choice (the bill’s stated objectives), OAMA severely tilts the playing field in the FTC’s favor. Even the EU’s enforcer-friendly DMA incorporated a last-minute amendment allowing firms to dispute their status as “gatekeepers.” While this defense is not perfect (companies cannot rely on the same qualitative evidence that the European Commission can use against them), at least gatekeeper status can be contested under the DMA.

6.     Should Legislation Protect Competitors at the Expense of Consumers?

Like most of the new wave of regulatory initiatives against Big Tech (but unlike antitrust law), OAMA is explicitly designed to help competitors, with consumers footing the bill.

For example, OAMA prohibits covered companies from using or combining nonpublic data obtained from third-party apps or app stores operating on their platforms in competition with those third parties. While this may have the short-term effect of redistributing rents away from these platforms and toward competitors, it risks harming consumers and third-party developers in the long run.

Platforms’ ability to integrate such data is part of what allows them to bring better and improved products and services to consumers in the first place. OAMA tacitly admits this by recognizing that the use of nonpublic data grants covered companies a competitive advantage. In other words, it allows them to deliver a product that is better than competitors’.

Prohibiting self-preferencing raises similar concerns. Why would a company that has invested billions in developing a successful platform and ecosystem not give preference to its own products to recoup some of that investment? After all, the possibility of exercising some control over downstream and adjacent products is what might have driven the platform’s development in the first place. In other words, self-preferencing may be a symptom of competition, and not the absence thereof. Third-party companies also would have weaker incentives to develop their own platforms if they can free-ride on the investments of others. And platforms that favor their own downstream products might simply be better positioned to guarantee their quality and reliability (see here and here).

In all of these cases, OAMA’s myopic focus on improving the lot of competitors for easy political points will upend the mobile ecosystems from which both users and developers derive significant benefit.

7.     Shouldn’t the EU Bear the Risks of Bad Tech Regulation?

Finally, U.S. lawmakers should ask themselves whether the European Union, which has no tech leaders of its own, is really a model to emulate. Today, after all, marks the day the long-awaited Digital Markets Act— the EU’s response to perceived contestability and fairness problems in the digital economy—officially takes effect. In anticipation of the law entering into force, I summarized some of the outstanding issues that will define implementation moving forward in this recent tweet thread.

We have been critical of the DMA here at Truth on the Market on several factual, legal, economic, and procedural grounds. The law’s problems range from it essentially being a tool to redistribute rents away from platforms and toward third parties, despite it being unclear why the latter group is inherently more deserving (Pablo Ibañez Colomo has raised a similar point); to its opacity and lack of clarity, and a process that appears tilted in the Commission’s favor; to the awkward way it interacts with EU competition law, ignoring the welfare tradeoffs between the models it seeks to impose and perfectly valid alternatives (see here and here); to its flawed assumptions (see, e.g., here on contestability under the DMA); to the dubious legal and economic value of the theory of harm known as “self-preferencing”; to the very real possibility of unintended consequences (e.g., in relation to security and interoperability mandates).

In other words, that the United States lags the EU in seeking to regulate this area might not be a bad thing, after all. Despite the EU’s insistence on being a trailblazing agenda-setter at all costs, the wiser thing in tech regulation might be to remain at a safe distance. This is particularly true when one considers the potentially large costs of legislative missteps and the difficulty of recalibrating once a course has been set.

U.S. lawmakers should take advantage of this dynamic and learn from some of the Old Continent’s mistakes. If they play their cards right and take the time to read the writing on the wall, they might just succeed in averting antitrust’s uncertain future.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on Antitrust’s Uncertain Future: Visions of Competition in the New Regulatory Landscape. Information on the authors and the entire series of posts is available here.]

In Free to Choose, Milton Friedman famously noted that there are four ways to spend money[1]:

  1. Spending your own money on yourself. For example, buying groceries or lunch. There is a strong incentive to economize and to get full value.
  2. Spending your own money on someone else. For example, buying a gift for another. There is a strong incentive to economize, but perhaps less to achieve full value from the other person’s point of view. Altruism is admirable, but it differs from value maximization, since—strictly speaking—giving cash would maximize the other’s value. Perhaps the point of a gift is precisely that it is not cash, and thus not a pure maximization of the other person’s welfare from their own point of view.
  3. Spending someone else’s money on yourself. For example, an expensed business lunch. “Pass me the filet mignon and Chateau Lafite! Do you have one of those menus without any prices?” There is a strong incentive to get maximum utility, but there is little incentive to economize.
  4. Spending someone else’s money on someone else. For example, applying the proceeds of taxes or donations. There may be an indirect desire to see utility, but incentives for quality and cost management are often diminished.

This framework can be criticized. Altruism has a role. Not all motives are selfish. There is an important role for action to help those less fortunate, which might mean, for instance, that a charity gains more utility from category (4) (assisting the needy) than from category (3) (the charity’s holiday party). It always depends on the facts and the context. However, there is certainly a grain of truth in the observation that charity begins at home and that, in the final analysis, people are best at managing their own affairs.

How would this insight apply to data interoperability? The difficult cases of assisting the needy do not arise here: there is no serious sense in which data interoperability does, or does not, result in destitution. Thus, Friedman’s observations seem to ring true: when it comes to “spending” data, those whose data it is seem most likely to maximize its value. This is especially so where the collection of data responds to incentives—that is, where the amount of data collected and processed depends on how much control over the data is possible.

The obvious exception would be a case of market power. If there is a monopoly with persistent barriers to entry, then the incentive may not be to maximize total utility but rather to limit data handling, so that a higher price can be charged for the lesser amount of data that remains available. This has arguably been seen with some data-handling rules: the “Jedi Blue” agreement on advertising bidding, Apple’s Intelligent Tracking Prevention and App Tracking Transparency, and Google’s proposed Privacy Sandbox all restrict the ability of others to handle data. Indeed, they may fail Friedman’s framework, since they amount to the platform deciding how to spend others’ data—in this case, by not allowing them to collect and process it at all.

It should be emphasized, though, that this is a special case. It depends on market power, and existing antitrust and competition laws speak to it. The courts will decide whether cases like Daily Mail v Google and Texas et al. v Google show illegal monopolization of data flows, so as to fall within this special case of market power. Outside the United States, cases like the U.K. Competition and Markets Authority’s Google Privacy Sandbox commitments and the European Union’s proposed commitments with Amazon seek to allow others to continue to handle their data and to prevent exclusivity from arising from platform dynamics, which could happen if a large platform prevents others from deciding how to account for data they are collecting. It will be recalled that even Robert Bork thought that there was a risk of market-power harms from the large Microsoft Windows platform a generation ago.[2] Where market-power risks are proven, there is a strong case that data exclusivity raises concerns because it creates an artificial barrier to entry. This would be untrue only if the benefits of centralized data control outweighed the deadweight loss from data restrictions (though query how well legal processes can verify this).

Yet the latest proposals go well beyond this. A broad interoperability right amounts to “open season” for spending others’ data. This makes perfect sense in the European Union, where there is no large domestic technology platform, meaning that the data is essentially owned by foreign entities (mostly, the shareholders of successful U.S. and Chinese companies). It must be very tempting to run an industrial policy on the basis that “we’ll never be Google” and thus to embrace “sharing is caring” with respect to others’ data.

But this would transgress the warning from Friedman: would people optimize data collection if it is open to mandatory sharing even without proof of market power? It is deeply concerning that the EU’s DATA Act is accompanied by an infographic that suggests that coffee-machine data might be subject to mandatory sharing, to allow competition in services related to the data (e.g., sales of pods; spare-parts automation). There being no monopoly in coffee machines, this simply forces vertical disintegration of data collection and handling. Why put a data-collection system into a coffee maker at all, if it is to be a common resource? Friedman’s category (4) would apply: the data is taken and spent by another. There is no guarantee that there would be sensible decision making surrounding the resource.

It will be interesting to see how common-law jurisdictions approach this issue. At the risk of stating the obvious, the polity in continental Europe differs from that in the English-speaking democracies when it comes to whether the collective, or the individual, should be in the driving seat. A close read of the UK CMA’s Google commitments is interesting, in that paragraph 30 requires no self-preferencing in data collection and requires future data-handling systems to be designed with impacts on competition in mind. No doubt the CMA is seeking to prevent data-handling exclusivity on the basis that this prevents companies from using their data collection to compete. This is far from the EU DATA Act’s position in that it is certainly not a right to handle Google’s data: it is simply a right to continue to process one’s own data.

U.S. proposals are at an earlier stage. It would seem important, as a matter of principle, not to make arbitrary decisions about vertical integration in data systems, and to identify specific market-power concerns instead, in line with common-law approaches to antitrust.

It might be very attractive to the EU to spend others’ data on their behalf, but that does not make it right. Those working on the U.S. proposals would do well to ensure that there is a meaningful market-power gate to avoid unintended consequences.

Disclaimer: The author was engaged for expert advice relating to the UK CMA’s Privacy Sandbox case on behalf of the complainant Marketers for an Open Web.


[1] Milton Friedman, Free to Choose (1980), pp. 115-119.

[2] Comments at the Yale Law School conference, Robert H. Bork’s Influence on Antitrust Law, Sep. 27-28, 2013.

A bipartisan group of senators unveiled legislation today that would dramatically curtail the ability of online platforms to “self-preference” their own services—for example, when Apple pre-installs its own Weather or Podcasts apps on the iPhone, giving it an advantage that independent apps don’t have. The measure accompanies a House bill that included similar provisions, with some changes.

1. The Senate bill closely resembles the House version, and the small improvements will probably not amount to much in practice.

The major substantive changes we have seen between the House bill and the Senate version are:

  1. Violations in Section 2(a) have been modified to refer only to conduct that “unfairly” preferences, limits, or discriminates between the platform’s products and others, and that “materially harm[s] competition on the covered platform,” rather than banning all preferencing, limits, or discrimination.
  2. The evidentiary burden required throughout the bill has been changed from “clear and convincing” evidence to a “preponderance of the evidence” (in other words, greater than 50%).
  3. An affirmative defense has been added to permit a platform to escape liability if it can establish that the challenged conduct “was narrowly tailored, was nonpretextual, and was necessary to… maintain or enhance the core functionality of the covered platform.”
  4. The minimum market capitalization for “covered platforms” has been lowered from $600 billion to $550 billion.
  5. The Senate bill would assess fines of 15% of revenues from the period during which the conduct occurred, in contrast with the House bill, which set fines equal to the greater of either 15% of prior-year revenues or 30% of revenues from the period during which the conduct occurred.
  6. Unlike the House bill, the Senate bill does not create a private right of action. Only the U.S. Justice Department (DOJ), the Federal Trade Commission (FTC), and state attorneys general could bring enforcement actions on the basis of the bill.

Item one here certainly mitigates the most extreme risks of the House bill, which was drafted, bizarrely, to ban all “preferencing” or “discrimination” by platforms. If that were made law, it could literally have broken much of the Internet. The softened language reduces that risk somewhat.

However, Section 2(b), which lists types of conduct that would presumptively establish a violation under Section 2(a), is largely unchanged. As outlined here, this would amount to a broad ban on a wide swath of beneficial conduct. And “unfair” and “material” are notoriously slippery concepts. As a practical matter, their inclusion here may not significantly alter the course of enforcement under the Senate legislation from what would ensue under the House version.

Item three, which allows challenged conduct to be defended if it is “necessary to… maintain or enhance the core functionality of the covered platform,” may also protect some conduct. But because the bill requires companies to prove that challenged conduct is not only beneficial, but necessary to realize those benefits, it effectively implements a “guilty until proven innocent” standard that is likely to prove impossible to meet. The threat of permanent injunctions and enormous fines will mean that, in many cases, companies simply won’t be able to justify the expense of endeavoring to improve even the “core functionality” of their platforms in any way that could trigger the bill’s liability provisions. Thus, again, as a practical matter, the difference between the Senate and House bills may be only superficial.

The effect of this will likely be to diminish product innovation in these areas, because companies could not know in advance whether the benefits of doing so would be worth the legal risk. We have previously highlighted existing conduct that may be lost if a bill like this passes, such as pre-installation of apps or embedding maps and other “rich” results in boxes on search engine results pages. But the biggest loss may be things we don’t even know about yet, that just never happen because the reward from experimentation is not worth the risk of being found to be “discriminating” against a competitor.

We dove into the House bill in Breaking Down the American Choice and Innovation Online Act and Breaking Down House Democrats’ Forthcoming Competition Bills.

2. The prohibition on “unfair self-preferencing” is vague and expansive and will make Google, Amazon, Facebook, and Apple’s products worse. Consumers don’t want digital platforms to be dumb pipes, or to act like a telephone network or sewer system. The Internet is filled with a superabundance of information and options, as well as a host of malicious actors. Good digital platforms act as middlemen, sorting information in useful ways and taking on some of the risk that exists when, inevitably, we end up doing business with untrustworthy actors.

When users have the choice, they tend to prefer platforms that do quite a bit of “discrimination”—that is, favoring some sellers over others, or offering their own related products or services through the platform. Most people prefer Amazon to eBay because eBay is chaotic and riskier to use.

Competitors that decry self-preferencing by the largest platforms—integrating two different products with each other, like putting a maps box showing only the search engine’s own maps on a search engine results page—argue that the conduct is enabled only by a platform’s market dominance and does not benefit consumers.

Yet these companies often do exactly the same thing in their own products, regardless of whether they have market power. Yelp includes a map on its search results page, not just restaurant listings. DuckDuckGo does the same. If these companies offer these features, it is presumably because they think their users want such results. It seems perfectly plausible that Google does the same because it thinks its users—literally the same users, in most cases—also want them.

Fundamentally, and as we discuss in Against the Vertical Discrimination Presumption, there is simply no sound basis to enact such a bill (even in a slightly improved version):

The notion that self-preferencing by platforms is harmful to innovation is entirely speculative. Moreover, it is flatly contrary to a range of studies showing that the opposite is likely true. In reality, platform competition is more complicated than simple theories of vertical discrimination would have it, and there is certainly no basis for a presumption of harm.

We discussed self-preferencing further in Platform Self-Preferencing Can Be Good for Consumers and Even Competitors, and showed that platform “discrimination” is often what consumers want from digital platforms in On the Origin of Platforms: An Evolutionary Perspective.

3. The bill massively empowers an FTC that seems intent on using antitrust to achieve political goals. The House bill would enable competitors to pepper covered platforms with frivolous lawsuits. The bill’s sponsors presumably hope that removing the private right of action will help to avoid that. But the Senate version still leaves intact a much more serious risk to the rule of law: its provisions are so broad that federal antitrust regulators will have enormous discretion over which cases they take.

This means that whoever is running the FTC and DOJ will be able to threaten covered platforms with a broad array of lawsuits, potentially to influence or control their conduct in other, unrelated areas. While some supporters of the bill regard this as a positive, most antitrust watchers would greet this power with much greater skepticism. Fundamentally, both bills grant antitrust enforcers wildly broad powers to pursue goals unrelated to competition. FTC Chair Lina Khan has, for example, argued that “the dispersion of political and economic control” ought to be antitrust’s goal. Commissioner Rebecca Kelly Slaughter has argued that antitrust should be “antiracist.”

Whatever the desirability of these goals, the broad discretionary authority the bills confer on the antitrust agencies means that individual commissioners may have significantly greater scope to pursue the goals that they, rather than Congress, believe to be right.

See discussions of this point at What Lina Khan’s Appointment Means for the House Antitrust Bills, Republicans Should Tread Carefully as They Consider ‘Solutions’ to Big Tech, The Illiberal Vision of Neo-Brandeisian Antitrust, and Alden Abbott’s discussion of FTC Antitrust Enforcement and the Rule of Law.

4. The bill adopts European principles of competition regulation. These are, to put it mildly, not obviously conducive to the sort of innovation and business growth that Americans may expect. Europe has no tech giants of its own, a condition that shows little sign of changing. Apple alone is worth as much as the top 30 companies in Germany’s DAX index, and the top 40 in France’s CAC index. Landmark European competition cases have seen Google fined for embedding Shopping results in the Search page—not because it hurt consumers, but because it hurt competing price-comparison websites.

A fundamental difference between American and European competition regimes is that the U.S. system is far more friendly to businesses that obtain dominant market positions because they have offered better products more cheaply. Under the American system, successful businesses are normally given broad scope to charge high prices and refuse to deal with competitors. This helps to increase the rewards and incentive to innovate and invest in order to obtain that strong market position. The European model is far more burdensome.

The Senate bill adopts a European approach to refusals to deal—the same approach that led the European Commission to fine Microsoft for including Windows Media Player with Windows—and applies it across Big Tech broadly. Adopting this kind of approach may end up undermining elements of U.S. law that support innovation and growth.

For more, see How US and EU Competition Law Differ.

5. The proposals are based on a misunderstanding of the state of competition in the American economy, and of antitrust enforcement. It is widely believed that the U.S. economy has seen diminished competition. This is mistaken, particularly with respect to digital markets. Apparent rises in market concentration and profit margins disappear when we look more closely: local-level concentration is falling even as national-level concentration is rising, driven by more efficient chains setting up more stores in areas that were previously served by only one or two firms.

And markup rises largely disappear after accounting for fixed costs like R&D and marketing.

Where profits are rising, in areas like manufacturing, it appears to be mainly driven by increased productivity, not higher prices. Real prices have not risen in line with markups. Where profitability has increased, it has been mainly driven by falling costs.

Nor has the number of antitrust cases brought by federal antitrust agencies fallen. The likelihood of a merger being challenged more than doubled between 1979 and 2017. And there is little reason to believe that the deterrent effect of antitrust has weakened. Many critics of Big Tech have decided that there must be a problem and have worked backwards from that conclusion, selecting whatever evidence supports it and ignoring the evidence that does not. The consequence of such motivated reasoning is bills like this.

See Geoff’s April 2020 written testimony to the House Judiciary Investigation Into Competition in Digital Markets here.

The dystopian novel is a powerful literary genre. It has given us such masterpieces as Nineteen Eighty-Four, Brave New World, and Fahrenheit 451. Though these novels often shed light on the risks of contemporary society and the zeitgeist of the era in which they were written, they also almost always systematically overshoot the mark (intentionally or not) and severely underestimate the radical improvements that stem from the technologies (or other causes) that they fear.

But dystopias are not just a literary phenomenon; they are also a powerful force in policy circles. This is epitomized by influential publications such as The Club of Rome’s 1972 report The Limits of Growth, whose dire predictions of Malthusian catastrophe have largely failed to materialize.

In an article recently published in the George Mason Law Review, we argue that contemporary antitrust scholarship and commentary are similarly afflicted by dystopian thinking. In that respect, today’s antitrust pessimists have set their sights predominantly on the digital economy—”Big Tech” and “Big Data”—in the process alleging a vast array of potential harms.

Scholars have notably argued that the data created and employed by the digital economy produces network effects that inevitably lead to tipping and to more concentrated markets (e.g., here and here). In other words, firms will allegedly accumulate insurmountable data advantages and thus thwart competitors for extended periods of time.

Some have gone so far as to argue that this threatens the very fabric of western democracy. For instance, parallels between the novel Nineteen Eighty-Four and the power of large digital platforms were plain to see when Epic Games launched an antitrust suit against Apple and its App Store in August 2020. The gaming company released a short video clip parodying Apple’s famous “1984” ad (which, upon its release, was itself widely seen as a critique of the tech incumbents of the time). Similarly, a piece in the New Statesman—titled “Slouching Towards Dystopia: The Rise of Surveillance Capitalism and the Death of Privacy”—concluded that:

Our lives and behaviour have been turned into profit for the Big Tech giants—and we meekly click ‘Accept.’ How did we sleepwalk into a world without privacy?

In our article, we argue that these fears are symptomatic of two different but complementary phenomena, which we refer to as “Antitrust Dystopia” and “Antitrust Nostalgia.”

Antitrust Dystopia is the pessimistic tendency among competition scholars and enforcers to assert that novel business conduct will cause technological advances to have unprecedented, anticompetitive consequences. This is almost always grounded in the belief that “this time is different”—that, despite the benign or positive consequences of previous, similar technological advances, this time those advances will have dire, adverse consequences absent enforcement to stave off abuse.

Antitrust Nostalgia is the biased assumption—often built into antitrust doctrine itself—that change is bad. Antitrust Nostalgia holds that, because a business practice has seemingly benefited competition before, changing it will harm competition going forward. Thus, antitrust enforcement is often skeptical of, and triggered by, various deviations from status quo conduct and relationships (i.e., “nonstandard” business arrangements) when change is, to a first approximation, the hallmark of competition itself.

Our article argues that these two worldviews are premised on particularly questionable assumptions about the way competition unfolds, in this case, in data-intensive markets.

The Case of Big Data Competition

The notion that digital markets are inherently more problematic than their brick-and-mortar counterparts—if there even is a meaningful distinction—is advanced routinely by policymakers, journalists, and other observers. The fear is that, left to their own devices, today’s dominant digital platforms will become all-powerful, protected by an impregnable “data barrier to entry.” Against this alarmist backdrop, nostalgic antitrust scholars have argued for aggressive antitrust intervention against the nonstandard business models and contractual arrangements that characterize these markets.

But as our paper demonstrates, a proper assessment of the attributes of data-intensive digital markets does not support either the dire claims or the proposed interventions.

1. Data is information

One of the most salient features of the data created and consumed by online firms is that, jargon aside, it is just information. As with other types of information, it thus tends to have at least some traits usually associated with public goods (i.e., goods that are non-rivalrous in consumption and not readily excludable). As the National Bureau of Economic Research’s Catherine Tucker argues, data “has near-zero marginal cost of production and distribution even over long distances,” making it very difficult to exclude others from accessing it. Meanwhile, multiple economic agents can simultaneously use the same data, making it non-rivalrous in consumption.

As we explain in our paper, these features make the nature of modern data almost irreconcilable with the alleged hoarding and dominance that critics routinely associate with the tech industry.

2. Data is not scarce; expertise is

Another important feature of data is that it is ubiquitous. The predominant challenge for firms is not so much in obtaining data but, rather, in drawing useful insights from it. This has two important implications for antitrust policy.

First, although data does not have the self-reinforcing characteristics of network effects, there is a sense that acquiring a certain amount of data and expertise is necessary to compete in data-heavy industries. It is (or should be) equally apparent, however, that this “learning by doing” advantage rapidly reaches a point of diminishing returns.

This is supported by significant empirical evidence. As our survey of the empirical literature shows, data generally entails diminishing marginal returns.

Second, it is firms’ capabilities, rather than the data they own, that lead to success in the marketplace. Critics who argue that firms such as Amazon, Google, and Facebook are successful because of their superior access to data might, in fact, have the causality in reverse. Arguably, it is because these firms have come up with successful industry-defining paradigms that they have amassed so much data, and not the other way around.

This dynamic can be seen at play in the early days of the search-engine market. In 2013, The Atlantic ran a piece titled “What the Web Looked Like Before Google.” By comparing the websites of Google and its rivals in 1998 (when Google Search was launched), the article shows how the current champion of search marked a radical departure from the status quo.

Even if it stumbled upon it by chance, Google immediately identified a winning formula for the search-engine market. It ditched the complicated classification schemes favored by its rivals and opted, instead, for a clean page with a single search box. This ensured that users could access the information they desired in the shortest possible amount of time—thanks, in part, to Google’s PageRank algorithm.

It is hardly surprising that Google’s rivals struggled to keep up with this shift in the search-engine industry. The theory of dynamic capabilities tells us that firms that have achieved success by indexing the web will struggle when the market rapidly moves toward a new paradigm (in this case, Google’s single search box and ten blue links). During the time it took these rivals to identify their weaknesses and repurpose their assets, Google kept on making successful decisions: notably, the introduction of Gmail, its acquisitions of YouTube and Android, and the introduction of Google Maps, among others.

Seen from this evolutionary perspective, Google thrived because its capabilities were perfect for the market at that time, while rivals were ill-adapted.

3. Data as a byproduct of, and path to, platform monetization

Policymakers should also bear in mind that platforms often must go to great lengths in order to create data about their users—data that these same users often do not know about themselves. Under this framing, data is a byproduct of firms’ activity, rather than an input necessary for rivals to launch a business.

This is especially clear when one looks at the formative years of numerous online platforms. Most of the time, these businesses were started by entrepreneurs who did not own much data but, instead, had a brilliant idea for a service that consumers would value. Even if data ultimately played a role in the monetization of these platforms, it does not appear that it was necessary for their creation.

Data often becomes significant only at a relatively late stage in these businesses’ development. A quick glance at the digital economy is particularly revealing in this regard. Google and Facebook, in particular, both launched their platforms under the assumption that building a successful product would eventually lead to significant revenues.

It took five years from its launch for Facebook to start making a profit. Even at that point, when the platform had 300 million users, it still was not entirely clear whether it would generate most of its income from app sales or online advertisements. It was another three years before Facebook started to cement its position as one of the world’s leading providers of online ads. During this eight-year timespan, Facebook prioritized user growth over the monetization of its platform. The company appears to have concluded (correctly, it turns out) that once its platform attracted enough users, it would surely find a way to make itself highly profitable.

This might explain how Facebook managed to build a highly successful platform despite a large data disadvantage when compared to rivals like MySpace. And Facebook is no outlier. The list of companies that prevailed despite starting with little to no data (and initially lacking a data-dependent monetization strategy) is lengthy. Other examples include TikTok, Airbnb, Amazon, Twitter, PayPal, Snapchat, and Uber.

Those who complain about the unassailable competitive advantages enjoyed by companies with troves of data have it exactly backward. Companies need to innovate to attract consumer data or else consumers will switch to competitors, including both new entrants and established incumbents. As a result, the desire to make use of more and better data drives competitive innovation, with manifestly impressive results. The continued explosion of new products, services, and apps is evidence that data is not a bottleneck to competition, but a spur to drive it.

We’ve Been Here Before: The Microsoft Antitrust Saga

Dystopian and nostalgic discussions concerning the power of successful technology firms are nothing new. Throughout recent history, there have been repeated calls for antitrust authorities to rein in these large companies. These calls for regulation have often led to increased antitrust scrutiny of some form. The Microsoft antitrust cases—which ran from the 1990s to the early 2010s on both sides of the Atlantic—offer a good illustration of the misguided “Antitrust Dystopia.”

In the mid-1990s, Microsoft was one of the most successful and vilified companies in America. After it obtained a commanding position in the desktop operating system market, the company sought to establish a foothold in the burgeoning markets that were developing around the Windows platform (many of which were driven by the emergence of the Internet). These included the Internet browser and media-player markets.

The business tactics employed by Microsoft to execute this transition quickly drew the ire of the press and rival firms, ultimately landing Microsoft in hot water with antitrust authorities on both sides of the Atlantic.

However, as we show in our article, though there were numerous calls for authorities to adopt a precautionary principle-type approach to dealing with Microsoft—and antitrust enforcers were more than receptive to these calls—critics’ worst fears never came to be.

This positive outcome is unlikely to be the result of the antitrust cases that were brought against Microsoft. In other words, the markets in which Microsoft operated seem to have self-corrected (or were misapprehended as competitively constrained) and, today, are generally seen as being unproblematic.

This is not to say that antitrust interventions against Microsoft were necessarily misguided. Instead, our critical point is that commentators and antitrust decisionmakers routinely overlooked or misinterpreted the existing and nonstandard market dynamics that ultimately prevented the worst anticompetitive outcomes from materializing. This is supported by several key factors.

First, the remedies that were imposed against Microsoft by antitrust authorities on both sides of the Atlantic were ultimately quite weak. It is thus unlikely that these remedies, by themselves, prevented Microsoft from dominating its competitors in adjacent markets.

Note that, if this assertion is wrong, and antitrust enforcement did indeed prevent Microsoft from dominating online markets, then there is arguably no need to reform the antitrust laws on either side of the Atlantic, nor even to adopt a particularly aggressive enforcement position. The remedies that were imposed on Microsoft were relatively localized. Accordingly, if antitrust enforcement did indeed prevent Microsoft from dominating other online markets, then it is antitrust enforcement’s deterrent effect that is to thank, and not the remedies actually imposed.

Second, Microsoft lost its bottleneck position. One of the biggest changes that took place in the digital space was the emergence of alternative platforms through which consumers could access the Internet. Indeed, as recently as January 2009, roughly 94% of all Internet traffic came from Windows-based computers. Just over a decade later, this number has fallen to about 31%. Android, iOS, and OS X have shares of roughly 41%, 16%, and 7%, respectively. Consumers can thus access the web via numerous platforms. The emergence of these alternatives reduced the extent to which Microsoft could use its bottleneck position to force its services on consumers in online markets.

Third, it is possible that Microsoft’s own behavior ultimately sowed the seeds of its relative demise. In particular, the alleged barriers to entry (rooted in nostalgic market definitions and skeptical analysis of “ununderstandable” conduct) that were essential to establishing the antitrust case against the company may have been pathways to entry as much as barriers.

Consider this error in the Microsoft court’s analysis of entry barriers: the court pointed out that new entrants faced a barrier that Microsoft didn’t face, in that Microsoft didn’t have to contend with a powerful incumbent impeding its entry by tying up application developers.

But while this may be true, Microsoft did face the absence of any developers at all, and had to essentially create (or encourage the creation of) businesses that didn’t previously exist. Microsoft thus created a huge positive externality for new entrants: existing knowledge and organizations devoted to software development, industry knowledge, reputation, awareness, and incentives for schools to offer courses. It could well be that new entrants, in fact, faced lower barriers with respect to app developers than did Microsoft when it entered.

In short, new entrants may face even more welcoming environments because of incumbents. This enabled Microsoft’s rivals to thrive.

Conclusion

Dystopian antitrust prophecies are generally doomed to fail, just like those belonging to the literary world. The reason is simple. While it is easy to identify what makes dominant firms successful in the present (i.e., what enables them to hold off competitors in the short term), it is almost impossible to conceive of the myriad ways in which the market could adapt. Indeed, it is today’s supra-competitive profits that spur the efforts of competitors.

Surmising that the economy will come to be dominated by a small number of successful firms is thus the same as believing that all market participants can be outsmarted by a few successful ones. This might occur in some cases or for some period of time, but as our article argues, it is bound to happen far less often than pessimists fear.

In short, dystopian scholars have not successfully made the case for precautionary antitrust. Indeed, the economic features of data make it highly unlikely that today’s tech giants could anticompetitively maintain their advantage for an indefinite amount of time, much less leverage this advantage in adjacent markets.

With this in mind, there is one dystopian novel that offers a fitting metaphor to end this article. The Man in the High Castle tells the story of an alternate present, in which Axis forces triumphed over the Allies during the Second World War. This turns the dystopia genre on its head: rather than arguing that the world is inevitably sliding toward a dark future, The Man in the High Castle posits that the present could be far worse than it is.

In other words, we should not take any of the luxuries we currently enjoy for granted. In the world of antitrust, critics routinely overlook that the emergence of today’s tech industry might have occurred thanks to, and not in spite of, existing antitrust doctrine. Changes to existing antitrust law should thus be dictated by a rigorous assessment of the various costs and benefits they would entail, rather than by a litany of hypothetical concerns. The most recent wave of calls for antitrust reform has so far failed to clear this low bar.

[This post adapts elements of “Should ASEAN Antitrust Laws Emulate European Competition Policy?”, published in the Singapore Economic Review (2021). Open access working paper here.]

U.S. and European competition laws diverge in numerous ways that have important real-world effects. Understanding these differences is vital, particularly as lawmakers in the United States, and the rest of the world, consider adopting a more “European” approach to competition.

In broad terms, the European approach is more centralized and political. The European Commission’s Directorate General for Competition (DG Comp) has significant de facto discretion over how the law is enforced. This contrasts with the common law approach of the United States, in which courts elaborate upon open-ended statutes through an iterative process of case law. In other words, the European system was built from the top down, while U.S. antitrust relies on a bottom-up approach, derived from arguments made by litigants (including the government antitrust agencies) and defendants (usually businesses).

This procedural divergence has significant ramifications for substantive law. European competition law includes more provisions akin to de facto regulation. This is notably the case for the “abuse of dominance” standard, in which a “dominant” business can be prosecuted for “abusing” its position by charging high prices or refusing to deal with competitors. By contrast, the U.S. system places more emphasis on actual consumer outcomes, rather than the nature or “fairness” of an underlying practice.

The American system thus affords firms more leeway to exclude their rivals, so long as this entails superior benefits for consumers. This may make the U.S. system more hospitable to innovation, since there is no built-in regulation of conduct for innovators who acquire a successful market position fairly and through normal competition.

In this post, we discuss some key differences between the two systems—including in areas like predatory pricing and refusals to deal—as well as the discretionary power the European Commission enjoys under the European model.

Exploitative Abuses

U.S. antitrust is, by and large, unconcerned with companies charging what some might consider “excessive” prices. The late Associate Justice Antonin Scalia, writing for the Supreme Court majority in the 2004 case Verizon v. Trinko, observed that:

The mere possession of monopoly power, and the concomitant charging of monopoly prices, is not only not unlawful; it is an important element of the free-market system. The opportunity to charge monopoly prices—at least for a short period—is what attracts “business acumen” in the first place; it induces risk taking that produces innovation and economic growth.

This contrasts with European competition-law cases, where firms may be found to have infringed competition law because they charged excessive prices. As the European Court of Justice (ECJ) held in 1978’s United Brands case: “In this case charging a price which is excessive because it has no reasonable relation to the economic value of the product supplied would be such an abuse.”

United Brands remains the EU’s foundational case on excessive pricing, and the European Commission reiterated in its 2009 guidance paper on abuse-of-dominance cases that such exploitative abuses are actionable. For some time, however, the commission showed little appetite for bringing them. In recent years, both the commission and some national authorities have revived their interest in excessive-pricing cases, most notably in the pharmaceutical sector.

European competition law also penalizes so-called “margin squeeze” abuses, in which a dominant upstream supplier charges a price to distributors that is too high for them to compete effectively with that same dominant firm downstream:

[I]t is for the referring court to examine, in essence, whether the pricing practice introduced by TeliaSonera is unfair in so far as it squeezes the margins of its competitors on the retail market for broadband connection services to end users. (Konkurrensverket v TeliaSonera Sverige, 2011)

As Scalia observed in Trinko, forcing firms to charge prices that are below a market’s natural equilibrium affects firms’ incentives to enter markets, notably with innovative products and more efficient means of production. But the problem is not just one of market entry and innovation.  Also relevant is the degree to which competition authorities are competent to determine the “right” prices or margins.

As Friedrich Hayek demonstrated in his influential 1945 essay The Use of Knowledge in Society, economic agents use information gleaned from prices to guide their business decisions. It is this distributed activity of thousands or millions of economic actors that enables markets to put resources to their most valuable uses, thereby leading to more efficient societies. By comparison, the efforts of central regulators to set prices and margins are necessarily inferior; there is simply no reasonable way for competition regulators to make such judgments in a consistent and reliable manner.

Given the substantial risk that investigations into purportedly excessive prices will deter market entry, such investigations should be circumscribed. But the court’s precedents, with their myopic focus on ex post prices, do not impose such constraints on the commission. The temptation to “correct” high prices—especially in the politically contentious pharmaceutical industry—may thus induce economically unjustified and ultimately deleterious intervention.

Predatory Pricing

A second important area of divergence concerns predatory-pricing cases. U.S. antitrust law subjects allegations of predatory pricing to two strict conditions:

  1. Monopolists must charge prices that are below some measure of their incremental costs; and
  2. There must be a realistic prospect that they will be able to recoup these initial losses.

In laying out its approach to predatory pricing, the U.S. Supreme Court has identified the risk of false positives and the clear cost of such errors to consumers. It thus has particularly stressed the importance of the recoupment requirement. As the court found in 1993’s Brooke Group Ltd. v. Brown & Williamson Tobacco Corp., without recoupment, “predatory pricing produces lower aggregate prices in the market, and consumer welfare is enhanced.”

Accordingly, U.S. authorities must prove that there are constraints that prevent rival firms from entering the market after the predation scheme, or that the scheme itself would effectively foreclose rivals from entering the market in the first place. Otherwise, the predator would be undercut by competitors as soon as it attempts to recoup its losses by charging supra-competitive prices.

Without the strong likelihood that a monopolist will be able to recoup lost revenue from underpricing, the overwhelming weight of economic evidence (to say nothing of simple logic) is that predatory pricing is not a rational business strategy. Thus, apparent cases of predatory pricing are most likely not, in fact, predatory; deterring or punishing them would actually harm consumers.

By contrast, the EU employs a more expansive legal standard to define predatory pricing, and almost certainly risks injuring consumers as a result. Authorities must prove only that a company has charged a price below its average variable cost, in which case its behavior is presumed to be predatory. Even when a firm charges prices that are between its average variable and average total cost, it can be found guilty of predatory pricing if authorities show that its behavior was part of a plan to eliminate a competitor. Most significantly, in neither case is it necessary for authorities to show that the scheme would allow the monopolist to recoup its losses.

[I]t does not follow from the case‑law of the Court that proof of the possibility of recoupment of losses suffered by the application, by an undertaking in a dominant position, of prices lower than a certain level of costs constitutes a necessary precondition to establishing that such a pricing policy is abusive. (France Télécom v Commission, 2009).

This aspect of the legal standard has no basis in economic theory or evidence—not even in the “strategic” economic theory that arguably challenges the dominant Chicago School understanding of predatory pricing. Indeed, strategic predatory pricing still requires some form of recoupment, and the refutation of any convincing business justification offered in response. For example, in a 2017 piece for the Antitrust Law Journal, Steven Salop lays out the “raising rivals’ costs” analysis of predation and notes that recoupment still occurs, just at the same time as predation:

[T]he anticompetitive conditional pricing practice does not involve discrete predatory and recoupment periods, as in the case of classical predatory pricing. Instead, the recoupment occurs simultaneously with the conduct. This is because the monopolist is able to maintain its current monopoly power through the exclusionary conduct.

The case of predatory pricing illustrates a crucial distinction between European and American competition law. The recoupment requirement embodied in American antitrust law serves to differentiate aggressive pricing behavior that improves consumer welfare—because it leads to overall price decreases—from predatory pricing that reduces welfare with higher prices. It is, in other words, entirely focused on the welfare of consumers.

The European approach, by contrast, reflects structuralist considerations far removed from a concern for consumer welfare. Its underlying fear is that dominant companies could use aggressive pricing to engender more concentrated markets. It is simply presumed that these more concentrated markets are invariably detrimental to consumers. Both the Tetra Pak and France Télécom cases offer clear illustrations of the ECJ’s reasoning on this point:

[I]t would not be appropriate, in the circumstances of the present case, to require in addition proof that Tetra Pak had a realistic chance of recouping its losses. It must be possible to penalize predatory pricing whenever there is a risk that competitors will be eliminated… The aim pursued, which is to maintain undistorted competition, rules out waiting until such a strategy leads to the actual elimination of competitors. (Tetra Pak v Commission, 1996).

Similarly:

[T]he lack of any possibility of recoupment of losses is not sufficient to prevent the undertaking concerned reinforcing its dominant position, in particular, following the withdrawal from the market of one or a number of its competitors, so that the degree of competition existing on the market, already weakened precisely because of the presence of the undertaking concerned, is further reduced and customers suffer loss as a result of the limitation of the choices available to them.  (France Télécom v Commission, 2009).

In short, the European approach leaves less room to analyze the concrete effects of a given pricing scheme, leaving it more prone to false positives than the U.S. standard explicated in the Brooke Group decision. Worse still, the European approach ignores not only the benefits that consumers may derive from lower prices, but also the chilling effect that broad predatory pricing standards may exert on firms that would otherwise seek to use aggressive pricing schemes to attract consumers.

Refusals to Deal

U.S. and EU antitrust law also differ greatly when it comes to refusals to deal. While the United States has limited the ability of either enforcement authorities or rivals to bring such cases, EU competition law sets a far lower threshold for liability.

As Justice Scalia wrote in Trinko:

Aspen Skiing is at or near the outer boundary of §2 liability. The Court there found significance in the defendant’s decision to cease participation in a cooperative venture. The unilateral termination of a voluntary (and thus presumably profitable) course of dealing suggested a willingness to forsake short-term profits to achieve an anticompetitive end. (Verizon v Trinko, 2004).

This highlights two key features of American antitrust law with regard to refusals to deal. To start, U.S. antitrust law generally does not apply the “essential facilities” doctrine. Accordingly, in the absence of exceptional facts, upstream monopolists are rarely required to supply their product to downstream rivals, even if that supply is “essential” for effective competition in the downstream market. Moreover, as Justice Scalia observed in Trinko, the Aspen Skiing case appears to concern only those limited instances where a firm’s refusal to deal stems from the termination of a preexisting and profitable business relationship.

While even this is not likely the economically appropriate limitation on liability, its impetus—ensuring that liability is found only in situations where procompetitive explanations for the challenged conduct are unlikely—is completely appropriate for a regime concerned with minimizing the cost to consumers of erroneous enforcement decisions.

As in most areas of antitrust policy, EU competition law is much more interventionist. Refusals to deal are a central theme of EU enforcement efforts, and there is a relatively low threshold for liability.

In theory, for a refusal to deal to infringe EU competition law, it must meet a set of fairly stringent conditions: the input must be indispensable, the refusal must eliminate all competition in the downstream market, and there must not be objective reasons that justify the refusal. Moreover, if the refusal to deal involves intellectual property, it must also prevent the appearance of a new good.

In practice, however, all of these conditions have been relaxed significantly by EU courts and the commission’s decisional practice. This is best evidenced by the lower court’s Microsoft ruling where, as John Vickers notes:

[T]he Court found easily in favor of the Commission on the IMS Health criteria, which it interpreted surprisingly elastically, and without relying on the special factors emphasized by the Commission. For example, to meet the “new product” condition it was unnecessary to identify a particular new product… thwarted by the refusal to supply but sufficient merely to show limitation of technical development in terms of less incentive for competitors to innovate.

EU competition law thus shows far less concern for its potential chilling effect on firms’ investments than does U.S. antitrust law.

Vertical Restraints

There are vast differences between U.S. and EU competition law relating to vertical restraints—that is, contractual restraints between firms that operate at different levels of the production process.

On the one hand, since the Supreme Court’s Leegin ruling in 2007, even price-related vertical restraints (such as resale price maintenance (RPM), under which a manufacturer can stipulate the prices at which retailers must sell its products) are assessed under the rule of reason in the United States. Some commentators have gone so far as to say that, in practice, U.S. case law on RPM almost amounts to per se legality.

Conversely, EU competition law treats RPM as severely as it treats cartels. Both RPM and cartels are considered to be restrictions of competition “by object”—the EU’s equivalent of a per se prohibition. This severe treatment also applies to non-price vertical restraints that tend to partition the European internal market.

Furthermore, in the Consten and Grundig ruling, the ECJ rejected the consequentialist, and economically grounded, principle that inter-brand competition is the appropriate framework to assess vertical restraints:

Although competition between producers is generally more noticeable than that between distributors of products of the same make, it does not thereby follow that an agreement tending to restrict the latter kind of competition should escape the prohibition of Article 85(1) merely because it might increase the former. (Consten SARL & Grundig-Verkaufs-GMBH v. Commission of the European Economic Community, 1966).

This treatment of vertical restrictions flies in the face of longstanding mainstream economic analysis of the subject. As Patrick Rey and Jean Tirole conclude:

Another major contribution of the earlier literature on vertical restraints is to have shown that per se illegality of such restraints has no economic foundations.

Unlike the EU, the U.S. Supreme Court in Leegin took account of the weight of the economic literature, and changed its approach to RPM to ensure that the law no longer simply precluded its arguable consumer benefits, writing: “Though each side of the debate can find sources to support its position, it suffices to say here that economics literature is replete with procompetitive justifications for a manufacturer’s use of resale price maintenance.” Further, the court found that the prior approach to resale price maintenance restraints “hinders competition and consumer welfare because manufacturers are forced to engage in second-best alternatives and because consumers are required to shoulder the increased expense of the inferior practices.”

The EU’s continued per se treatment of RPM, by contrast, strongly reflects its “precautionary principle” approach to antitrust. European regulators and courts readily condemn conduct that could conceivably injure consumers, even where such injury is, according to the best economic understanding, exceedingly unlikely. The U.S. approach, which rests on likelihood rather than mere possibility, is far less likely to condemn beneficial conduct erroneously.

Political Discretion in European Competition Law

EU competition law lacks a coherent analytical framework like that found in U.S. law’s reliance on the consumer welfare standard. The EU process is instead driven by a number of co-equal (and sometimes mutually exclusive) goals, including industrial policy and the perceived need to counteract foreign state ownership and subsidies. Such a wide array of conflicting aims produces a lack of clarity for firms seeking to conduct business. Moreover, the discretion that attends this fluid arrangement of goals yields an even larger problem.

The Microsoft case illustrates this problem well. In Microsoft, the commission could have chosen to base its decision on various potential objectives. It notably chose to base its findings on the fact that Microsoft’s behavior reduced “consumer choice.”

The commission, in fact, discounted arguments that economic efficiency may lead to consumer welfare gains, because it determined “consumer choice” among media players was more important:

Another argument relating to reduced transaction costs consists in saying that the economies made by a tied sale of two products saves resources otherwise spent for maintaining a separate distribution system for the second product. These economies would then be passed on to customers who could save costs related to a second purchasing act, including selection and installation of the product. Irrespective of the accuracy of the assumption that distributive efficiency gains are necessarily passed on to consumers, such savings cannot possibly outweigh the distortion of competition in this case. This is because distribution costs in software licensing are insignificant; a copy of a software programme can be duplicated and distributed at no substantial effort. In contrast, the importance of consumer choice and innovation regarding applications such as media players is high. (Commission Decision No. COMP. 37792 (Microsoft)).

It may be true that tying the products in question was unnecessary. But merely dismissing the claimed efficiencies because distribution costs are near zero is hardly an analytically satisfactory response. There are many more costs involved in creating and distributing complementary software than those associated with hosting and downloading. The commission also simply asserts that consumer choice among some arbitrary number of competing products is necessarily a benefit. This, too, is not necessarily true, and the decision’s implication that any marginal increase in choice is more valuable than any gains from product design or innovation is analytically incoherent.

The Court of First Instance was only too happy to give the commission a pass on this breezy analysis; it saw no objection to the commission’s findings. With little substantive reasoning of its own, the court fully endorsed the commission’s assessment:

As the Commission correctly observes (see paragraph 1130 above), by such an argument Microsoft is in fact claiming that the integration of Windows Media Player in Windows and the marketing of Windows in that form alone lead to the de facto standardisation of the Windows Media Player platform, which has beneficial effects on the market. Although, generally, standardisation may effectively present certain advantages, it cannot be allowed to be imposed unilaterally by an undertaking in a dominant position by means of tying.

The Court further notes that it cannot be ruled out that third parties will not want the de facto standardisation advocated by Microsoft but will prefer it if different platforms continue to compete, on the ground that that will stimulate innovation between the various platforms. (Microsoft Corp. v Commission, 2007)

Pointing to these conflicting effects of Microsoft’s bundling decision, without weighing either, is a weak basis to uphold the commission’s decision that consumer choice outweighs the benefits of standardization. Moreover, actions undertaken by other firms to enhance consumer choice at the expense of standardization are, on these terms, potentially just as problematic. The dividing line becomes solely which theory the commission prefers to pursue.

What such a practice does is vest the commission with immense discretionary power. Any given case sets up a “heads, I win; tails, you lose” situation in which defendants are easily outflanked by a commission that can change the rules of its analysis as it sees fit. Defendants can play only the cards that they are dealt. Accordingly, Microsoft could not successfully challenge a conclusion that its behavior harmed consumers’ choice by arguing that it improved consumer welfare, on net.

By selecting, in this instance, “consumer choice” as the standard to be judged, the commission was able to evade the constraints that might have been imposed by a more robust welfare standard. Thus, the commission can essentially pick and choose the objectives that best serve its interests in each case. This vastly enlarges the scope of potential antitrust liability, while also substantially decreasing the ability of firms to predict when their behavior may be viewed as problematic. It leads to what, in U.S. courts, would be regarded as an untenable risk of false positives that chill innovative behavior and create nearly unwinnable battles for targeted firms.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the law, economics, and policy of the antitrust lawsuits against Google. The entire series of posts is available here.]

As one of the few economic theorists in this symposium, I believe my comparative advantage lies in exactly that: economic theory. In this post, I want to remind people of the basic economic theories that we have at our disposal, “off the shelf,” to make sense of the U.S. Department of Justice’s lawsuit against Google. I do not mean this as a proclamation of “what economics has to say about X,” but merely as a way to help us frame the issue.

In particular, I’m going to focus on the economic concerns of Google paying phone manufacturers (Apple, in particular) to be the default search engine installed on phones. While there is not a large literature on the economic effects of default contracts, there is a large literature on something that I will argue is similar: trade promotions, such as slotting contracts, where a manufacturer pays a retailer for shelf space. Despite all the bells and whistles of the Google case, I will argue that, from an economic point of view, the contracts that Google signed are just trade promotions. No more, no less. And trade promotions are well-established as part of a competitive process that ultimately helps consumers. 

However, it is theoretically possible that such trade promotions hurt customers, so it is theoretically possible that Google’s contracts hurt consumers. Ultimately, the theoretical possibility of anticompetitive behavior that harms consumers does not seem plausible to me in this case.

Default Status

There are two reasons that Google paying Apple to be its default search engine is similar to a trade promotion. First, the deal brings awareness to the product, which nudges certain consumers/users to choose the product when they would not otherwise do so. Second, the deal does not prevent consumers from choosing the other product.

In the case of retail trade promotions, a promotional space given to Coca-Cola makes it marginally easier for consumers to pick Coke, and therefore some consumers will switch from Pepsi to Coke. But it does not reduce any consumer’s choice. The store will still have both items.

This is the same for a default search engine. The marginal searchers, who do not have a strong preference for either search engine, will stick with the default. But anyone can still install a new search engine, install a new browser, etc. It takes a few clicks, just as it takes a few steps to walk down the aisle to get the Pepsi; it is still an available choice.

If we stopped the analysis there, we could conclude that consumers are worse off (if only a tiny bit), since some customers will have to change the default app. But we also need to remember that this contract is part of a more general competitive process. The retail stores are competing with one another, as are smartphone manufacturers.

Despite popular claims to the contrary, Apple cannot charge anything it wants for its phone. It is competing with Samsung, etc. Therefore, Apple has to pass through some of Google’s payments to customers in order to compete with Samsung. Prices are lower because of this payment. As I phrased it elsewhere, Google is effectively subsidizing the iPhone. This cross-subsidization is a part of the competitive process that ultimately benefits consumers through lower prices.

These contracts lower consumer prices, even if we assume that Apple has market power. Anyone who recalls their Econ 101 knows that a monopolist chooses the quantity at which marginal revenue equals marginal cost. With a payment from Google, the marginal cost of selling a phone is lower, so Apple will increase quantity and lower its price.
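To make the pass-through arithmetic concrete, here is a minimal sketch with a hypothetical linear demand curve. All numbers are purely illustrative, not estimates of actual iPhone economics:

```python
# Monopoly pricing with a per-unit payment from a third party.
# Hypothetical demand: P = a - b*Q, with constant marginal cost c.
# Setting marginal revenue equal to marginal cost (MR = MC) gives
# Q* = (a - c) / (2b) and P* = (a + c) / 2.

def monopoly_outcome(a: float, b: float, c: float) -> tuple[float, float]:
    """Return (quantity, price) chosen by a monopolist facing P = a - b*Q."""
    q = (a - c) / (2 * b)
    p = a - b * q  # equivalently (a + c) / 2
    return q, p

# Illustrative numbers only.
a, b = 1000.0, 1.0   # demand intercept and slope
cost = 400.0         # marginal cost per phone
payment = 100.0      # Google's per-unit payment, lowering effective cost

q0, p0 = monopoly_outcome(a, b, cost)
q1, p1 = monopoly_outcome(a, b, cost - payment)

print(f"Without payment: Q = {q0:.0f}, P = {p0:.0f}")  # Q = 300, P = 700
print(f"With payment:    Q = {q1:.0f}, P = {p1:.0f}")  # Q = 350, P = 650
```

With linear demand, exactly half of the per-unit payment passes through to consumers as a lower price; other demand shapes change the fraction, but not the direction.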

One of the surprising things about markets is that buyers’ and sellers’ incentives can be aligned, even though it seems like they must be adversarial. Companies can indirectly bargain for their consumers. Commenting on Standard Fashion Co. v. Magrane-Houston Co., where a retail store contracted to only carry Standard’s products, Robert Bork (1978, pp. 306–7) summarized this idea as follows:

The store’s decision, made entirely in its own interest, necessarily reflects the balance of competing considerations that determine consumer welfare. Put the matter another way. If no manufacturer used exclusive dealing contracts, and if a local retail monopolist decided unilaterally to carry only Standard’s patterns because the loss in product variety was more than made up in the cost saving, we would recognize that decision was in the consumer interest. We do not want a variety that costs more than it is worth … If Standard finds it worthwhile to purchase exclusivity … the reason is not the barring of entry, but some more sensible goal, such as obtaining the special selling effort of the outlet.

How trade promotions could harm customers

Since Bork’s writing, many theoretical papers have shown exceptions to his logic. There are times when retailers’ incentives are not aligned with those of their customers. And we need to take those possibilities seriously.

The most common way to show the harm of these deals (or more commonly exclusivity deals) is to assume:

  1. There are large, fixed costs so that a firm must acquire a sufficient number of customers in order to enter the market; and
  2. An incumbent can lock in enough customers to prevent the entrant from reaching an efficient size.

Consumers can be locked in because there is some fixed cost of switching suppliers, or because of a coordination problem. If that’s true, customers can be made worse off, on net, because the Google contracts reduce consumer choice.

To understand the logic, let’s simplify the model to just search engines and searchers. Suppose there are two search engines (Google and Bing) and 10 searchers. However, to operate profitably, each search engine needs at least three searchers. If Google can entice eight searchers to use its product, Bing cannot operate profitably, even if Bing provides a better product. This holds even if everyone knows Bing would be a better product. The consumers are stuck in a coordination failure.
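The toy model above can be written down directly. Assuming, as in the text, 10 searchers and a minimum viable scale of three:

```python
# Sketch of the coordination-failure story: a search engine is viable
# only if it attracts at least MIN_SCALE of the N_SEARCHERS users.
# (These are the illustrative numbers from the text, nothing more.)

N_SEARCHERS = 10
MIN_SCALE = 3

def viable(users: int) -> bool:
    """An engine can operate profitably only at or above minimum scale."""
    return users >= MIN_SCALE

def entrant_viable(locked_in: int) -> bool:
    """If the incumbent locks in `locked_in` searchers, can an entrant
    reach minimum scale with the remaining searchers -- even if its
    product is better by assumption?"""
    return viable(N_SEARCHERS - locked_in)

print(entrant_viable(8))  # False: only 2 searchers remain, below scale
print(entrant_viable(7))  # True: 3 searchers remain, just enough
```

The point of the sketch is how knife-edge the result is: locking in seven searchers instead of eight leaves the entrant viable, which is why the argument is so sensitive to the model’s exact timing.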

We should be skeptical of coordination-failure models of inefficient outcomes. The problem with any story of coordination failure is that it is highly sensitive to the exact timing of the model. If Bing can preempt Google and offer customers an even better deal (the new entrant is better by assumption), then the coordination failure does not occur.

To argue that Bing could not execute a similar contract, the most common appeal is that the new entrant does not have the capital to pay upfront for these contracts, since it will only make money from its higher-quality search engine down the road. That makes sense until you remember that we are talking about Microsoft. I’m skeptical that capital is the real constraint. It seems much more likely that Google just has a more popular search engine.

The other problem with coordination failure arguments is that they are almost non-falsifiable. There is no way to tell, in the model, whether Google is used because of a coordination failure or whether it is used because it is a better product. If Google is a better product, then the outcome is efficient. The two outcomes are “observationally equivalent.” Compare this to the standard theory of monopoly, where we can (in principle) establish an inefficiency if the price is greater than marginal cost. While it is difficult to measure marginal cost, it can be done.
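By contrast, the monopoly-markup test mentioned above is at least computable in principle. A sketch using the standard textbook measure, the Lerner index (P - MC) / P, which is zero under perfect competition:

```python
# The Lerner index is the textbook test the text contrasts with
# coordination-failure stories: a markup over marginal cost is, at
# least in principle, observable evidence of market power.

def lerner_index(price: float, marginal_cost: float) -> float:
    """Markup as a share of price; 0 under perfect competition."""
    if price <= 0:
        raise ValueError("price must be positive")
    return (price - marginal_cost) / price

print(lerner_index(100.0, 100.0))  # 0.0 -> consistent with competition
print(lerner_index(100.0, 60.0))   # 0.4 -> a substantial markup
```

Measuring marginal cost is hard in practice, but the test is falsifiable in a way that a coordination-failure story is not.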

There is a general economic idea in these models that we need to pay attention to. If Google takes an action that prevents Bing from reaching efficient size, that may be an externality, sometimes called a network effect, and so that action may hurt consumer welfare.

I’m not sure how seriously to take these network effects. If more searchers allow Bing to make a better product, then literally any action (competitive or not) by Google is an externality. Making a better product that takes away consumers from Bing lowers Bing’s quality. That is, strictly speaking, an externality. Surely, that is not worthy of antitrust scrutiny simply because we find an externality.

And Bing also “takes away” searchers from Google, thus lowering Google’s possible quality. With network effects, bigger is better and it may be efficient to have only one firm. Surely, that’s not an argument we want to put forward as a serious antitrust analysis.

Put more generally, it is not enough to scream “NETWORK EFFECT!” and then have the antitrust authority come in, lawsuits-a-blazing. Well, it shouldn’t be enough.

For me to take the network-effect argument seriously from an economic point of view (as opposed to a legal one), I would need to see a real restriction on consumer choice, not just an externality. One needs to argue that:

  1. No competitor can cover their fixed costs to make a reasonable search engine; and
  2. These contracts are what prevent the competing search engines from reaching size.

That’s the challenge I would like to put forward to supporters of the lawsuit. I’m skeptical.

[TOTM: The following is part of a symposium by TOTM guests and authors marking the release of Nicolas Petit’s “Big Tech and the Digital Economy: The Moligopoly Scenario.” The entire series of posts is available here.

This post is authored by Shane Greenstein (Professor of Business Administration, Harvard Business School).
]

In his book, Nicolas Petit approaches antitrust issues by analyzing their economic foundations, and he aspires to bridge gaps between those foundations and the common points of view. In light of the divisiveness of today’s debates, I appreciate Petit’s calm and deliberate view of antitrust, and I respect his clear and engaging prose.

I spent a lot of time with this topic when writing a book (How the Internet Became Commercial, Princeton University Press, 2015). If I have something unique to add to a review of Petit’s book, it comes from the role Microsoft played in the events in my book.

Many commentators have speculated on what precise charges could be brought against Facebook, Google/Alphabet, Apple, and Amazon. For the sake of simplicity, let’s call these the “big four.” While I have no special insight to bring to such speculation, for this post I can do something different, and look forward by looking back. For the time being, Microsoft has been spared scrutiny by contemporary political actors. (It seems safe to presume Microsoft’s managers prefer to be left out.) While it is tempting to focus on why this has happened, let’s focus on a related issue: What shadow did Microsoft’s trials cast on the antitrust issues facing the big four?

Two types of lessons emerged from Microsoft’s trials, and both tend to be less appreciated by economists. One set of lessons emerged from the media flood of the flotsam and jetsam of sensationalistic factoids and sound bites, drawn from Congressional and courtroom testimony. That yielded lessons about managing sound and fury – i.e., mostly about reducing the cringe-worthy quotes from CEOs and trial witnesses.

Another set of lessons pertained to the role and limits of economic reasoning. Many decision makers reasoned by analogy and metaphor. That is especially so for lawyers and executives. These metaphors do not make economic reasoning wrong, but they do tend to shape how an antitrust question takes center stage with a judge, as well as in the court of public opinion. These metaphors also influence the stories a CEO tells to employees.

If you asked me to forecast how things will go for the big four, based on what I learned from studying Microsoft’s trials, my answer would be this: the outcome depends on which metaphor and analogy gets the upper hand.

In that sense, I want to argue that Microsoft’s experience depended on “the fox and shepherd problem.” When is a platform leader better thought of as a shepherd, helping partners achieve a healthy outcome, and when as a fox in charge of a henhouse, ready to sacrifice a partner for self-serving purposes? I forecast that the same metaphors will shape the experience of the big four.

Gaps and analysis

The fox-shepherd problem never shows up when a platform leader is young and its platform is small. As the platform reaches bigger scale, however, the problem becomes more salient. Conflicts of interest emerge and focus attention on platform leadership.

Petit frames these issues within a Schumpeterian vision. In this view, firms compete for dominant positions over time, potentially with one dominant firm replacing another. Potential competition has a salutary effect if established firms perceive a threat from the future shadow of such competitors, motivating innovation. In this view, antitrust’s role might be characterized as “keeping markets open so there is pressure on the dominant firm from potential competition.”

In the Microsoft trial, economists framed the Schumpeterian tradeoff in the vocabulary of economics. Firms that supply complements at one point could become suppliers of substitutes at a later point, if they are allowed to. In other words, platform leaders today support complements that enhance the value of the platform, while also having the motive and ability to discourage those same business partners from developing services that substitute for the platform’s services and could thereby reduce its value. Seen through this lens, platform leaders inherently face a conflict of interest, and antitrust law should intervene if platform leaders place excessive limitations on existing business partners.

This economic framing is not wrong. Rather, it is necessary, but not sufficient. If I take a sober view of events in the Microsoft trial, I am not convinced the economics alone persuaded the judge in Microsoft’s case, or, for that matter, the public.

As judges sort through the endless detail of contracting provisions, they need a broad perspective, one that sharpens their focus on a key question. One central question in particular inhabits a lot of a judge’s mindshare: how did the platform leader use its discretion, and for what purposes? In case it is not obvious, shepherds deserve a lot of discretion, while only a fool gives a fox much license.

Before the trial, when it initially faced this question from reporters and Congress, Microsoft tried to dismiss the discussion altogether. Its representatives argued that high technology differs from every other market in its speed and productivity and, therefore, ought to be thought of as incomparable to other antitrust examples. This reflected the high-tech elite’s view of their own exceptionalism.

Reporters dutifully restated this argument, but, long story short, it did not get far with the public once the sensationalism started making headlines, and it especially did not get far with the trial judge. To be fair, judging from recent congressional testimony, it appears the lawyers for the big four instructed their CEOs not to try this approach this time around.

Origins

Well before lawyers and advocates exaggerate claims, the perspectives of both sides usually have some merit, and usually the twain do not meet. Most executives remember every detail behind growth, know the risks confronted and overcome, and are usually reluctant to give up something that works for their interests, and sometimes those interests are narrowly defined. In contrast, many partners know examples of a rule that hindered them, point to complaints that executives ignored, and aspire to have the rules changed; again, their interests tend to be narrow.

Consider the quality-control process for iPhone apps today as an example. The merits and absurdity of some of Apple’s conduct get a lot of attention in online forums, especially Apple’s 30% take. Apple can reasonably claim that the present set of rules works well overall, that it emerged only after considerable experimentation, and that today Apple seeks to protect all who benefit from the entire system, like a shepherd. It is no surprise, however, that some partners accuse Apple of tweaking the rules to its own benefit, and of using the process to further Apple’s ambitions at the expense of its partners’, like a fox in a henhouse. So it goes.

More generally, based on publicly available information, all of the big four already face this debate. Self-serving behavior shows up in different guises in different parts of the big four’s businesses, but it is always there. As noted, Apple’s apps compete with the apps of others, so it has incentives to shape the distribution of those other apps. Amazon’s products compete with some products from its third-party sellers, and it too faces mixed incentives. Google’s services compete with online services that also advertise on its search engine, and it also faces questions over its charges for listing apps on the Play Store. Facebook faces an additional issue, because it has bought firms that were trying to grow their own platforms into competitors.

Look, those four each contain rather different businesses in their details, which merits some caution in making a sweeping characterization. My only point: the question about self-serving behavior arises in each instance. That frames a fox-shepherd problem for prosecutors in each case.

Lessons from prior experience

Circling back to the lessons of the past for antitrust today, the fox-shepherd problem was one of the deeper sources of miscommunication leading up to the Microsoft trial. In the late 1990s, Microsoft could reasonably claim to be a shepherd for all its platform’s partners, and it could reasonably claim to have improved the platform in ways that benefited them. Moreover, for years some of the industry gossip about its behavior stressed misinformed nonsense. Accordingly, Microsoft’s executives had learned to trust their own judgment and to mistrust the complaints of outsiders. Right in line with that mistrust, many employees and executives took umbrage at being characterized as a fox in a henhouse, dismissing the accusations out of hand.

Those habits of mind left the firm poorly positioned for a court case. As any observer of the trial knows, when prosecutors came looking, they found lots of examples that looked like fox-like behavior. Onerous contract restrictions and cumbersome processes for business partners produced plenty of bad optics in court, and fueled the prosecution’s case that the platform had become too self-serving at the expense of competitive processes. Prosecutors had plenty to work with when it came time to prove motive, intent, and the ability to misuse discretion.

What is the lesson for the big four? Ask an executive in technology today, and sometimes you will hear the following: as long as a platform’s actions can be construed as friendly to customers, the platform leader will be off the hook. That is not a wrong lesson, but it is an incomplete one. With both hindsight and foresight, that perspective seems too sanguine about the prospects for the big four. Microsoft had done plenty for its customers, but so what? There was plenty of evidence of it acting like a fox in a henhouse. The bigger lesson is this: all it took were a few bad examples to paint a picture of a pattern, and every firm has such examples.

Do not get me wrong. I am not saying the fox-and-henhouse analogy is fair or unfair to platform leaders. Rather, I am saying that economists like to think the economic trade-off between the interests of platform leaders, platform partners, and platform customers emerges from some grand policy compromise. That is not how prosecutors think, nor how judges decide. In the Microsoft case there was no such grand consideration. The economic framing of the case only went so far. As it was, the decision was vulnerable to metaphor, shrewdly applied and convincingly argued. Done persuasively, with enough examples of selfish behavior, excuses about “helping customers” came across as empty.

Policy

Some advocates argue, somewhat philosophically, that platforms deserve discretion, and that governments are bound to err once they intervene. I have sympathy with that point of view, but only up to a point. Below are two examples from outside antitrust where governments routinely do not give the big four a blank check.

First, when it started selling ads, Google banned ads for cigarettes, porn, and alcohol, and it downgraded the quality scores of websites that used deceptive means to attract users. That helped the service foster trust with new users, enabling it to grow. After it became bigger, should Google have continued to enjoy unqualified discretion to shepherd the entire ad system? Nobody thinks so. A while ago, the Federal Trade Commission decided to investigate deceptive online advertising, just as it investigates deceptive advertising in other media. It is not a big philosophical step to next ask whether Google should have unfettered discretion to structure the ad business, the search process, and related e-commerce to its own benefit.

Here is another example, this one about Facebook. Over the years, Facebook cycled through a number of rules for sharing information with business partners, generally taking a “relaxed” attitude toward enforcing those policies. Few observers cared when Facebook was small, but many governments started to care after Facebook grew to billions of users. Facebook’s lax monitoring did not line up with the preferences of many governments. It should not come as a surprise, then, that many governments now want to regulate Facebook’s handling of data. Like it or not, this question lies squarely within the domain of government privacy policy. Again, the next step is small. Why should other parts of its business, such as its ability to buy other businesses, remain solely within Facebook’s discretion?

This gets us to the other legacy of the Microsoft case: as we think about future policy dilemmas, is there a general set of criteria for the antitrust issues facing all four firms? Veterans of court cases will point out that every court case is its own circus. Just because Microsoft failed to be persuasive in its day does not imply that any of the big four will be similarly unpersuasive.

Looking back, the Microsoft trial did not articulate a general set of principles about acceptable or excusable self-serving behavior by a platform leader. It did not settle which criteria best determine when a court should consider a platform leader’s behavior closer to that of a shepherd or a fox. The appropriate general criteria remain unclear.

The DOJ and 20 state attorneys general sued Microsoft on May 18, 1998 for unlawful maintenance of its monopoly position in the market for PC operating systems. The government accused the desktop giant of tying its web browser (Internet Explorer) to its operating system (Windows). Microsoft had indeed become dominant in the PC market by the late 1980s:

Source: Asymco

But after the introduction of smartphones in the mid-2000s, Microsoft’s market share of personal computing units (including PCs, smartphones, and tablets) collapsed:

Source: Benedict Evans

Steven Sinofsky pointed out why this was a classic case of disruptive innovation rather than sustaining innovation: “Google and Microsoft were competitors but only by virtue of being tech companies hiring engineers. After that, almost nothing about what was being made or sold was similar even if things could ultimately be viewed as substitutes. That is literally the definition of innovation.”

Browsers

Microsoft grew to dominance during the PC era by bundling its desktop operating system (Windows) with its productivity software (Office) and modularizing the hardware providers. By 1995, Bill Gates had realized that the internet was the next big thing, calling it “The Internet Tidal Wave” in a famous internal memo. Gates feared that the browser would function as “middleware” and disintermediate Microsoft from its relationship with the end-user. At the time, Netscape Navigator was gaining market share from the first browser to popularize the internet, Mosaic (so-named because it supported a multitude of protocols).

Later that same year, Microsoft released its own browser, Internet Explorer, which would be bundled with its Windows operating system. Internet Explorer soon grew to dominate the market:

Source: Browser Wars

Steven Sinofsky described how the browser threatened to undermine the Windows platform (emphasis added):

Microsoft saw browsers as a platform threat to Windows. Famously. Browsers though were an app — running everywhere, distributed everywhere. Microsoft chose to compete as though browsing was on par with Windows (i.e., substitutes).

That meant doing things like IBM did — finding holes in distribution where browsers could “sneak” in (e.g., OEM deals) and seeing how to make Microsoft browser work best and only with Windows. Sound familiar? It does to me.

Imagine (some of us did) a world instead where Microsoft would have built a browser that was an app distributed everywhere, running everywhere. That would have been a very different strategy. One that some imagined, but not when Windows was central.

Showing how much your own gravity as a big company can make even obvious steps strategically weak: Microsoft knew browsers had to be cross-platform so it built Internet Explorer for Mac and Unix. Neat. But wait, the main strategic differentiator for Internet Explorer was ActiveX which was clearly Windows only.

So even when trying to compete in a new market the strategy was not going to work technically and customers would immediately know. Either they would ignore the key part of Windows or the key part of x-platform. This is what a big company “master plan” looks like … Active Desktop.

Regulators claimed victory but the loss already happened. But for none of the reasons the writers of history say at least [in my humble opinion]. As a reminder, Microsoft stopped working on Internet Explorer 7 years before Chrome even existed — literally didn’t release a new version for 5+ years.

One of the most important pieces of context for this case is that other browsers were also free for personal use (even if they weren’t bundled with an operating system). At the time, Netscape was free for individuals. Mosaic was free for non-commercial use. Today, Chrome and Firefox are free for all users. Chrome makes money for Google by increasing the value of its ecosystem and serving as a complement for its other products (particularly search). Firefox is able to more than cover its costs by charging Google (and others) to be the default option in its browser. 

By bundling Internet Explorer with Windows for free, Microsoft was arguably charging the market rate. In highly competitive markets, economic theory tells us the price should approach marginal cost — which in software is roughly zero. As James Pethokoukis argued, there are many more reasons to be skeptical about the popular narrative surrounding the Microsoft case. The reasons for doubt range across features, products, and markets, including server operating systems, mobile devices, and search engines. Let’s examine a few of them.

Operating Systems

In a 2007 article for Wired titled “I Blew It on Microsoft,” Lawrence Lessig, a Harvard law professor, admits that his predictions about the future of competition in computer operating systems failed to account for the potential of open-source solutions:

We pro-regulators were making an assumption that history has shown to be completely false: That something as complex as an OS has to be built by a commercial entity. Only crazies imagined that volunteers outside the control of a corporation could successfully create a system over which no one had exclusive command. We knew those crazies. They worked on something called Linux.

According to Web Technology Surveys, as of April 2019, about 70 percent of servers use a Linux-based operating system while the remaining 30 percent use Windows.

Mobile

In 2007, Steve Ballmer believed that Microsoft would be the dominant company in smartphones, saying in an interview with USA Today (emphasis added):

There’s no chance that the iPhone is going to get any significant market share. No chance. It’s a $500 subsidized item. They may make a lot of money. But if you actually take a look at the 1.3 billion phones that get sold, I’d prefer to have our software in 60% or 70% or 80% of them, than I would to have 2% or 3%, which is what Apple might get.

But as Ballmer himself noted in 2013, Microsoft was too committed to the Windows platform to fully pivot its focus to mobile:

If there’s one thing I regret, there was a period in the early 2000s when we were so focused on what we had to do around Windows that we weren’t able to redeploy talent to the new device form factor called the phone.

This is another classic example of the innovator’s dilemma. Microsoft enjoyed high profit margins in its Windows business, which caused the company to underrate the significance of the shift from PCs to smartphones.

Search

To further drive home how dependent Microsoft was on its legacy products, this 2009 WSJ piece notes that the company had a search engine ad service in 2000 and shut it down to avoid cannibalizing its core business:

Nearly a decade ago, early in Mr. Ballmer’s tenure as CEO, Microsoft had its own inner Google and killed it. In 2000, before Google married Web search with advertising, Microsoft had a rudimentary system that did the same, called Keywords, running on the Web. Advertisers began signing up. But Microsoft executives, in part fearing the company would cannibalize other revenue streams, shut it down after two months.

Ben Thompson says we should wonder if the case against Microsoft was a complete waste of everyone’s time (and money): 

In short, to cite Microsoft as a reason for antitrust action against Google in particular is to get history completely wrong: Google would have emerged with or without antitrust action against Microsoft; if anything the real question is whether or not Google’s emergence shows that the Microsoft lawsuit was a waste of time and money.

The most obvious implications of the Microsoft case were negative: (1) PCs became bloated with “crapware”; (2) competition in the browser market failed to materialize for many years; (3) PCs were less safe because Microsoft couldn’t bundle security software; and (4) some PC users missed out on first-party software from Microsoft because it couldn’t be bundled with Windows. Weighed against these large costs, the supposed benefits pale in comparison.

Conclusion

In all three cases I’ve discussed in this series — AT&T, IBM, and Microsoft — the real story was not that antitrust enforcers divined the perfect time to break up — or regulate — the dominant tech company. The real story was that slow and then sudden technological change outpaced the organizational inertia of incumbents, permanently displacing the former tech giants from their dominant position in the tech ecosystem. 

The next paradigm shift will be near-impossible to predict. Anyone who actually knew which technology would win, and when, could make far more money implementing that insight than by playing pundit in the media. Regardless of whether the future winner is Google, Facebook, Amazon, Apple, Microsoft, or some unknown startup, antitrust enforcers should remember that the proper goal of public policy in this domain is to maximize total innovation — from firms both large and small. Fetishizing innovation by small companies — and using law enforcement to harass big companies in the hope of an indirect benefit to competition — will make us all worse off in the long run.

The case against AT&T began in 1974. The government alleged that AT&T had monopolized the markets for local and long-distance telephone service, as well as telephone equipment. In 1982, the company entered into a consent decree to be broken up into eight pieces (the seven “Baby Bells” plus the parent company), a process completed in 1984. As a remedy, the government required the company to divest its local operating companies and to guarantee equal access for all long-distance carriers and information service providers.

Source: Mohanram & Nanda

As the chart above shows, the divestiture broke up AT&T’s national monopoly into seven regional monopolies. In general, modern antitrust analysis focuses on the local product market (because that’s the relevant level for consumer decisions). In hindsight, how did breaking up a national monopoly into seven regional monopolies increase consumer choice? It’s also important to note that, prior to its structural breakup, AT&T was a government-granted monopoly regulated by the FCC. Any antitrust remedy should be analyzed in light of the company’s unique relationship with regulators.

Breaking up one national monopoly into seven regional monopolies is not an effective way to boost innovation. And there are economies of scale and network effects to be gained by owning a national network to serve a national market. In the case of AT&T, those economic incentives are why the Baby Bells forged themselves back together in the decades following the breakup.

Source: WSJ

As Clifford Winston and Robert Crandall noted:

Appearing to put Ma Bell back together again may embarrass the trustbusters, but it should not concern American consumers who, in two decades since the breakup, are overwhelmed with competitive options to provide whatever communications services they desire.

Moreover, according to Crandall & Winston (2003), the lower prices following the breakup of AT&T weren’t due to the structural remedy at all (emphasis added):

But on closer examination, the rise in competition and lower long-distance prices are attributable to just one aspect of the 1982 decree; specifically, a requirement that the Bell companies modify their switching facilities to provide equal access to all long-distance carriers. The Federal Communications Commission (FCC) could have promulgated such a requirement without the intervention of the antitrust authorities. For example, the Canadian regulatory commission imposed equal access on its vertically integrated carriers, including Bell Canada, in 1993. As a result, long-distance competition developed much more rapidly in Canada than it had in the United States (Crandall and Hazlett, 2001). The FCC, however, was trying to block MCI from competing in ordinary long-distance services when the AT&T case was filed by the Department of Justice in 1974. In contrast to Canadian and more recent European experience, a lengthy antitrust battle and a disruptive vertical dissolution were required in the U.S. market to offset the FCC’s anti-competitive policies. Thus, antitrust policy did not triumph in this case over restrictive practices by a monopolist to block competition, but instead it overcame anticompetitive policies by a federal regulatory agency.

A quick look at the data on telephone service in the United States, the European Union, and Canada shows that the latter two achieved similar reductions in price without breaking up their national providers.

Source: Crandall & Jackson (2011)

The paradigm shift from wireline to wireless

The technological revolution spurred by the transition from wireline telephone service to wireless telephone service shook up the telecommunications industry in the 1990s. The rapid change caught even some of the smartest players by surprise. In 1980, the management consulting firm McKinsey and Co. produced a report for AT&T predicting how large the cellular market might become by the year 2000. Their forecast said that 900,000 cell phones would be in use. The actual number was more than 109 million.

Along with the rise of broadband, the transition to wireless technology led to an explosion in investment. In contrast, the breakup of AT&T in 1984 had no discernible effect on the trend in industry investment:

The lesson for antitrust enforcers is clear: breaking up national monopolies into regional monopolies is no remedy. In certain cases, mandating equal access to critical networks may be warranted. Most of all, technology shocks will upend industries in ways that regulators — and dominant incumbents — fail to predict.

The Department of Justice began its antitrust case against IBM on January 17, 1969. The DOJ sued under the Sherman Antitrust Act, claiming IBM tried to monopolize the market for “general-purpose digital computers.” The case lasted almost thirteen years, ending on January 8, 1982 when Assistant Attorney General William Baxter declared the case to be “without merit” and dropped the charges. 

The case lasted so long, and expanded in scope so much, that by the time the trial began, “more than half of the practices the government raised as antitrust violations were related to products that did not exist in 1969.” Baltimore law professor Robert Lande said it was “the largest legal case of any kind ever filed.” Yale law professor Robert Bork called it “the antitrust division’s Vietnam.”

As the case dragged on, IBM was faced with increasingly perverse incentives. As NYU law professor Richard Epstein pointed out (emphasis added):

Oddly enough, IBM was able to strengthen its antitrust-related legal position by reducing its market share, which it achieved through raising prices. When the suit was discontinued, that share had fallen dramatically, from about 50 percent of the market in 1969 to 37 percent in 1982. Only after the government suit ended did IBM lower its prices in order to increase market share.

Source: Levy & Welzer

In an interview with Vox, Tim Wu claimed that without the IBM case, Apple wouldn’t exist and we might still be using mainframe computers (emphasis added):

Vox: You said that Apple wouldn’t exist without the IBM case.

Wu: Yeah, I did say that. The case against IBM took 13 years and we didn’t get a verdict but in that time, there was the “policeman at the elbow” effect. IBM was once an all-powerful company. It’s not clear that we would have had an independent software industry, or that it would have developed that quickly, the idea of software as a product, [without this case]. That was one of the immediate benefits of that excavation.

And then the other big one is that it gave a lot of room for the personal computer to get started, and the software that surrounds the personal computer — two companies came in, Apple and Microsoft. They were sort of born in the wake of the IBM lawsuit. You know they were smart guys, but people did need the pressure off their backs.

Nobody is going to start in the shadow of Facebook and get anywhere. Snap’s been the best, but how are they doing? They’ve been halted. I think it’s a lot harder to imagine this revolutionary stuff that happened in the ’80s. If IBM had been completely unwatched by regulators, by enforcement, doing whatever they wanted, I think IBM would have held on and maybe we’d still be using mainframes, or something — a very different situation.

Steven Sinofsky, a former Microsoft executive and current Andreessen Horowitz board partner, had a different take on the matter, attributing IBM’s (belated) success in PCs to its utter failure in minicomputers (emphasis added):

IBM chose to prevent third parties from interoperating with mainframes sometimes at crazy levels (punch card formats). And then chose to defend until the end their business model of leasing … The minicomputer was a direct threat not because of technology but because of those attributes. I’ve heard people say IBM went into PCs because the antitrust loss caused them to look for growth or something. Ha. PCs were spun up because IBM was losing Minis. But everything about the PC was almost a fluke organizationally and strategically. The story of IBM regulation is told as though PCs exist because of the case.

The more likely story is that IBM got swamped by the paradigm shift from mainframes to PCs. IBM was dominant in mainframe computers which were sold to the government and large enterprises. Microsoft, Intel, and other leaders in the PC market sold to small businesses and consumers, which required an entirely different business model than IBM was structured to implement.

ABB – Always Be Bundling (Or Unbundling)

“There’s only two ways I know of to make money: bundling and unbundling.” – Jim Barksdale

In 1969, IBM unbundled its software and services from hardware sales. As many industry observers note, this action precipitated the rise of the independent software development industry. But would this have happened regardless of whether there was an ongoing antitrust case? Given that bundling and unbundling is ubiquitous in the history of the computer industry, the answer is likely yes.

As the following charts show, IBM first created an integrated solution in the mainframe market, controlling everything from raw materials and equipment to distribution and service. When PCs disrupted mainframes, the entire value chain was unbundled. Later, Microsoft bundled its operating system with applications software. 

Source: Clayton Christensen

The first smartphone to disrupt the PC market was the Apple iPhone — an integrated solution. And once the technology became “good enough” to meet the average consumer’s needs, Google modularized everything except the operating system (Android) and the app store (Google Play).

Source: SlashData
Source: Jake Nielson

Another key prong in Tim Wu’s argument that the government served as an effective “policeman at the elbow” in the IBM case is that the company adopted an open model when it entered the PC market and did not require an exclusive license from Microsoft to use its operating system. But exclusivity is only one term in a contract negotiation. In an interview with Playboy magazine in 1994, Bill Gates explained how he was able to secure favorable terms from IBM (emphasis added):

Our restricting IBM’s ability to compete with us in licensing MS-DOS to other computer makers was the key point of the negotiation. We wanted to make sure only we could license it. We did the deal with them at a fairly low price, hoping that would help popularize it. Then we could make our move because we insisted that all other business stay with us. We knew that good IBM products are usually cloned, so it didn’t take a rocket scientist to figure out that eventually we could license DOS to others. We knew that if we were ever going to make a lot of money on DOS it was going to come from the compatible guys, not from IBM. They paid us a fixed fee for DOS. We didn’t get a royalty, even though we did make some money on the deal. Other people paid a royalty. So it was always advantageous to us, the market grew and other hardware guys were able to sell units.

In this version of the story, IBM refrained from demanding an exclusive license from Microsoft not because it was fearful of antitrust enforcers but because Microsoft made significant concessions on price and capped its upside by agreeing to a fixed fee rather than a royalty. These economic and technical explanations for why IBM wasn’t able to leverage its dominant position in mainframes into the PC market are more consistent with the evidence than Wu’s “policeman at the elbow” theory.
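The economics Gates describes can be sketched with a toy calculation. The numbers below are invented for illustration (the actual contract terms are not public in this detail): a one-time fixed fee from IBM caps what IBM pays, while per-unit royalties from the "compatible guys" scale with the clone market, which is where Gates expected the real money to be.

```python
# Hypothetical illustration of the DOS licensing structure described above.
# All dollar figures are invented for the sketch, not actual contract terms.
FIXED_FEE_FROM_IBM = 80_000   # one-time payment from IBM (hypothetical)
ROYALTY_PER_CLONE = 15        # per-copy royalty from clone makers (hypothetical)

def microsoft_dos_revenue(clone_units_sold: int) -> int:
    """Total DOS revenue: fixed fee from IBM plus royalties from clones."""
    return FIXED_FEE_FROM_IBM + ROYALTY_PER_CLONE * clone_units_sold

# As the compatible market grows, royalty income dwarfs the fixed fee.
for units in (0, 100_000, 1_000_000):
    print(f"{units:>9} clone units -> ${microsoft_dos_revenue(units):,}")
```

The point of the sketch is the asymmetry: IBM's low fixed fee helped popularize DOS at little cost to IBM, while Microsoft's royalty stream from clone makers grew with the overall market.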

In my next post, I will discuss the other major antitrust case that came to an end in 1982: AT&T.

Source: Benedict Evans

[N]ew combinations are, as a rule, embodied, as it were, in new firms which generally do not arise out of the old ones but start producing beside them; … in general it is not the owner of stagecoaches who builds railways. – Joseph Schumpeter, January 1934

Elizabeth Warren wants to break up the tech giants — Facebook, Google, Amazon, and Apple — claiming they have too much power and represent a danger to our democracy. As part of our response to her proposal, we shared a couple of headlines from 2007 claiming that MySpace had an unassailable monopoly in the social media market.

Tommaso Valletti, the chief economist of the Directorate-General for Competition (DG COMP) of the European Commission, said, in what we assume was a reference to our posts, “they go on and on with that single example to claim that [Facebook] and [Google] are not a problem 15 years later … That’s not what I would call an empirical regularity.”

We appreciate the invitation to show that prematurely dubbing companies “unassailable monopolies” is indeed an empirical regularity.

It’s Tough to Make Predictions, Especially About the Future of Competition in Tech

No one is immune to this failure of foresight. Antitrust regulators often take a static view of competition, failing to anticipate the dynamic technological forces that will upend market structure and competition.

Scientists and academics make a different kind of error. They are driven by the need to satisfy their curiosity rather than to satisfy shareholders. Upon inventing a new technology or discovering a new scientific truth, academics often fail to see the commercial implications of their findings.

Maybe the titans of industry don’t make these kinds of mistakes because they have skin in the game? The profit and loss statement is certainly a merciless master. But it does not give CEOs the power of premonition. Corporate executives hailed as visionaries in one era often become blinded by their success, failing to see impending threats to their company’s core value propositions.

Furthermore, it’s often hard, as outside observers, to tell after the fact whether business leaders simply didn’t see a tidal wave of disruption coming or, worse, saw it coming but were unable to steer their bureaucratic, slow-moving ships to safety. Either way, the outcome is the same.

Here’s the pattern we observe over and over: extreme success in one context makes it difficult to predict how and when the next paradigm shift will occur in the market. Incumbents become less innovative as they get lulled into stagnation by high profit margins in established lines of business. (This is essentially the thesis of Clay Christensen’s The Innovator’s Dilemma).

Even if the anti-tech populists are powerless to make predictions, history does offer us some guidance about the future. We have seen time and again that apparently unassailable monopolists are quite effectively assailed by technological forces beyond their control.

PCs

Source: Horace Dediu

Jan 1977: Commodore PET released

Jun 1977: Apple II released

Aug 1977: TRS-80 released

Feb 1978: “I.B.M. Says F.T.C. Has Ended Its Typewriter Monopoly Study” (NYT)

Mobile

Source: Comscore

Mar 2000: Palm IPOs at a $53 billion valuation

Sep 2006: “Everyone’s always asking me when Apple will come out with a cellphone. My answer is, ‘Probably never.’” – David Pogue (NYT)

Apr 2007: “There’s no chance that the iPhone is going to get any significant market share.” – Steve Ballmer (USA TODAY)

Jun 2007: iPhone released

Nov 2007: “Nokia: One Billion Customers—Can Anyone Catch the Cell Phone King?” (Forbes)

Sep 2013: “Microsoft CEO Ballmer Bids Emotional Farewell to Wall Street” (Reuters)

If there’s one thing I regret, there was a period in the early 2000s when we were so focused on what we had to do around Windows that we weren’t able to redeploy talent to the new device form factor called the phone.

Search

Source: Distilled

Mar 1998: “How Yahoo! Won the Search Wars” (Fortune)

Once upon a time, Yahoo! was an Internet search site with mediocre technology. Now it has a market cap of $2.8 billion. Some people say it’s the next America Online.

Sep 1998: Google founded

Instant Messaging

Sep 2000: “AOL Quietly Linking AIM, ICQ” (ZDNet)

AOL’s dominance of instant messaging technology, the kind of real-time e-mail that also lets users know when others are online, has emerged as a major concern of regulators scrutinizing the company’s planned merger with Time Warner Inc. (twx). Competitors to Instant Messenger, such as Microsoft Corp. (msft) and Yahoo! Inc. (yhoo), have been pressing the Federal Communications Commission to force AOL to make its services compatible with competitors’.

Dec 2000: “AOL’s Instant Messaging Monopoly?” (Wired)

Dec 2015: Report for the European Parliament

There have been isolated examples, as in the case of obligations of the merged AOL / Time Warner to make AOL Instant Messenger interoperable with competing messaging services. These obligations on AOL are widely viewed as having been a dismal failure.

Oct 2017: AOL shuts down AIM

Jan 2019: “Zuckerberg Plans to Integrate WhatsApp, Instagram and Facebook Messenger” (NYT)

Retail

Source: Seeking Alpha

May 1997: Amazon IPO

Mar 1998: American Booksellers Association files antitrust suit against Borders, B&N

Feb 2005: Amazon Prime launches

Jul 2006: “Breaking the Chain: The Antitrust Case Against Wal-Mart” (Harper’s)

Feb 2011: “Borders Files for Bankruptcy” (NYT)

Social

Feb 2004: Facebook founded

Jan 2007: “MySpace Is a Natural Monopoly” (TechNewsWorld)

Seventy percent of Yahoo 360 users, for example, also use other social networking sites — MySpace in particular. Ditto for Facebook, Windows Live Spaces and Friendster … This presents an obvious, long-term business challenge to the competitors. If they cannot build up a large base of unique users, they will always be on MySpace’s periphery.

Feb 2007: “Will Myspace Ever Lose Its Monopoly?” (Guardian)

Jun 2011: “Myspace Sold for $35m in Spectacular Fall from $12bn Heyday” (Guardian)

Music

Source: RIAA

Dec 2003: “The subscription model of buying music is bankrupt. I think you could make available the Second Coming in a subscription model, and it might not be successful.” – Steve Jobs (Rolling Stone)

Apr 2006: Spotify founded

Jul 2009: “Apple’s iPhone and iPod Monopolies Must Go” (PC World)

Jun 2015: Apple Music announced

Video

Source: OnlineMBAPrograms

Apr 2003: Netflix reaches one million subscribers for its DVD-by-mail service

Mar 2005: FTC blocks Blockbuster/Hollywood Video merger

Sep 2006: Amazon launches Prime Video

Jan 2007: Netflix streaming launches

Oct 2007: Hulu launches

May 2010: Hollywood Video’s parent company files for bankruptcy

Sep 2010: Blockbuster files for bankruptcy

The Only Winning Move Is Not to Play

Predicting the future of competition in the tech industry is such a fraught endeavor that even articles about how hard it is to make predictions include incorrect predictions. The authors just cannot help themselves. A March 2012 BBC article “The Future of Technology… Who Knows?” derided the naysayers who predicted doom for Apple’s retail store strategy. Its kicker?

And that is why when you read that the Blackberry is doomed, or that Microsoft will never make an impression on mobile phones, or that Apple will soon dominate the connected TV market, you need to take it all with a pinch of salt.

But BlackBerry was doomed, and Microsoft never made an impression on mobile phones. (Half credit for Apple TV, which currently has about a 15% market share.)

Nobel Prize-winning economist Paul Krugman wrote a piece for Red Herring magazine (seriously) in June 1998 with the title “Why most economists’ predictions are wrong.” Headline be damned, near the end of the article he made the following prediction:

The growth of the Internet will slow drastically, as the flaw in “Metcalfe’s law”—which states that the number of potential connections in a network is proportional to the square of the number of participants—becomes apparent: most people have nothing to say to each other! By 2005 or so, it will become clear that the Internet’s impact on the economy has been no greater than the fax machine’s.

Robert Metcalfe himself predicted in a 1995 column that the Internet would “go spectacularly supernova and in 1996 catastrophically collapse.” After pledging to “eat his words” if the prediction did not come true, “in front of an audience, he put that particular column into a blender, poured in some water, and proceeded to eat the resulting frappe with a spoon.”
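The flaw Krugman identifies is with the premise, not the arithmetic: the number of potential pairwise connections in a network of n participants really does grow quadratically. A minimal sketch of that count (the function name is ours, for illustration):

```python
# Metcalfe's law: a network of n participants has n * (n - 1) / 2
# distinct potential pairwise connections, which grows on the order of n^2.
def potential_connections(n: int) -> int:
    """Count the distinct pairs among n network participants."""
    return n * (n - 1) // 2

for n in (10, 100, 1_000):
    print(f"{n:>5} participants -> {potential_connections(n):,} potential connections")
```

Growing the network tenfold multiplies the potential connections roughly a hundredfold; Krugman's bet was that most of those connections would carry no value, not that the formula was wrong.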

A Change Is Gonna Come

Benedict Evans, a venture capitalist at Andreessen Horowitz, has the best summary of why competition in tech is especially difficult to predict:

IBM, Microsoft and Nokia were not beaten by companies doing what they did, but better. They were beaten by companies that moved the playing field and made their core competitive assets irrelevant. The same will apply to Facebook (and Google, Amazon and Apple).

Elsewhere, Evans tried to reassure his audience that we will not be stuck with the current crop of tech giants forever:

With each cycle in tech, companies find ways to build a moat and make a monopoly. Then people look at the moat and think it’s invulnerable. They’re generally right. IBM still dominates mainframes and Microsoft still dominates PC operating systems and productivity software. But… It’s not that someone works out how to cross the moat. It’s that the castle becomes irrelevant. IBM didn’t lose mainframes and Microsoft didn’t lose PC operating systems. Instead, those stopped being ways to dominate tech. PCs made IBM just another big tech company. Mobile and the web made Microsoft just another big tech company. This will happen to Google or Amazon as well. Unless you think tech progress is over and there’ll be no more cycles … It is deeply counter-intuitive to say ‘something we cannot predict is certain to happen’. But this is nonetheless what’s happened to overturn pretty much every tech monopoly so far.

If this time is different — or if there are more false negatives than false positives in the monopoly prediction game — then the advocates for breaking up Big Tech should try to make that argument instead of falling back on “big is bad” rhetoric. As for us, we’ll bet that we have not yet reached the end of history — tech progress is far from over.