A new scholarly study of economic concentration sheds further light on the flawed nature of the Neo-Brandeisian claim that the United States has a serious “competition problem” born of decades of increasing concentration and ineffective antitrust enforcement (see here and here, for example). In a recent article, economist Yueran Ma—assistant professor at the University of Chicago’s Booth School of Business—found that economies of scale (an efficiency) were associated with a century-long, economy-wide rise in U.S. concentration in economic activities (not antitrust markets), accompanied by growth in output. In particular, Ma explained (emphasis added):
New research observing 100 years of concentration in economic activities and investment in research and development shows that the dominance of large businesses has been increasing for at least a century and, as Marx conjectured, may be a feature of the increasingly stronger economies of scale that accompany industrial development. . . .
To understand the broad historical currents of concentration, we collected financial information of all US corporations by size groups for the past 100 years. . . .
To be clear, our focus is not market concentration for a particular product, which would require defining markets based on consumption activities. Instead, our focus is the business size distribution in the US, namely the extent to which larger businesses dominate in the total volume of production activities across the economy. . . .
The data reveals a persistent rise in the dominance of the top 1 percent and top 0.1 percent of businesses in the US. From 1918 to 1975, the SOI provided size groups sorted by net income (green line with circles). Starting in 1959, the SOI also provided size groups sorted by sales (red line with diamonds). The longest and most comprehensive size groups are sorted by assets, available since 1931 (blue line with triangles). No matter the measure you choose, the long-run increase in corporate concentration is clear. . . .
Just as Stigler, Marx and Lenin had predicted, the reason for increased concentration appears to be economies of scale. Among different industries, we find that the timing and the degree of rising concentration align closely with rising investment in research and development (R&D) and information technology (IT), measured using additional data from the Bureau of Economic Analysis (BEA). These types of investments usually require a certain degree of scale due to upfront spending, while also producing technological changes that enhance economies of scale. Accordingly, we use investment intensity in R&D and IT as a general indicator of firms exploiting economies of scale. . . .
We also find that increases in concentration are positively associated with industry growth. In particular, over the medium term (e.g., twenty years), industries that experience higher increases in concentration are also the ones that experience higher growth in real gross output. Correspondingly, their shares of economic output expand as well. . . .
A[ ] natural question is whether regulatory policies and antitrust enforcement drive the main trends we find. For instance, regulatory restrictions on interstate banking could have a direct impact on the size of banks (and we indeed observe rising concentration in banking when these restrictions were lifted). In most other sectors, we are not aware of policies that align with the patterns of rising concentration in our data. The past century witnessed several regimes of antitrust enforcement—however, rising corporate concentration has been a secular trend throughout these different antitrust regimes. We do not observe a significant relationship between corporate concentration in our data and standard aggregate antitrust enforcement measures, such as the number of antitrust cases filed by the Department of Justice (DOJ) or the budget of the DOJ’s antitrust division. Overall, we do not find evidence that antitrust shapes the economy-wide business size distribution, although it could have a more visible impact on the market for a particular product (which is closer to the domain of antitrust analyses).
Even if higher concentration in production activities comes from economies of scale, some contemporary observers fear that economies of scale will ultimately weaken competition and cultivate monopoly power (Lenin highlighted such concerns as well). Analyzing this question requires reliable measurement of market power. So far, most studies do not find rising markups (the standard measure of market power) before the 1980s, and some argue that markups have increased since the 1980s. Combined with our findings, the evidence suggests that stronger economies of scale [do] not always lead to stronger market power. It is possible that such a link may exist under certain conditions, and future research could shed more light on this topic.
In broad terms, Ma’s study describes a long-term rise in economic concentration (again, something entirely different from antitrust-relevant market concentration) in tandem with substantial increases in economies of scale and output expansion—overall, a story of long-term welfare enhancement. Antitrust-enforcement levels are not portrayed as significantly related to this trend, and there is no showing that rising economies of scale inevitably enhance market power. (Even possible increases in markups, whose existence has been contested, do not necessarily reflect an increase in market power, see here and here.)
Admittedly, Ma was engaged in a positive analysis of concentration, not a normative assessment. But her research certainly lends no support to the normative neo-Brandeisian notion that a drastic interventionist-minded overhaul of antitrust is required to address major competitive ills. To the contrary, one could logically infer that a dramatic rise in antitrust interventionism is not only uncalled for, but it could threaten the beneficial nature of rising economies of scale and output that have been shown to characterize the U.S. economy. One would hope that this inference would give Congress and U.S. antitrust enforcers pause before they embark on a novel interventionist path.
Why do digital industries routinely lead to one company capturing a very large share of the market (at least if one defines markets narrowly)? To anyone familiar with competition-policy discussions, the answer might seem obvious: network effects, scale-related economies, and other barriers to entry lead to winner-take-all dynamics in platform industries. Accordingly, it is believed that the first platform to successfully unlock a given online market enjoys a decisive first-mover advantage.
But are network effects and the like the only ways to explain why these markets look like this? While there is no definitive answer, scholars routinely overlook an alternative explanation that tends to undercut the narrative that tech markets have become non-contestable.
The alternative model is simple: faced with zero prices and the almost complete absence of switching costs, users have every reason to join their preferred platform. If user preferences are relatively uniform and one platform has a meaningful quality advantage, then there is every reason to expect that most consumers will join the same one—even though the market remains highly contestable. On the other side of the equation, because platforms face very few capacity constraints, there are few limits to a given platform’s growth. As will be explained throughout this piece, this intuition is as old as economics itself.
The Bertrand Paradox
In 1883, French mathematician Joseph Bertrand published a powerful critique of two of the most high-profile economic thinkers of his time: the late Antoine Augustin Cournot and Léon Walras (it would be another seven years before Alfred Marshall published his famous Principles of Economics).
Bertrand criticized several of Cournot and Walras’ widely accepted findings. This included Cournot’s conclusion that duopoly competition would lead to prices above marginal cost—or, in other words, that duopolies were imperfectly competitive.
By reformulating the problem slightly, Bertrand arrived at the opposite conclusion. He argued that each firm’s incentive to undercut its rival would ultimately lead to marginal-cost pricing, with one seller potentially capturing the entire market:
There is a decisive objection [to Cournot’s model]: According to his hypothesis, no [supracompetitive] equilibrium is possible. There is no limit to price decreases; whatever the joint price being charged by firms, a competitor could always undercut this price and, with few exceptions, attract all consumers. If the competitor is allowed to get away with this [i.e. the rival does not react], it will double its profits.
This result is mainly driven by the assumption that, unlike in Cournot’s model, firms can immediately respond to their rival’s chosen price/quantity. In other words, Bertrand implicitly framed the competitive process as price competition, rather than quantity competition (under price competition, firms do not face any capacity constraints and they cannot commit to producing given quantities of a good):
If Cournot’s calculations mask this result, it is because of a remarkable oversight. Referring to them as D and D’, Cournot deals with the quantities sold by each of the two competitors and treats them as independent variables. He assumes that if one were to change by the will of one of the two sellers, the other one could remain fixed. The opposite is evidently true.
This later came to be known as the “Bertrand paradox”—the notion that duopoly-market configurations can produce the same outcome as perfect competition (i.e., P=MC).
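Bertrand’s undercutting logic can be stated compactly in modern notation (the symbols below are mine, not Bertrand’s). With two firms sharing a common marginal cost $c$, and consumers buying only from the cheaper seller, firm 1’s demand is:

$$
D_1(p_1, p_2) =
\begin{cases}
D(p_1) & \text{if } p_1 < p_2 \\
\tfrac{1}{2}\,D(p_1) & \text{if } p_1 = p_2 \\
0 & \text{if } p_1 > p_2
\end{cases}
$$

Whenever $p_2 > c$, firm 1 profits by setting $p_1 = p_2 - \varepsilon$ and capturing the entire market; the same holds symmetrically for firm 2. The only prices from which neither firm wants to deviate are $p_1 = p_2 = c$: a duopoly pricing at marginal cost, exactly as under perfect competition.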
But while Bertrand’s critique was ostensibly directed at Cournot’s model of duopoly competition, his underlying point was much broader. Above all, Bertrand seemed preoccupied with the notion that expressing economic problems mathematically merely gives them a veneer of accuracy. In that sense, he was one of the first economists (at least to my knowledge) to argue that the choice of assumptions has a tremendous influence on the predictions of economic models, potentially rendering them unreliable:
On other occasions, Cournot introduces assumptions that shield his reasoning from criticism—scholars can always present problems in a way that suits their reasoning.
All of this is not to say that Bertrand’s predictions regarding duopoly competition necessarily hold in real-world settings; evidence from experimental settings is mixed. Instead, the point is epistemological. Bertrand’s reasoning was groundbreaking because he ventured that market structures are not the sole determinants of consumer outcomes. More broadly, he argued that assumptions regarding the competitive process hold significant sway over the results that a given model may produce (and, as a result, over normative judgements concerning the desirability of given market configurations).
The Theory of Contestable Markets
Bertrand is certainly not the only economist to have suggested market structures alone do not determine competitive outcomes. In the early 1980s, William Baumol (and various co-authors) went one step further. Baumol argued that, under certain conditions, even monopoly market structures could deliver perfectly competitive outcomes. This thesis thus rejected the Structure-Conduct-Performance (“SCP”) Paradigm that dominated policy discussions of the time.
Baumol’s main point was that industry structure is not the main driver of market “contestability,” which is the key determinant of consumer outcomes. In his words:
In the limit, when entry and exit are completely free, efficient incumbent monopolists and oligopolists may in fact be able to prevent entry. But they can do so only by behaving virtuously, that is, by offering to consumers the benefits which competition would otherwise bring. For every deviation from good behavior instantly makes them vulnerable to hit-and-run entry.
For instance, it is widely accepted that “perfect competition” leads to low prices because firms are price-takers; if one does not sell at marginal cost, it will be undercut by rivals. Observers often assume this is due to the number of independent firms on the market. Baumol suggests this is wrong. Instead, the result is driven by the sanction that firms face for deviating from competitive pricing.
In other words, numerous competitors are a sufficient, but not a necessary, condition for competitive pricing. A monopoly can produce the same outcome when there is a credible threat of entry and an incumbent’s deviation from competitive pricing would be sanctioned. This is notably the case when barriers to entry are extremely low.
Take this hypothetical example from the world of cryptocurrencies. It is largely irrelevant to a user whether there are few or many crypto exchanges on which to trade coins, nonfungible tokens (NFTs), etc. What does matter is that at least one exchange meets the user’s needs in terms of both price and quality of service. This could happen because there are many competing exchanges, or because a failure by the few (or even one) existing exchanges to meet those needs would attract the entry of others to which users could readily switch—thus keeping the behavior of the existing exchanges in check.
This has far-reaching implications for antitrust policy, as Baumol was quick to point out:
This immediately offers what may be a new insight on antitrust policy. It tells us that a history of absence of entry in an industry and a high concentration index may be signs of virtue, not of vice. This will be true when entry costs in our sense are negligible.
Given the above, Baumol surmised that industry structure is driven by endogenous factors—such as firms’ cost structures—rather than by the intensity of the competition firms face. For instance, scale economies might make monopoly (or another concentrated structure) the most efficient configuration in some industries. But so long as rivals can sanction incumbents for failing to compete, the market remains contestable. Accordingly, at least in some industries, both the most efficient and the most contestable market configuration may entail some level of concentration.
To put this last point in even more concrete terms, online platform markets may have features that make scale (and large market shares) efficient. If so, there is every reason to believe that competition could lead to more, not less, concentration.
How Contestable Are Digital Markets?
The insights of Bertrand and Baumol have important ramifications for contemporary antitrust debates surrounding digital platforms. Indeed, it is critical to ascertain whether the (relatively) concentrated market structures we see in these industries are a sign of superior efficiency (and are consistent with potentially intense competition), or whether they are merely caused by barriers to entry.
The barrier-to-entry explanation has been repeated ad nauseam in recent scholarly reports, competition decisions, and pronouncements by legislators. There is thus little need to restate that thesis here. On the other hand, the contestability argument is almost systematically ignored.
Several factors suggest that online platform markets are far more contestable than critics routinely make them out to be.
First and foremost, consumer switching costs are extremely low for most online platforms. To cite but a few examples: Changing your default search engine requires at most a couple of clicks; joining a new social network can be done by downloading an app and importing your contacts to the app; and buying from an alternative online retailer is almost entirely frictionless, thanks to intermediaries such as PayPal.
These zero or near-zero switching costs are compounded by consumers’ ability to “multi-home.” In simple terms, joining TikTok does not require users to close their Facebook account. And the same applies to other online services. As a result, there is almost no opportunity cost to join a new platform. This further reduces the already tiny cost of switching.
Decades of app development have greatly improved the quality of applications’ graphical user interfaces (GUIs), to such an extent that the cost of learning to use a new app is mostly insignificant. Nowhere is this more apparent than for social media and sharing-economy apps (it may be less true for productivity suites that enable more complex operations). For instance, remembering a couple of intuitive swipe motions is almost all that is required to use TikTok. Likewise, ridesharing and food-delivery apps merely require users to be familiar with the general features of other map-based applications. It is almost unheard of for users to complain about usability—something that would have seemed impossible in the early 21st century, when complicated interfaces still plagued most software.
A second important argument in favor of contestability is that, by and large, online platforms face only limited capacity constraints. In other words, platforms can expand output rapidly (though not necessarily costlessly).
Perhaps the clearest example of this is the sudden rise of Zoom in early 2020. As a result of the COVID-19 pandemic, Zoom went from around 10 million daily active users in early 2020 to more than 300 million by late April 2020. Despite being a relatively data-intensive service, Zoom did not struggle to meet the new demand generated by this more-than-30-fold increase in its user base. The service never had to turn away users, reduce call quality, or significantly increase its price. In short, capacity largely followed demand for its service. Online industries thus seem closer to the Bertrand model of competition, in which the best platform can almost immediately serve any consumers who demand its services.
Of course, none of this should be construed to declare that online markets are perfectly contestable. The central point is, instead, that critics are too quick to assume they are not. Take the following examples.
Scholars routinely cite the putatively strong concentration of digital markets to argue that big tech firms do not face strong competition, but this is a non sequitur. As Bertrand and Baumol (and others) show, what matters is not whether digital markets are concentrated, but whether they are contestable. If a superior rival could rapidly gain user traction, this alone will discipline the behavior of incumbents.
Markets where incumbents do not face significant entry from competitors are just as consistent with vigorous competition as they are with barriers to entry. Rivals could decline to enter either because incumbents have aggressively improved their product offerings or because they are shielded by barriers to entry (as critics suppose). The former is consistent with competition, the latter with monopoly slack.
Similarly, it would be wrong to presume, as many do, that concentration in online markets is necessarily driven by network effects and other scale-related economies. As ICLE scholars have argued elsewhere (here, here and here), these forces are not nearly as decisive as critics assume (and it is debatable that they constitute barriers to entry).
Finally, and perhaps most importantly, this piece has argued that many factors could explain the relatively concentrated market structures that we see in digital industries. The absence of switching costs and capacity constraints are but two such examples. These explanations, overlooked by many observers, suggest digital markets are more contestable than is commonly perceived.
In short, critics’ failure to meaningfully grapple with these issues serves to shape the prevailing zeitgeist in tech-policy debates. Cournot and Bertrand’s intuitions about oligopoly competition may be more than a century old, but they continue to be tested empirically. It is about time those same standards were applied to tech-policy debates.
[TOTM: The following is part of a digital symposium by TOTM guests and authors on the law, economics, and policy of the antitrust lawsuits against Google. The entire series of posts is available here.]
Judges sometimes claim that they do not pick winners when they decide antitrust cases. Nothing could be further from the truth.
Competitive conduct by its nature harms competitors, and so if antitrust were merely to prohibit harm to competitors, antitrust would then destroy what it is meant to promote.
What antitrust prohibits, therefore, is not harm to competitors but rather harm to competitors that fails to improve products. Only in this way is antitrust able to distinguish between the good firm that harms competitors by making superior products that consumers love and that competitors cannot match and the bad firm that harms competitors by degrading their products without offering consumers anything better than what came before.
That means, however, that antitrust must pick winners: antitrust must decide what is an improvement and what not. And a more popular search engine is a clear winner.
But one should not take its winningness for granted. For once upon a time there was another winner that the courts always picked, blocking antitrust case after antitrust case. Until one day the courts stopped picking it.
That was the economy of scale.
The Structure of the Google Case
Like all antitrust cases that challenge the exercise of power, the government’s case against Google alleges denial of an input to competitors in some market. Here the input is default search status in smartphones, the competitors are rival search providers, and the market is search advertising. The basic structure of the case is depicted in the figure below.
Although brought as a monopolization case under Section 2 of the Sherman Act, this is at heart an exclusive dealing case of the sort normally brought under Section 1 of the Sherman Act: the government’s core argument is that Google uses contracts with smartphone makers, pursuant to which the smartphone makers promise to make Google, and not competitors, the search default, to harm competing search advertising providers and by extension competition in the search advertising market.
The government must show anticompetitive conduct, monopoly power, and consumer harm in order to prevail.
Let us assume that there is monopoly power. Google has more than 70% of the search advertising market, which is in the zone normally required to prove that element of a monopolization claim.
The problem of anticompetitive conduct is only slightly more difficult.
Anticompetitive conduct is only ever one thing in antitrust: denial of an essential input to a competitor. There is no other way to harm rivals.
(To be sure, antitrust prohibits harm to competition, not competitors, but that means only that harm to competitors is necessary but insufficient for liability. The consumer-harm requirement decides whether the requisite harm to competitors is also harm to competition.)
It is not entirely clear just how important default search status really is to running a successful search engine, but let us assume that it is essential, as the government suggests.
Then the question whether Google’s contracts are anticompetitive turns on how much of the default search input Google’s contracts foreclose to rival search engines. If a lot, then the rivals are badly harmed. If a little, then there may be no harm at all.
The answer here is that there is a lot of foreclosure, at least if the government’s complaint is to be believed. Through its contracts with Apple and makers of Android phones, Google has foreclosed default search status to rivals on virtually every single smartphone.
That leaves consumer harm. And here is where things get iffy.
Usage as a Product Improvement: A Very Convenient Argument
The inquiry into consumer harm evokes measurements of the difference between demand curves and price lines, or extrapolations of compensating and equivalent variation using indifference curves painstakingly pieced together based on the assumptions of revealed preference.
But while the parties may pay experts plenty to spin such yarns, and judges may pretend to listen to them, in the end, for the judges, it always comes down to one question only: did exclusive dealing improve the product?
If it did, then the judge assumes that the contracts made consumers better off and the defendant wins. And if it did not, then off with their heads.
So, does foreclosing all this default search space to competitors make Google search advertising more valuable to advertisers?
Those who leap to Google’s defense say yes, for default search status increases the number of people who use Google’s search engine. And the more people use Google’s search engine, the more Google learns about how best to answer search queries and which advertisements will most interest which searchers. And that ensures that even more people will use Google’s search engine, and that Google will do an even better job of targeting ads on its search engine.
And that in turn makes Google’s search advertising even better: able to reach more people and to target ads more effectively to them.
None of that would happen if defaults were set to other engines and users spurned Google, and so foreclosing default search space to rivals undoubtedly improves Google’s product.
This is a nice argument. Indeed, it is almost too nice, for it seems to suggest that almost anything Google might do to steer users away from competitors and to itself deserves antitrust immunity. Suppose Google were to brandish arms to induce you to run your next search on Google. That would be a crime, but, on this account, not an antitrust crime. For getting you to use Google does make Google better.
The argument that locking up users improves the product is of potential use not just to Google but to any of the many tech companies that run on advertising—Facebook being a notable example—so it potentially immunizes an entire business model from antitrust scrutiny.
It turns out that has happened before.
Economies of Scale as a Product Improvement: Once a Convenient Argument
Once upon a time, antitrust exempted another kind of business for which products improve the more people used them. The business was industrial production, and it differs from online advertising only in the irrelevant characteristic that the improvement that comes with expanding use is not in the quality of the product but in the cost per unit of producing it.
The hallmark of the industrial enterprise is high fixed costs and low marginal costs. The textile mill differs from pre-industrial piecework weaving in that once a $10 million investment in machinery has been made, the mill can churn out yard after yard of cloth for pennies. The pieceworker, by contrast, makes a relatively small up-front investment—the cost of raising up the hovel in which she labors and making her few tools—but spends the same large amount of time to produce each new yard of cloth.
Large fixed costs and low marginal costs lie at the heart of the bounty of the modern age: the more you produce, the lower the unit cost, and so the lower the price at which you can sell your product. This is a recipe for plenty.
But it also means that, so long as consumer demand in a given market is lower than the capacity of any particular plant, driving buyers to a particular seller and away from competitors always improves the product, in the sense that it enables the firm to increase volume and reduce unit cost, and therefore to sell the product at a lower price.
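The arithmetic behind this claim is straightforward. Writing $F$ for fixed cost, $c$ for (constant) marginal cost, and $Q$ for output (notation mine, for illustration), average cost per unit is

$$
AC(Q) = \frac{F}{Q} + c, \qquad \frac{dAC}{dQ} = -\frac{F}{Q^2} < 0,
$$

so average cost falls with every additional unit sold. Steering a rival’s customers to the firm raises $Q$, lowers $AC(Q)$, and thereby lowers the price at which the product can profitably be sold, at least so long as the plant has spare capacity.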
If the promise of the modern age is goods at low prices, then the implication is that antitrust should never punish firms for driving rivals from the market and taking over their customers. Indeed, efficiency requires that only one firm should ever produce in any given market, at least in any market for which a single plant is capable of serving all customers.
For antitrust in the late 19th and early 20th centuries, beguiled as it was by this advantage of size, exclusive dealing, refusals to deal, even the knife in a competitor’s back were all for the better, whether or not they ran afoul of other areas of law, because they allowed industrial enterprises to achieve economies of scale.
It is no accident that, a few notable triumphs aside, antitrust did not come into its own until the mid-1930s, 40 years after its inception, on the heels of an intellectual revolution that explained, for the first time, why it might actually be better for consumers to have more than one seller in a market.
These theories suggested that consumers might care as much about product quality as they do about product cost, and indeed would be willing to abandon a low-cost product for a higher-quality, albeit more expensive, one.
From this perspective, the world of economies of scale and monopoly production was the drab world of Soviet state-owned enterprises churning out one type of shoe, one brand of cleaning detergent, and so on.
The world of capitalism and technological advance, by contrast, was one in which numerous firms produced batches of differentiated products in amounts sometimes too small fully to realize all scale economies, but for which consumers were nevertheless willing to pay because the products better fit their preferences.
What is more, the striving of monopolistically competitive firms to lure away each other’s customers with products that better fit their tastes led to disruptive innovation—“creative destruction” was Schumpeter’s famous term for it—that brought about not just different flavors of the same basic concept but entirely new concepts. The competition to create a better flip phone, for example, would lead inevitably to a whole new paradigm, the smartphone.
This reasoning combined with work in the 1940s and 1950s on economic growth that quantified for the first time the key role played by technological change in the vigor of capitalist economies—the famous Solow residual—to suggest that product improvements, and not the cost reductions that come from capital accumulation and their associated economies of scale, create the lion’s share of consumer welfare. Innovation, not scale, was king.
Antitrust responded by, for the first time in its history, deciding between kinds of product improvements, rather than just in favor of improvements, casting economies of scale out of the category of improvements subject to antitrust immunity, while keeping quality improvements immune.
Casting economies of scale out of the protected product improvement category gave antitrust something to do for the first time. It meant that big firms had to plead more than just the cost advantages of being big in order to obtain license to push their rivals around. And government could now start reliably to win cases, rather than just the odd cause célèbre.
It is this intellectual watershed, and not Thurman Arnold’s tenacity, that was responsible for antitrust’s emergence as a force after World War Two.
Usage-Based Improvements Are Not Like Economies of Scale
The improvements in advertising that come from user growth fall squarely on the quality side of the ledger—the value they create is not due to the ability to average production costs over more ad buyers—and so they count as the kind of product improvements that antitrust continues to immunize today.
But given the pervasiveness of this mode of product improvement in the tech economy—the fact that virtually any tech firm that sells advertising can claim to be improving a product by driving users to itself and away from competitors—it is worth asking whether we have not reached a new stage in economic development in which this form of product improvement ought, like economies of scale, to be denied protection.
Shouldn’t the courts demand more and better innovation of big tech firms than just the same old big-data-driven improvements they serve up year after year?
Galling as it may be to those who, like myself, would like to see more vigorous antitrust enforcement in general, the answer would seem to be “no.” For what induced the courts to abandon antitrust immunity for economies of scale in the mid-20th century was not the mere fact that immunizing economies of scale paralyzed antitrust. Smashing big firms is not, after all, an end in itself.
Instead, monopolistic competition, creative destruction and the Solow residual induced the change, because they suggested both that other kinds of product improvement are more important than economies of scale and, crucially, that protecting economies of scale impedes development of those other kinds of improvements.
A big firm that excludes competitors in order to reach scale economies not only excludes competitors who might have produced an identical or near-identical product, but also excludes competitors who might have produced a better-quality product, one that consumers would have preferred to purchase even at a higher price.
To cast usage-based improvements out of the product improvement fold, a case must be made that excluding competitors in order to pursue such improvements will block a different kind of product improvement that contributes even more to consumer welfare.
If we could say, for example, that suppressing search competitors suppresses more-innovative search engines that ad buyers would prefer, even if those innovative search engines were to lack the advantages that come from having a large user base, then a case might be made that user growth should no longer count as a product improvement immune from antitrust scrutiny.
And even then, the case against usage-based improvements would need to be general enough to justify an epochal change in policy, rather than be limited to a particular technology in a particular lawsuit. For the courts hate to balance in individual cases, statements to the contrary in their published opinions notwithstanding.
But there is nothing in the Google complaint, much less the literature, to suggest that usage-based improvements are problematic in this way. Indeed, much of the value created by the information revolution seems to inhere precisely in its ability to centralize usage.
Americans Keep Voting to Centralize the Internet
In the early days of the internet, theorists mistook its decentralized architecture for a feature, rather than a bug. But internet users have since shown, time and again, that they believe the opposite.
For example, the basic protocols governing email were engineered to allow every American to run his own personal email server.
But Americans hated the freedom that created—not least the spam—and opted instead to get their email from a single server: the one run by Google as Gmail.
The basic protocols governing web traffic were also designed to allow every American to run whatever other communications services he wished—chat, video chat, RSS, webpages—on his own private server in distributed fashion.
But Americans hated the freedom that created—not least having to build and rebuild friend networks across platforms—and they voted instead overwhelmingly to get their social media from a single server: Facebook.
Indeed, the basic protocols governing internet traffic were designed to allow every business to store and share its own data from its own computers, in whatever form.
But American businesses hated that freedom—not least the cost of having to buy and service their own data storage machines—and instead 40% of the internet is now stored and served from Amazon Web Services.
Similarly, advertisers have the option of placing advertisements on the myriad independently-run websites that make up the internet—known in the business as the “open web”—by placing orders through competitive ad exchanges. But advertisers have instead voted mostly to place ads on the handful of highly centralized platforms known as “walled gardens,” including Facebook, Google’s YouTube and, of course, Google Search.
The communications revolution, they say, is all about “bringing people together.” It turns out that’s true.
And that Google should win on consumer harm.
Remember the Telephone
Indeed, the same mid-20th century antitrust that thought so little of economies of scale as a defense immunized usage-based improvements when it encountered them in that most important of internet precursors: the telephone.
The telephone, like most internet services, gets better as usage increases. The more people are on a particular telephone network, the more valuable the network becomes to subscribers.
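The value of being on one big network rather than several small ones can be made concrete with a back-of-the-envelope calculation. A minimal sketch (my illustration, not part of the original argument), using the standard pairwise-connection count often associated with Metcalfe’s law:

```python
# Illustration only (mine, not the post's): count the distinct pairs of
# subscribers who can reach each other on a network of n users. The count
# grows roughly with the square of n, so one unified network is far more
# valuable than the same subscribers split across competing carriers.

def possible_connections(n: int) -> int:
    """Distinct subscriber pairs reachable on a single network of n users."""
    return n * (n - 1) // 2

# One network of 1,000 subscribers allows 499,500 distinct pairs;
# two disconnected networks of 500 allow only 124,750 each (249,500 total).
```

On these (hypothetical) numbers, splitting a phone network in half destroys roughly half its connective value, which is why consumers wanted a single line that could reach everyone.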
Just as with today’s internet services, the advantage of a large user base drove centralization of telephone services a century ago into the hands of a single firm: AT&T. Aside from a few business executives who liked the look of a desk full of handsets, consumers wanted one phone line that they could use to call everyone.
Although the government came close to breaking AT&T up in the early 20th century, the government eventually backed off, because a phone system in which you must subscribe to the right carrier to reach a friend just doesn’t make sense.
Instead, Congress and state legislatures stepped in to take the edge off monopoly by regulating phone pricing. And when antitrust finally did break AT&T up in 1982, it did so in a distinctly regulatory fashion, requiring that AT&T’s parts connect each other’s phone calls, something that Congress reinforced in the Telecommunications Act of 1996.
The message was clear: the sort of usage-based improvements one finds in communications are real product improvements. And antitrust can only intervene if it has a way to preserve them.
The equivalent of interconnection in search—sharing the benefits of usage, in the form of data and attention, among competing search providers—might be feasible. But it is hard to imagine the court in the Google case ordering interconnection without the decades of regulatory experience with the defendant’s operations that the district court could draw upon in the 1982 AT&T case.
The solution for the tech giants today is the same as the solution for AT&T a century ago: to regulate rather than to antitrust.
Microsoft Not to the Contrary, Because Users Were in Common
Parallels to the government’s 1990s-era antitrust case against Microsoft are not to the contrary.
As Sam Weinstein has pointed out to me, Microsoft, like Google, was at heart an exclusive dealing case: Microsoft contracted with computer manufacturers to prevent Netscape Navigator, an early web browser, from serving as the default web browser on Windows PCs.
That prevented Netscape, the argument went, from growing to compete with Windows in the operating system market, much the way Google’s Chrome browser has become a substitute for Windows on low-end notebook computers today.
The D.C. Circuit agreed that default status was an essential input for Netscape as it sought eventually to compete with Windows in the operating system market.
The court also accepted the argument that the exclusive dealing did not improve Microsoft’s operating system product.
This at first seems to contradict the notion that usage improves products, for, like search advertising, operating systems get better as their user bases increase. The more people use an operating system, the more application developers are willing to write for the system, and the better the system therefore becomes.
It seems to follow that keeping competitors off competing operating systems and on Windows made Windows better. If the court nevertheless held Microsoft liable, it must be because the court refused to extend antitrust immunity to usage-based improvements.
The trouble with this line of argument is that it ignores the peculiar thing about the Microsoft case: that while the government alleged that Netscape was a potential competitor of Windows, Netscape was also an application that ran on Windows.
That means that, unlike Google and rival search engines, Windows and Netscape shared users.
So, Microsoft’s exclusive dealing did not increase its user base and therefore could not have improved Windows, at least not by making Windows more appealing for applications developers. Driving Netscape from Windows did not enable developers to reach even one more user. Conversely, allowing Netscape to be the default browser on Windows would not have reduced the number of Windows users, because Netscape ran on Windows.
By contrast, a user who runs a search in Bing does not run the same search simultaneously in Google, and so Bing users are not Google users. Google’s exclusive dealing therefore increases its user base and improves Google’s product, whereas Microsoft’s exclusive dealing served only to reduce Netscape’s user base and degrade Netscape’s product.
Indeed, if letting Netscape be the default browser on Windows was a threat to Windows, it was not because it prevented Microsoft from improving its product, but because Netscape might eventually have become an operating system—indeed, a better operating system than Windows—and consumers and developers, who could be on both at the same time if they wished, might nevertheless have chosen eventually to go with Netscape alone.
Though it does not help the government in the Google case, Microsoft still does offer a beacon of hope for those concerned about size, for Microsoft’s subsequent history reminds us that yesterday’s behemoth is often today’s also-ran.
And the favorable settlement terms Microsoft ultimately used to escape real consequences for its conduct 20 years ago imply that, at least in high-tech markets, we don’t always need antitrust for that to be true.
One of my favorite stories in the ongoing saga over the regulation (and thus the future) of Internet search emerged earlier this week with claims by Google that Microsoft has been copying its answers–using Google search results to bolster the relevance of its own results for certain search terms. The full story from Internet search journalist extraordinaire, Danny Sullivan, is here, with a follow up discussing Microsoft’s response here. The New York Times is also on the case with some interesting comments from a former Googler that feed nicely into the Schumpeterian competition angle (discussed below). And Microsoft consultant (“though on matters unrelated to issues discussed here”) and Harvard Business prof Ben Edelman coincidentally echoes precisely Microsoft’s response in a blog post here.
What I find so great about this story is how it seems to resolve one of the most significant strands of the ongoing debate–although it does so, from Microsoft’s point of view, unintentionally, to be sure.
Here’s what I mean. Back when Microsoft first started being publicly identified as a significant instigator of regulatory and antitrust attention paid to Google, the company, via its chief competition counsel, Dave Heiner, defended its stance in large part on the following ground:
All of this is quite important because search is so central to how people navigate the Internet, and because advertising is the main monetization mechanism for a wide range of Web sites and Web services. Both search and online advertising are increasingly controlled by a single firm, Google. That can be a problem because Google’s business is helped along by significant network effects (just like the PC operating system business). Search engine algorithms “learn” by observing how users interact with search results. Google’s algorithms learn less common search terms better than others because many more people are conducting searches on these terms on Google.
These and other network effects make it hard for competing search engines to catch up. Microsoft’s well-received Bing search engine is addressing this challenge by offering innovations in areas that are less dependent on volume. But Bing needs to gain volume too, in order to increase the relevance of search results for less common search terms. That is why Microsoft and Yahoo! are combining their search volumes. And that is why we are concerned about Google business practices that tend to lock in publishers and advertisers and make it harder for Microsoft to gain search volume. (emphasis added).
Claims of “network effects,” “increasing returns to scale,” and the absence of “minimum viable scale” for competitors run rampant (and unsupported) in the various cases against Google. The TradeComet complaint, for example, claims that
[t]he primary barrier to entry facing vertical search websites is the inability to draw enough search traffic to reach the critical mass necessary to become independently sustainable.
But now we discover (what we should have known all along) that “learning by doing” is not the only way to obtain the data necessary to generate relevant search results: “Learning by copying” works, as well. And there’s nothing wrong with it–in fact, the very process of Schumpeterian creative destruction assumes imitation.
Neither perfect knowledge of the past nor complete awareness of the current state of the arts gives sufficient foresight to indicate profitable action . . . [and] the pervasive effects of uncertainty prevent the ascertainment of actions which are supposed to be optimal in achieving profits. Now the consequence of this is that modes of behavior replace optimum equilibrium conditions as guiding rules of action. First, wherever successful enterprises are observed, the elements common to these observable successes will be associated with success and copied by others in their pursuit of profits or success. “Nothing succeeds like success.”
So on the one hand, I find the hand-wringing about Microsoft’s “copying” of Google’s results to be completely misplaced–just as the pejorative connotations of “embrace and extend” deployed against Microsoft itself, when it was the target of this sort of scrutiny, were bogus. But, at the same time, I see this dynamic essentially decimating Microsoft’s (and others’) claims that Google has an unassailable position because no competitor can ever hope to match its size, and thus its access to information essential to the quality of search results, particularly when it comes to so-called “long-tail” search terms.
Long-tail search terms are queries that are extremely rare and, thus, for which there is little user history (information about which results searchers found relevant and clicked on) to guide future search results. As Ben Edelman writes in his blog post (linked above) on this issue (trotting out, even while implicitly undercutting, the “minimum viable scale” canard):
Of course the reality is that Google’s high market share means Google gets far more searches than any other search engine. And Google’s popularity gives it a real advantage: For an obscure search term that gets 100 searches per month at Google, Bing might get just five or 10. Also, for more popular terms, Google can slice its data into smaller groups — which results are most useful to people from Boston versus New York, which results are best during the day versus at night, and so forth. So Google is far better equipped to figure out what results users favor and to tailor its listings accordingly. Meanwhile, Microsoft needs additional data, such as Toolbar and Related Sites data, to attempt to improve its results in a similar way.
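Edelman’s numbers can be given a rough statistical reading. A minimal sketch (my illustration; the click-through rate and query volumes are hypothetical, not from the post): the precision of a click-through-rate estimate improves only with the square root of the number of observed searches, so a tenfold gap in query volume on a long-tail term translates into only about a threefold gap in estimation precision.

```python
import math

# Illustration only (mine, with hypothetical figures): standard error of an
# estimated click-through rate shrinks as 1/sqrt(n), so Google's 10x volume
# advantage on an obscure query buys it roughly sqrt(10) ~ 3.2x precision,
# a real but not insurmountable edge.

def ctr_standard_error(ctr: float, n: int) -> float:
    """Standard error of a click-through-rate estimate from n observed searches."""
    return math.sqrt(ctr * (1 - ctr) / n)

# For a hypothetical 20% click-through rate:
#   100 searches/month -> standard error 0.04
#    10 searches/month -> standard error ~0.126
```

If anything, this arithmetic anticipates the argument below: the data advantage conferred by scale is large in absolute query counts but much more modest in the precision that actually matters for ranking results.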
But of course the “additional data” that Microsoft has access to here is, to a large extent, the same data that Google has. Danny Sullivan’s follow-up story (also linked above) suggests that Bing doesn’t do all it could to make use of Google’s data: Bing does not, it seems, copy Google search results wholesale, nor does it use user behavior as extensively as it could (by, for example, seeing searches in Google and then logging the next page visited, which would give Bing a pretty good idea which sites in Google’s results users found most relevant). But that doesn’t change the fundamental fact that Microsoft and other search engines can overcome a significant amount of the so-called barrier to entry afforded by Google’s impressive scale simply by imitating much of what Google does (and, one hopes, also innovating enough to offer something better).
Perhaps Google is “better equipped to figure out what users favor.” But it seems to me that only a trivial amount of this advantage is plausibly attributable to Google’s scale instead of its engineering and innovation. The fact that Microsoft can (because of its own impressive scale in various markets) and does take advantage of accessible data to benefit indirectly from Google’s own prowess in search is a testament to the irrelevance of these unfortunately pervasive scale and network-effect arguments.