[TOTM: The following is part of a digital symposium by TOTM guests and authors on Antitrust’s Uncertain Future: Visions of Competition in the New Regulatory Landscape. Information on the authors and the entire series of posts is available here.]
In Free to Choose, Milton Friedman famously noted that there are four ways to spend money[1]:
Spending your own money on yourself. For example, buying groceries or lunch. There is a strong incentive to economize and to get full value.
Spending your own money on someone else. For example, buying a gift for another. There is a strong incentive to economize, but perhaps less to achieve full value from the other person’s point of view. Altruism is admirable, but it differs from value maximization, since—strictly speaking—giving cash would maximize the other’s value. Perhaps the point of a gift is precisely that it is not cash, and thus not the maximization of the other person’s welfare from their own point of view.
Spending someone else’s money on yourself. For example, an expensed business lunch. “Pass me the filet mignon and Chateau Lafite! Do you have one of those menus without any prices?” There is a strong incentive to get maximum utility, but there is little incentive to economize.
Spending someone else’s money on someone else. For example, applying the proceeds of taxes or donations. There may be an indirect desire to see utility achieved, but incentives for quality and cost management are often diminished.
This framework can be criticized. Altruism has a role. Not all motives are selfish. There is an important role for action to help those less fortunate, which might mean, for instance, that a charity gains more utility from category (4) (assisting the needy) than from category (3) (the charity’s holiday party). It always depends on the facts and the context. However, there is certainly a grain of truth in the observation that charity begins at home and that, in the final analysis, people are best at managing their own affairs.
How would this insight apply to data interoperability? The difficult cases of assisting the needy do not arise here: there is no serious sense in which data interoperability does, or does not, result in destitution. Thus, Friedman’s observations seem to ring true: when spending data, those whose data it is seem most likely to maximize its value. This is especially so where collection of data responds to incentives—that is, the amount of data collected and processed responds to how much control over the data is possible.
The obvious exception to this would be a case of market power. If there is a monopoly with persistent barriers to entry, then the incentive may not be to maximize total utility, but instead to limit data handling so that a higher price can be charged for the lesser amount of data that remains available. This has arguably been seen with some data-handling rules: the “Jedi Blue” agreement on advertising bidding, Apple’s Intelligent Tracking Prevention and App Tracking Transparency, and Google’s proposed Privacy Sandbox all restrict the ability of others to handle data. Indeed, they may fail Friedman’s framework, since they amount to the platform deciding how to spend others’ data—in this case, by not allowing them to collect and process it at all.
It should be emphasized, though, that this is a special case. It depends on market power, and existing antitrust and competition laws speak to it. The courts will decide whether cases like Daily Mail v Google and Texas et al. v Google show illegal monopolization of data flows, so as to fall within this special case of market power. Outside the United States, cases like the U.K. Competition and Markets Authority’s Google Privacy Sandbox commitments and the European Union’s proposed commitments with Amazon seek to allow others to continue to handle their data and to prevent exclusivity from arising from platform dynamics, which could happen if a large platform prevents others from deciding how to account for data they are collecting. It will be recalled that even Robert Bork thought that there was risk of market power harms from the large Microsoft Windows platform a generation ago.[2] Where market power risks are proven, there is a strong case that data exclusivity raises concerns because of an artificial barrier to entry. It would only be if the benefits of centralized data control were to outweigh the deadweight loss from data restrictions that this would be untrue (though query how well the legal processes verify this).
Yet the latest proposals go well beyond this. A broad interoperability right amounts to “open season” for spending others’ data. This makes perfect sense in the European Union, where there is no large domestic technology platform, meaning that the data is essentially owned by foreign entities (mostly, the shareholders of successful U.S. and Chinese companies). It must be very tempting to run an industrial policy on the basis that “we’ll never be Google” and thus to embrace “sharing is caring” as to others’ data.
But this would transgress the warning from Friedman: would people optimize data collection if it is open to mandatory sharing even without proof of market power? It is deeply concerning that the EU’s DATA Act is accompanied by an infographic that suggests that coffee-machine data might be subject to mandatory sharing, to allow competition in services related to the data (e.g., sales of pods; spare-parts automation). There being no monopoly in coffee machines, this simply forces vertical disintegration of data collection and handling. Why put a data-collection system into a coffee maker at all, if it is to be a common resource? Friedman’s category (4) would apply: the data is taken and spent by another. There is no guarantee that there would be sensible decision making surrounding the resource.
It will be interesting to see how common-law jurisdictions approach this issue. At the risk of stating the obvious, the polity in continental Europe differs from that in the English-speaking democracies when it comes to whether the collective, or the individual, should be in the driving seat. A close read of the UK CMA’s Google commitments is interesting, in that paragraph 30 requires no self-preferencing in data collection and requires future data-handling systems to be designed with impacts on competition in mind. No doubt the CMA is seeking to prevent data-handling exclusivity on the basis that this prevents companies from using their data collection to compete. This is far from the EU DATA Act’s position in that it is certainly not a right to handle Google’s data: it is simply a right to continue to process one’s own data.
U.S. proposals are at an earlier stage. It would seem important, as a matter of principle, not to make arbitrary decisions about vertical integration in data systems, and to identify specific market-power concerns instead, in line with common-law approaches to antitrust.
It might be very attractive to the EU to spend others’ data on their behalf, but that does not make it right. Those working on the U.S. proposals would do well to ensure that there is a meaningful market-power gate to avoid unintended consequences.
Disclaimer: The author was engaged for expert advice relating to the UK CMA’s Privacy Sandbox case on behalf of the complainant Marketers for an Open Web.
[1] Milton Friedman, Free to Choose (1980), pp. 115-119.
[2] Comments at the Yale Law School conference, Robert H. Bork’s influence on Antitrust Law, Sep. 27-28, 2013.
This post is the first in a three-part series. The second installment can be found here and the third can be found here.
The interplay among political philosophy, competition, and competition law remains, with some notable exceptions, understudied in the literature. Indeed, while examinations of the intersection between economics and competition law have taught us much, relatively little has been said about the value frameworks within which different visions of competition and competition law operate.
As Ronald Coase reminds us, questions of economics and political philosophy are interrelated, so that “problems of welfare economics must ultimately dissolve into a study of aesthetics and morals.” When we talk about economics, we talk about political philosophy, and vice versa. Every political philosophy reproduces economic prescriptions that reflect its core tenets. And every economic arrangement, in turn, evokes the normative values that undergird it. This is as true for socialism and fascism as it is for liberalism and neoliberalism.
Many economists have understood this. Milton Friedman, for instance, who spent most of his career studying social welfare, not ethics, admitted in Free to Choose that he was ultimately concerned with the preservation of a value: the liberty of the individual. Similarly, the avowed purpose of Friedrich Hayek’s The Constitution of Liberty was to maximize the state of human freedom, with coercion—i.e., the opposite of freedom—described as evil. James Buchanan fought to preserve political philosophy within the economic discipline, particularly worrying that:
Political economy was becoming unmoored from the types of philosophic and institutional analysis which were previously central to the field. In its flight from reality, Buchanan feared economics was in danger of abandoning social-philosophic issues for exclusively technical questions.
— John Kroencke, “Three Essays in the History of Economics”
Against this background, I propose to look at competition and competition law from a perspective that explicitly recognizes this connection. The goal is not to substitute, but rather to complement, our comparatively broad understanding of competition economics with a better grasp of the deeper normative implications of regulating competition in a certain way. If we agree with Robert Bork that antitrust is a subcategory of ideology that reflects and reacts upon deeper tensions in our society, the exercise might also be relevant beyond the relatively narrow confines of antitrust scholarship (which, on the other hand, seem to be getting wider and wider).
The Classical Liberal Revolution and the Unshackling of Competition
Mercantilism
When Adam Smith’s The Wealth of Nations was published in 1776, heavy economic regulation of the market through laws, by-laws, tariffs, and special privileges was the norm. Restrictions on imports were seen as protecting national wealth by preventing money from flowing out of the country—a policy premised on the conflation of money with wealth. A morass of legally backed and enforceable monopoly rights, granted either by royal decree or government-sanctioned by-laws, marred competition. Guilds reigned over tradesmen by restricting entry into the professions and segregating markets along narrow geographic lines. At every turn, economic activity was shot through with rules, restrictions, and regulations.
The Revolution in Political Economy
Classical liberals like Smith departed from the then-dominant mercantilist paradigm by arguing that nations prospered through trade and competition, not protectionism and monopoly privileges. Smith demonstrated that both the seller and the buyer benefit from trade, and theorized the market as an automatic mechanism that allocates resources efficiently through the spontaneous, self-interested interaction of individuals.
Undergirding this position was the notion of the natural order, which Smith carried over from his own Theory of Moral Sentiments and which elaborated on arguments previously espoused by the French physiocrats (a neologism meaning “the rule of nature”), such as Anne Robert Jacques Turgot, François Quesnay, and Jacques Claude Marie Vincent de Gournay. The basic premise was that there existed a harmonious order of things, established and maintained through a subconscious balancing of the egoism of the individual with the greatest welfare for all.
The implications of this modest insight, which clashed directly with established mercantilist orthodoxy, were tremendous. If human freedom maximized social welfare, the justification for detailed government intervention in the economy was untenable. The principles of laissez-faire (a term probably coined by Gournay, who had been Turgot’s mentor) instead prescribed that the government should adopt a “night watchman” role, tending to modest tasks such as internal and external defense, the mediation of disputes, and certain public works that were not deemed profitable for the individual.
Freeing Competition from the Mercantilist Yoke
Smith’s general attitude also carried over to competition. Following the principles described above, classical liberals believed that price and product adjustments following market interactions among tradesmen (i.e., competition) would automatically maximize social utility. As Smith argued:
In general, if any branch of trade, or any division of labor, be advantageous to the public, the freer and more general the competition, it will always be the more so.
This did not mean that competition occurred in a legal void. Rather, Smith’s point was that there was no need to construct a comprehensive system of competition regulation, as markets would oversee themselves so long as a basic legal and institutional framework was in place and government refrained from actively abetting monopolies. Under this view, the only necessary “competition law” would be those individual laws that made competition possible, such as private property rights, contracts, unfair competition laws, and the laws against government and guild restrictions.
Liberal Political Philosophy: Utilitarian and Deontological Perspectives on Liberty and Individuality
Of course, this sort of volte-face in political economy needed to be buttressed by a robust philosophical conception of the individual and the social order. Such ontological and moral theories were articulated in, among other works, the Theory of Moral Sentiments and John Stuart Mill’s On Liberty. At the heart of the liberal position was the idea that undue restrictions on human freedom and individuality were not only intrinsically despotic, but also socially wasteful, as they precluded men from enjoying the fruits of the exercise of such freedoms. For instance, infringing the freedom to trade and to compete would rob the public of cheaper goods, while restrictions on freedom of expression would arrest the development of thoughts and ideas through open debate.
It is not clear whether the material or the ethical argument for freedom came first: that is, whether classical liberalism constituted an ex-post rationalization of a moral preference for individual liberty, or precisely the reverse. The question may be immaterial, as classical liberals generally believed that the deontological and the consequentialist cases for liberty—save in the most peripheral of cases (e.g., violence against others)—largely overlapped.
Conclusion
In sum, classical liberalism offered a holistic, integrated view of societies, markets, morals, and individuals that was revolutionary for the time. The notion of competition as a force to be unshackled—rather than actively constructed and chaperoned—flowed organically from that account and its underlying values and assumptions. These included such values as personal freedom and individualism, along with foundational metaphysical presuppositions, such as the existence of a harmonious natural order that seamlessly guided individual actions for the benefit of the whole.
Where such base values and presuppositions are eroded, however, the notion of a largely spontaneous, self-sustaining competitive process loses much of its rational, ethical, and moral legitimacy. Competition thus ceases to be tenable on its “own two feet” and must either be actively engineered and protected, or abandoned altogether as a viable organizing principle. In this sense, the crisis of liberalism the West experienced in the late 19th and early 20th centuries—which attacked the very foundations of classical liberal doctrine—can also be read as a crisis of competition.
In my next post, I’ll discuss the collectivist backlash against liberalism.
Questions concerning the role that economic theory should play in policy decisions are nothing new. Milton Friedman famously drew a distinction between “positive” and “normative” economics, notably arguing that theoretical models were valuable despite their unrealistic assumptions. Kenneth Arrow and Gerard Debreu’s highly theoretical work on General Equilibrium Theory is widely acknowledged as one of the most important achievements of modern economics.
But for all their intellectual value and academic merit, the use of models to inform policy decisions is not uncontroversial. There is indeed a long and unfortunate history of influential economic models turning out to be poor depictions (and predictors) of real-world outcomes.
This raises a key question: should policymakers use economic models to inform their decisions and, if so, how? This post uses the economics of externalities to illustrate both the virtues and pitfalls of economic modeling. Throughout economic history, externalities have routinely been cited to support claims of market failure and calls for government intervention. However, as explained below, these fears have frequently failed to withstand empirical scrutiny.
Today, similar models are touted to support government intervention in digital industries. Externalities are notably said to prevent consumers from switching between platforms, allegedly leading to unassailable barriers to entry and deficient venture-capital investment. Unfortunately, as explained below, the models that underpin these fears are highly abstracted and far removed from underlying market realities.
Ultimately, this post argues that, while models provide a powerful way of thinking about the world, naïvely transposing them to real-world settings is misguided. This is not to say that models are useless—quite the contrary. Indeed, “falsified” models can shed powerful light on economic behavior that would otherwise prove hard to understand.
Bees
Fears surrounding economic externalities are as old as modern economics. For example, in the 1950s, economists routinely cited bee pollination as a source of externalities and, ultimately, market failure.
The basic argument was straightforward: Bees and orchards provide each other with positive externalities. Bees cross-pollinate flowers and orchards contain vast amounts of nectar upon which bees feed, thus improving honey yields. Accordingly, several famous economists argued that there was a market failure; bees fly where they please and farmers cannot prevent bees from feeding on their blossoming flowers—allegedly causing underinvestment in both. This led James Meade to conclude:
[T]he apple-farmer provides to the beekeeper some of his factors free of charge. The apple-farmer is paid less than the value of his marginal social net product, and the beekeeper receives more than the value of his marginal social net product.
Francis Bator’s influential taxonomy of market failure reached the same verdict, classing the bees as an “ownership externality”:

If, then, apple producers are unable to protect their equity in apple-nectar and markets do not impute to apple blossoms their correct shadow value, profit-maximizing decisions will fail correctly to allocate resources at the margin. There will be failure “by enforcement.” This is what I would call an ownership externality. It is essentially Meade’s “unpaid factor” case.
It took more than 20 years and painstaking research by Steven Cheung to conclusively debunk these assertions. So how did economic agents overcome this “insurmountable” market failure?
The answer, it turns out, was extremely simple. While bees do fly where they please, the relative placement of beehives and orchards has a tremendous impact on both fruit and honey yields. This is partly because bees have a very limited mean foraging range (roughly 2-3km). This left economic agents with ample scope to prevent free-riding.
Using these natural sources of excludability, they built a web of complex agreements that internalize the symbiotic virtues of beehives and fruit orchards. To cite Steven Cheung’s research:
Pollination contracts usually include stipulations regarding the number and strength of the colonies, the rental fee per hive, the time of delivery and removal of hives, the protection of bees from pesticide sprays, and the strategic placing of hives. Apiary lease contracts differ from pollination contracts in two essential aspects. One is, predictably, that the amount of apiary rent seldom depends on the number of colonies, since the farmer is interested only in obtaining the rent per apiary offered by the highest bidder. Second, the amount of apiary rent is not necessarily fixed. Paid mostly in honey, it may vary according to either the current honey yield or the honey yield of the preceding year.
But what of neighboring orchards? Wouldn’t these entail a more complex externality (i.e., could one orchard free-ride on agreements concluded between other orchards and neighboring apiaries)? Apparently not:
Acknowledging the complication, beekeepers and farmers are quick to point out that a social rule, or custom of the orchards, takes the place of explicit contracting: during the pollination period the owner of an orchard either keeps bees himself or hires as many hives per area as are employed in neighboring orchards of the same type. One failing to comply would be rated as a “bad neighbor,” it is said, and could expect a number of inconveniences imposed on him by other orchard owners. This customary matching of hive densities involves the exchange of gifts of the same kind, which apparently entails lower transaction costs than would be incurred under explicit contracting, where farmers would have to negotiate and make money payments to one another for the bee spillover.
In short, the bee/orchard externality model not only failed; it also failed to account for extremely obvious counter-evidence. Even a rapid flip through the Yellow Pages (or, today, a search on Google) would have revealed a vibrant market for bee pollination. The bee externalities, at least as presented in economic textbooks, were merely an economic “fable.” Unfortunately, they would not be the last.
The Lighthouse
Lighthouses provide another cautionary tale. Indeed, Henry Sidgwick, A.C. Pigou, John Stuart Mill, and Paul Samuelson all cited the externalities involved in the provision of lighthouse services as a source of market failure.
Here, too, the problem was allegedly straightforward. A lighthouse cannot prevent ships from free-riding on its services when they sail by it (i.e., it is mostly impossible to determine whether a ship has paid fees and to turn off the lighthouse if that is not the case). Hence there can be no efficient market for light dues (lighthouses were seen as a “public good”). As Paul Samuelson famously put it:
Take our earlier case of a lighthouse to warn against rocks. Its beam helps everyone in sight. A businessman could not build it for a profit, since he cannot claim a price from each user. This certainly is the kind of activity that governments would naturally undertake.
He added that:
[E]ven if the operators were able—say, by radar reconnaissance—to claim a toll from every nearby user, that fact would not necessarily make it socially optimal for this service to be provided like a private good at a market-determined individual price. Why not? Because it costs society zero extra cost to let one extra ship use the service; hence any ships discouraged from those waters by the requirement to pay a positive price will represent a social economic loss—even if the price charged to all is no more than enough to pay the long-run expenses of the lighthouse.
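Samuelson’s second claim can be restated as a one-line welfare calculation. The following is a stylized sketch under notation of my own choosing (a continuum of ships with valuations v for passing the lighthouse, distributed with density f), not a formula drawn from Samuelson:

```latex
% With marginal cost c = 0, the welfare-maximizing toll is p = 0.
% Any positive toll p excludes ships with valuations 0 < v < p,
% and their foregone surplus is a deadweight loss:
\[
  \mathrm{DWL}(p) \;=\; \int_{0}^{p} v \, f(v)\, dv \;>\; 0
  \qquad \text{for any } p > 0 .
\]
```

Coase’s empirical reply, quoted further below, is that this loss was trivial in practice: light dues were a tiny fraction of a ship’s voyage costs, so almost no valuations fell in the interval (0, p).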
More than a century after it was first mentioned in economics textbooks, Ronald Coase finally laid the lighthouse myth to rest—rebutting Samuelson’s second claim in the process.
What piece of evidence had eluded economists for all those years? As Coase observed, contemporary economists had somehow overlooked the fact that large parts of the British lighthouse system were privately operated, and had been for centuries:
[T]he right to operate a lighthouse and to levy tolls was granted to individuals by Acts of Parliament. The tolls were collected at the ports by agents (who might act for several lighthouses), who might be private individuals but were commonly customs officials. The toll varied with the lighthouse and ships paid a toll, varying with the size of the vessel, for each lighthouse passed. It was normally a rate per ton (say 1/4d or 1/2d) for each voyage. Later, books were published setting out the lighthouses passed on different voyages and the charges that would be made.
In other words, lighthouses used a simple physical feature to create “excludability” and prevent free-riding. The main reason ships require lighthouses is to avoid hitting rocks when they make their way to a port. By tying port fees and light dues, lighthouse owners—aided by mild government-enforced property rights—could easily earn a return on their investments, thus disproving the lighthouse free-riding myth.
Ultimately, this meant that a large share of the British lighthouse system was privately operated throughout the 19th century, and this share would presumably have been more pronounced if government-run “Trinity House” lighthouses had not crowded out private investment:
The position in 1820 was that there were 24 lighthouses operated by Trinity House and 22 by private individuals or organizations. But many of the Trinity House lighthouses had not been built originally by them but had been acquired by purchase or as the result of the expiration of a lease.
Of course, this system was not perfect. Some ships (notably foreign ones that did not dock in the United Kingdom) might free-ride on this arrangement. It also entailed some level of market power. The ability to charge light dues meant that prices were higher than the “socially optimal” baseline of zero (the marginal cost of providing light is close to zero). Though it is worth noting that tying port fees and light dues might also have decreased double marginalization, to the benefit of sailors.
Samuelson was particularly wary of the market power that went hand in hand with the private provision of public goods, including lighthouses:
Being able to limit a public good’s consumption does not make it a true-blue private good. For what, after all, are the true marginal costs of having one extra family tune in on the program? They are literally zero. Why then prevent any family which would receive positive pleasure from tuning in on the program from doing so?
However, as Coase explained, light fees represented only a tiny fraction of a ship’s costs. In practice, they were thus unlikely to affect market output meaningfully:
[W]hat is the gain which Samuelson sees as coming from this change in the way in which the lighthouse service is financed? It is that some ships which are now discouraged from making a voyage to Britain because of the light dues would in future do so. As it happens, the form of the toll and the exemptions mean that for most ships the number of voyages will not be affected by the fact that light dues are paid. There may be some ships somewhere which are laid up or broken up because of the light dues, but the number cannot be great, if indeed there are any ships in this category.
Samuelson’s critique also falls prey to the Nirvana Fallacy pointed out by Harold Demsetz: markets might not be perfect, but neither is government intervention. Market power and imperfect appropriability are the two (paradoxical) pitfalls of the former; “white elephants,” underinvestment, and a lack of competition (and the information it generates) tend to stem from the latter.
Which of these solutions is superior, in each case, is an empirical question that early economists had simply failed to consider—assuming instead that market failure was systematic in markets that present prima facie externalities. In other words, models were taken as gospel without any circumspection about their relevance to real-world settings.
The Tragedy of the Commons
Externalities were also said to undermine the efficient use of “common pool resources,” such as grazing lands, common irrigation systems, and fisheries—resources where one agent’s use diminishes that of others, and where exclusion is either difficult or impossible.
The most famous formulation of this problem is Garrett Hardin’s highly influential (over 47,000 cites) “tragedy of the commons.” Hardin cited the example of multiple herdsmen occupying the same grazing ground:
The rational herdsman concludes that the only sensible course for him to pursue is to add another animal to his herd. And another; and another … But this is the conclusion reached by each and every rational herdsman sharing a commons. Therein is the tragedy. Each man is locked into a system that compels him to increase his herd without limit—in a world that is limited. Ruin is the destination toward which all men rush, each pursuing his own best interest in a society that believes in the freedom of the commons.
In more technical terms, each economic agent purportedly exerts an unpriced negative externality on the others, thus leading to the premature depletion of common pool resources. Hardin extended this reasoning to other problems, such as pollution and allegations of global overpopulation.
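The arithmetic behind Hardin’s claim is simple enough to spell out. The sketch below uses illustrative parameter values of my own choosing, not figures from Hardin’s essay:

```python
# Hardin's logic in miniature: each herdsman pockets the full value of an
# extra animal but bears only 1/N of the grazing damage that the animal
# causes. All parameter values are purely illustrative.

N = 10            # herdsmen sharing the pasture
value = 1.0       # private value of one extra animal
damage = 3.0      # total grazing damage the extra animal inflicts

private_net = value - damage / N   # what one herdsman weighs
social_net = value - damage        # what the group as a whole weighs

print(f"private net gain: {private_net:+.2f}")  # +0.70 -> add the animal
print(f"social net gain:  {social_net:+.2f}")   # -2.00 -> ruin if all do likewise
```

Because the damage term is divided by N while the value term is not, each herdsman’s private calculation stays positive even as the collective one turns sharply negative. As discussed below, Ostrom’s insight was that real communities often re-couple the two terms through rules, norms, and monitoring.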
Although Hardin hardly documented any real-world occurrences of this so-called tragedy, his policy prescriptions were unequivocal:
The most important aspect of necessity that we must now recognize, is the necessity of abandoning the commons in breeding. No technical solution can rescue us from the misery of overpopulation. Freedom to breed will bring ruin to all.
As with many other theoretical externalities, empirical scrutiny revealed that these fears were greatly overblown. In her Nobel-winning work, Elinor Ostrom showed that economic agents often found ways to markedly mitigate these potential externalities. For example, mountain villages often implement rules and norms that limit the use of grazing grounds and wooded areas. Likewise, landowners across the world often set up “irrigation communities” that prevent agents from overusing water.
Along similar lines, Julian Morris and I conjecture that informal arrangements and reputational effects might mitigate opportunistic behavior in the standard essential patent industry.
These bottom-up solutions are certainly not perfect. Many commons institutions fail—for example, Elinor Ostrom documented several problematic fisheries, groundwater basins, and forests—although it is worth noting that government intervention was sometimes behind these failures. To cite but one example:
Several scholars have documented what occurred when the Government of Nepal passed the “Private Forest Nationalization Act” […]. Whereas the law was officially proclaimed to “protect, manage and conserve the forest for the benefit of the entire country”, it actually disrupted previously established communal control over the local forests. Messerschmidt (1986, p.458) reports what happened immediately after the law came into effect:
Nepalese villagers began freeriding — systematically overexploiting their forest resources on a large scale.
In any case, the question is not so much whether private institutions fail, but whether they do so more often than government intervention, be it regulation or property rights. In short, the “tragedy of the commons” is ultimately an empirical question: which works better in each case, government intervention, propertization, or emergent rules and norms?
More broadly, the key lesson is that it is wrong to blindly apply models while ignoring real-world outcomes. As Elinor Ostrom herself put it:
The intellectual trap in relying entirely on models to provide the foundation for policy analysis is that scholars then presume that they are omniscient observers able to comprehend the essentials of how complex, dynamic systems work by creating stylized descriptions of some aspects of those systems.
Dvorak Keyboards
In 1985, Paul David published an influential paper arguing that market failures had undermined competition between the QWERTY and Dvorak keyboard layouts. This version of history then became a dominant narrative in the field of network economics, informing works by Joseph Farrell & Garth Saloner, as well as Jean Tirole.
The basic claim was that QWERTY users’ reluctance to switch toward the putatively superior Dvorak layout exerted a negative externality on the rest of the ecosystem (and a positive externality on other QWERTY users), thus preventing the adoption of a more efficient standard. As Paul David put it:
Although the initial lead acquired by QWERTY through its association with the Remington was quantitatively very slender, when magnified by expectations it may well have been quite sufficient to guarantee that the industry eventually would lock in to a de facto QWERTY standard. […]
Competition in the absence of perfect futures markets drove the industry prematurely into standardization on the wrong system — where decentralized decision making subsequently has sufficed to hold it.
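The lock-in mechanism David describes is easy to reproduce in a toy simulation. The sketch below is written in the spirit of W. Brian Arthur’s increasing-returns adoption models, with parameters chosen purely for illustration; it is not David’s own model:

```python
import random

# Two keyboard standards compete for adopters. Dvorak is assumed to be
# intrinsically better, but each adopter also values the installed base
# (the network term). QWERTY starts with the "quantitatively very
# slender" lead that David describes.
random.seed(3)
quality = {"QWERTY": 1.0, "Dvorak": 1.2}   # assumed intrinsic qualities
network_weight = 4.0                        # weight on installed-base share
installed = {"QWERTY": 5, "Dvorak": 1}     # the slender initial lead

for _ in range(10_000):
    total = sum(installed.values())
    def payoff(standard):
        share = installed[standard] / total
        return quality[standard] + network_weight * share + random.gauss(0, 0.1)
    installed[max(installed, key=payoff)] += 1

print(installed)  # QWERTY ends up with essentially the entire market
```

In the model, the early lead is self-reinforcing: once QWERTY’s installed-base advantage outweighs Dvorak’s quality edge, no individual adopter gains from switching. The catch, as the next paragraphs explain, is that the real market did not satisfy the model’s premises, above all the assumed quality edge.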
Unfortunately, many of the above papers paid little to no attention to actual market conditions in the typewriter and keyboard layout industries. Years later, Stan Liebowitz and Stephen Margolis undertook a detailed analysis of the keyboard layout market. They almost entirely rejected any notion that QWERTY prevailed despite it being the inferior standard:
Yet there are many aspects of the QWERTY-versus-Dvorak fable that do not survive scrutiny. First, the claim that Dvorak is a better keyboard is supported only by evidence that is both scant and suspect. Second, studies in the ergonomics literature find no significant advantage for Dvorak that can be deemed scientifically reliable. Third, the competition among producers of typewriters, out of which the standard emerged, was far more vigorous than is commonly reported. Fourth, there were far more typing contests than just the single Cincinnati contest. These contests provided ample opportunity to demonstrate the superiority of alternative keyboard arrangements. That QWERTY survived significant challenges early in the history of typewriting demonstrates that it is at least among the reasonably fit, even if not the fittest that can be imagined.
In short, there was little to no evidence supporting the view that QWERTY inefficiently prevailed because of network effects. The falsification of this narrative also weakens broader claims that network effects systematically lead to either excess momentum or excess inertia in standardization. Indeed, it is tempting to characterize all network industries with heavily skewed market shares as resulting from market failure. Yet the QWERTY/Dvorak story suggests that such a conclusion would be premature.
Killzones, Zoom, and TikTok
If you are still reading at this point, you might think that contemporary scholars would know better than to base calls for policy intervention on theoretical externalities. Alas, nothing could be further from the truth.
For instance, a recent paper by Sai Kamepalli, Raghuram Rajan and Luigi Zingales conjectures that the interplay between mergers and network externalities discourages the adoption of superior independent platforms:
If techies expect two platforms to merge, they will be reluctant to pay the switching costs and adopt the new platform early on, unless the new platform significantly outperforms the incumbent one. After all, they know that if the entering platform’s technology is a net improvement over the existing technology, it will be adopted by the incumbent after merger, with new features melded with old features so that the techies’ adjustment costs are minimized. Thus, the prospect of a merger will dissuade many techies from trying the new technology.
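The logic of this assumption can be captured in a single inequality. The notation below is my own stylization, not the authors’ formal model: let Δ be the entrant’s quality advantage, s the user’s switching cost, and m the user’s perceived probability that the entrant will be acquired and its features folded into the incumbent:

```latex
% Switch now (gain the advantage, pay the switching cost), or wait
% and receive the advantage with probability m via the merger:
\[
  \underbrace{\Delta - s}_{\text{switch now}}
  \;>\;
  \underbrace{m\,\Delta}_{\text{wait for merger}}
  \quad\Longleftrightarrow\quad
  s \;<\; (1 - m)\,\Delta .
\]
```

As m approaches 1, only entrants whose quality advantage vastly exceeds switching costs attract early adopters, which is the paper’s “kill zone” intuition. Whether users actually reason this way is precisely the empirical question that, as noted next, the paper leaves open.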
Although this key behavioral assumption drives the results of the theoretical model, the paper presents no evidence that the assumption holds in real-world settings. Admittedly, the paper does present evidence of reduced venture-capital investments after mergers involving large tech firms. But even taken at face value, this data simply does not support the authors’ behavioral assumption.
And this is no isolated example. Over the past couple of years, several scholars have called for more muscular antitrust intervention in networked industries. A common theme is that network externalities, switching costs, and data-related increasing returns to scale lead to inefficient consumer lock-in, thus raising barriers to entry for potential rivals (here, here, here).
But there are also countless counterexamples, where firms have easily overcome potential barriers to entry and network externalities, ultimately disrupting incumbents.
Zoom is one of the most salient instances. As I have written previously:
To get to where it is today, Zoom had to compete against long-established firms with vast client bases and far deeper pockets. These include the likes of Microsoft, Cisco, and Google. Further complicating matters, the video communications market exhibits some prima facie traits that are typically associated with the existence of network effects.
Along similar lines, Geoffrey Manne and Alec Stapp have put forward a multitude of other examples. These include: The demise of Yahoo; the disruption of early instant-messaging applications and websites; MySpace’s rapid decline; etc. In all these cases, outcomes do not match the predictions of theoretical models.
More recently, TikTok’s rapid rise offers perhaps the greatest example of a potentially superior social-networking platform taking significant market share away from incumbents. According to the Financial Times, TikTok’s video-sharing capabilities and its powerful algorithm are the most likely explanations for its success.
While these developments certainly do not disprove network effects theory, they eviscerate the common belief in antitrust circles that superior rivals are unable to overthrow incumbents in digital markets. Of course, this will not always be the case. As in the previous examples, the question is ultimately one of comparing institutions—i.e., do markets lead to more or fewer error costs than government intervention? Yet this question is systematically omitted from most policy discussions.
In Conclusion
My argument is not that models are without value. To the contrary, framing problems in economic terms—and simplifying them in ways that make them cognizable—enables scholars and policymakers to better understand where market failures might arise, and how these problems can be anticipated and solved by private actors. In other words, models alone cannot tell us that markets will fail, but they can direct inquiries and help us to understand why firms behave the way they do, and why markets (including digital ones) are organized in a given way.
In that respect, both the theoretical and empirical research cited throughout this post offer valuable insights for today’s policymakers.
For a start, as Ronald Coase argued in what is perhaps his most famous work, externalities (and market failure more generally) are a function of transaction costs. When these are low (relative to the value of a good), market failures are unlikely. This is perhaps clearest in the “Fable of the Bees” example. Given bees’ short foraging range, there were ultimately few real-world obstacles to writing contracts that internalized the mutual benefits of bees and orchards.
Perhaps more importantly, economic research sheds light on behavior that might otherwise be seen as anticompetitive. The rules and norms that bind farming/beekeeping communities, as well as users of common pool resources, could easily be analyzed as a cartel by naïve antitrust authorities. Yet externality theory suggests they play a key role in preventing market failure.
Along similar lines, mergers and acquisitions (as well as vertical integration, more generally) can reduce opportunism and other externalities that might otherwise undermine collaboration between firms (here, here and here). And much of the same is true for certain types of unilateral behavior. Tying video games to consoles (and pricing the console below cost) can help entrants overcome network externalities that might otherwise shield incumbents. Likewise, Google tying its proprietary apps to the open source Android operating system arguably enabled it to earn a return on its investments, thus overcoming the externality problem that plagues open source software.
All of this raises a tantalizing prospect that deserves far more attention than it is currently given in policy circles: authorities around the world are seeking to regulate the tech space. Draft legislation has notably been tabled in the United States, European Union and the United Kingdom. These draft bills would all make it harder for large tech firms to implement various economic hierarchies, including mergers and certain contractual arrangements.
This is highly paradoxical. If digital markets are indeed plagued by network externalities and high transaction costs, as critics allege, then preventing firms from adopting complex hierarchies—which have traditionally been seen as a way to solve externalities—is just as likely to exacerbate problems. In other words, like the economists of old cited above, today’s policymakers appear to be focusing too heavily on simple models that predict market failure, and far too little on the mechanisms that firms have put in place to thrive within this complex environment.
The bigger picture is that far more circumspection is required when using theoretical models in real-world policy settings. Indeed, as Harold Demsetz famously put it, the purpose of normative economics is not so much to identify market failures, but to help policymakers determine which of several alternative institutions will deliver the best outcomes for consumers:
This nirvana approach differs considerably from a comparative institution approach in which the relevant choice is between alternative real institutional arrangements. In practice, those who adopt the nirvana viewpoint seek to discover discrepancies between the ideal and the real and if discrepancies are found, they deduce that the real is inefficient. Users of the comparative institution approach attempt to assess which alternative real institutional arrangement seems best able to cope with the economic problem […].
[TOTM: The following is part of a digital symposium by TOTM guests and authors on the law, economics, and policy of the antitrust lawsuits against Google. The entire series of posts is available here.]
It is my endeavor to scrutinize the questionable assessment of default settings articulated in the U.S. Justice Department’s lawsuit against Google. Default, I will argue, is no antitrust fault. The default settings at issue in the Google case differ drastically from those at issue in the Microsoft case. In Part I, I argue the comparison is odious. Furthermore, in Part II, it will be argued that the implicit prohibition of default settings echoes, as applied to listings, the explicit prohibition of self-preferencing in search results. Both aspects – default’s implicit prohibition and self-preferencing’s explicit prohibition – are the two legs of a novel and integrated theory of sanctioning corporate favoritism. The coming to the fore of such a theory goes against the very grain of capitalism. In Part III, I note that the attempt to instill some corporate selflessness is at odds with competition on the merits and the spirit of fundamental economic freedoms.
When Default is No-Fault
The recent complaint filed by the DOJ and 11 state attorneys general claims that Google has abused its dominant position in the search-engine market in several ways, notably by making Google the default search engine both in the Google Chrome web browser for Android OS and in Apple’s Safari web browser for iOS. Undoubtedly, default status confers a noticeable advantage in attracting users – it is sought and enforced on purpose. Nevertheless, a default setting confers no unassailable position: the advantage lasts only as long as the product remains competitive. Furthermore, the default setting can hardly be proven anticompetitive in the Google case. Indeed, the DOJ puts considerable effort into making the Google case resemble the 20-year-old Microsoft case. Former Federal Trade Commission Chairman William Kovacic commented: “I suppose the Justice Department is telling the court, ‘You do not have to be scared of this case. You’ve done it before […] This is Microsoft part 2.’”[1]
However, irrespective of the merits of the Microsoft case two decades ago, the Google default-setting case bears minimal resemblance to the Microsoft case over the default setting of Internet Explorer. First, as opposed to the Microsoft case, where default meant pre-installed software (i.e., Internet Explorer)[2], the Google case does not relate to pre-installation of the Google search engine (which is just a webpage) but to a simple setting. This technical difference is significant: although “sticky”[3], a default setting can be outwitted with just one click[4]. This is dissimilar to a default installation, which can only be circumvented by uninstalling the software[5] and then searching for and installing a new one[6]. Moreover, since there is no certainty that consumers will actually use the Google search engine, default settings come with advertising revenue-sharing agreements between Google and device manufacturers, mobile phone carriers, competing browsers, and Apple[7]. These mutually beneficial deals represent a significant cost with no technical exclusivity[8]. In other words, the antitrust treatment of a tie-in between software and hardware in the Microsoft case cannot be convincingly extrapolated to the default setting of the “webware”[9] at issue in the Google case.
Second, the Google case cannot legitimately extrapolate from the Microsoft case in another technical (and commercial) respect: the Microsoft case was a classic tie-in case, where the tied product (Internet Explorer) was tied into the main product (Windows). As in a traditional tie-in scenario, the tied product (Internet Explorer) was “consistently offered, promoted, and distributed […] as a stand-alone product separate from, and not as a component of, Windows […]”[10]. In contrast, Google has never sold Google Chrome or Android OS. It offered both Google Chrome and Android OS for free, conditional on the Google search engine being the default setting. The very fact that Google Chrome and Android OS have never been “stand-alone” products, to use the Microsoft case’s language, together with the absence of any software installation, dramatically differentiates the features of the Google case from those of the Microsoft case. The Google case is not a traditional tie-in case: it is a case against default settings where both products (the primary and the related product) are given away for free, are not saleable, and are neither tangible nor intangible goods, but simply digital services whose popularity owes to significant innovativeness and ease of use. The Microsoft “complaint challenge[d] only Microsoft’s concerted attempts to maintain its monopoly in operating systems and to achieve dominance in other markets, not by innovation and other competition on the merits, but by tie-ins.” Quite noticeably, the Google case does not mention tie-ins as regards Google Chrome or Android OS.
The complaint refers to tie-ins only as concerns Google’s apps being pre-installed on Android OS. Therefore, concerning Google’s dominance in the search-engine market, it cannot be said that the default setting of Google search in Android OS entails a tie-in. The Google search engine has no distribution channel (since it is only a website) other than downstream partnerships (i.e., vertical deals with Android device manufacturers). To sanction default settings agreed with downstream trading partners is tantamount to refusing legitimate means of securing distribution channels for proprietary, zero-priced services. To further this detrimental logic, it would mean that Apple may no longer offer its own apps on its own iPhones or, in offline markets, that a retailer may no longer offer its own (default) bags at the till, since doing so excludes rivals’ bags. Products and services stripped of any adjacent products and markets (i.e., an iPhone or Android OS with no apps, or a shopkeeper with no bundled services) would dramatically increase consumers’ search costs, destroy innovators’ essential distribution channels for innovative business models, and provide few departures from the status quo so long as consumers continue to value default products[11].
Default should not be an antitrust fault: the Google case would make default settings a new line of antitrust injury absent any tie-in. In conclusion, as free webware, Google search’s default setting cannot be compared to the default installation in the Microsoft case, since minimal consumer stickiness entails (almost) no switching costs. As free software, Google’s default apps cannot be compared to the Microsoft case either, since pre-installation is the sine qua non condition of the highly valued services (Android OS) voluntarily chosen by device manufacturers. Default settings on downstream products can only reasonably be considered an antitrust injury when the dominant company is erroneously treated as a de facto essential facility – something evidenced by the similar prohibition of self-preferencing.
When Self-Preference is No Defense
Self-preferencing is to listings what the default setting is to operating systems. Both are ways to market one’s own products (i.e., alternatives to marketing directly to end-consumers). While default settings may come with both free products and financial payments (Android OS and advertising revenue sharing), self-preferencing may come with foregone advertising revenues in order to promote one’s own products. The two can be apprehended as two sides of the same coin:[12] generating distribution channels for the ad-funded main product – Google’s search engine. Both are complex advertising channels, since both venues favor one’s own products in the contest for consumers’ attention. Absent both channels, the payments made for default agreements and the advertising revenues foregone in self-preferencing one’s own products would morph into marketing and advertising expenses for the Google search engine, directed toward end-consumers.
The DOJ complaint charges that “Google’s monopoly in general search services also has given the company extraordinary power as the gateway to the internet, which [it] uses to promote its own web content and increase its profits.” This blame was at the core of the European Commission’s Google Shopping decision in 2017[13], which essentially held Google accountable for having, because of its ad-funded business model, promoted its own advertising products and demoted organic links in search results. On this account, Google’s search results are no longer relevant, but are instead listed with the sole motivation of advertising revenue.
But this argument is circular: should these search results become irrelevant, Google’s core business would become less attractive, thereby generating less advertising revenue. This self-inflicted inefficiency would deprive Google of valuable advertising streams and incentivize end-consumers to switch to search-engine rivals such as Bing, DuckDuckGo, Amazon (product search), etc. An ad-funded company such as Google therefore needs to strike a reasonable balance between advertising objectives and the efficiency of its core activities (here, zero-priced organic search services). To downplay (ad-funded) self-preferencing in order to foster (zero-priced) organic search quality would disregard the two-sidedness of the Google platform: it would harm advertisers and the viability of the ad-funded business model without providing the consumer and innovation protections it aims to provide. The problematic and undesirable concept of “search neutrality” would mean algorithmic micro-management for the sake of an “objective” listing deemed acceptable only in the eyes of the regulator.
Furthermore, self-preferencing entails a sort of positive discrimination toward one’s own products[14]. While discrimination has traditionally been an antitrust line of injury, self-preferencing is an “epithet”[15] that sits outside antitrust’s remit, for good reasons[16]. Indeed, should self-interested (i.e., rationally minded) companies and individuals be legally compelled to self-demote their own products and services? And if only big (how big?) companies are legally compelled to self-demote their products and services, to what extent will exempted companies engaged in self-preferencing become liable to do so as well?
Indeed, many uncertainties, legal and economic, may spring from the emerging prohibition of self-preferencing. More fundamentally, antitrust liability may clash with basic corporate-governance principles, under which self-interestedness both allows self-preferencing and commands such self-promotion. The limits of antitrust have been reached when two sets of legal regimes, both applicable to companies, prescribe contradictory commercial conduct. To what extent may Amazon no longer promote its own series on Amazon Video in the manner Netflix does? To what extent may Microsoft no longer promote the Bing search engine in order to compete effectively with the Google search engine? To what extent may Uber no longer promote UberEATS in order to compete effectively with delivery services? Not only is the business of business doing business[17]; it is also a duty for which shareholders may hold managers to account.
The self is moral; there is a corporate morality to business self-interest. In other words, corporate selflessness runs counter to business ethics, since corporate self-interest yields the self’s rivalrous positioning within a competitive order. Absent corporate self-interest, self-sacrifice may generate value destruction for the sake of unjustified and ungrounded claims. The emerging prohibition of self-preferencing, like the parallel prohibition of setting one’s own products as defaults within one’s other proprietary products, materializes the undoing of the corporate self. Both directions coalesce to instill a legally embedded duty of self-sacrifice for the sake of competitors’ welfare, in place of the traditional consumer welfare and of the dynamics of innovation, which are never unleashed absent appropriability. In conclusion, to expect firms, however big or small, to act irrespective of their identities (i.e., with corporate selflessness) would constitute an antitrust error and would be at odds with capitalism.
Toward an Integrated Theory of Disintegrating Favoritism
The Google lawsuit primarily blames Google for default settings secured via several deals. The lawsuit also casts self-preferencing as anticompetitive conduct under antitrust rules. These two charges are novel and dubious in their remits. They nevertheless represent a fundamental catalyst for the development of a new and problematic unified antitrust theory prohibiting favoritism: companies may no longer favor their own products and services, whether vertically or horizontally, irrespective of consumer benefits, of superior efficiency arguments, and of the enhancement of dynamic capabilities. Indeed, via an unreasonably expanded vision of leveraging, antitrust enforcement is furtively banning companies from favoring their own products and services: it substitutes greater consumer choice for consumer welfare, the protection of rivals’ opportunities to innovate and compete for the essence of competition and innovation, and limits on the outreach and size of companies for an assessment of those companies’ capabilities and efficiencies. Leveraging becomes suspicious, and corporate self-favoritism stands accused. The Google lawsuit materializes this impractical trend, which further enshrines the precautionary approach to antitrust enforcement[18].
[1] Jessica Guynn, Google Justice Department antitrust lawsuit explained: this is what it means for you. USA Today, October 20, 2020.
[2] The software (Internet Explorer) was tied in the hardware (Windows PC).
[3] U.S. v Google LLC, Case A:20, October 20, 2020, 3 (referring to default settings as “especially sticky” with respect to consumers’ willingness to change).
[4] While the DOJ affirms that “being the preset default general search engine is particularly valuable because consumers rarely change the preset default”, it nevertheless provides no evidence of the breadth of such consumer stickiness. To be sure, a search engine’s default status does not necessarily lead to usage, as evidenced by the case of South Korea. In that country, despite Google’s preset default settings, the search engine Naver remains dominant in the national search market with over 70% market share. The rivalry exerted by Naver on Google demonstrates the limits of consumer stickiness to default settings. See Alesia Krush, Google vs. Naver: Why Can’t Google Dominate Search in Korea?, Link-Assistant.Com, available at: https://www.link-assistant.com/blog/google-vs-naver-why-cant-google-dominate-search-in-korea/. As the dominant search engine in Korea, Naver is itself subject to antitrust investigations into leveraging practices similar to Google’s in other countries; see Shin Ji-hye, FTC sets up special to probe Naver, Google, The Korea Herald, November 19, 2019, available at: http://www.koreaherald.com/view.php?ud=20191119000798; Kim Byung-wook, Complaint against Google to be filed with FTC, The Investor, December 14, 2020, available at: https://www.theinvestor.co.kr/view.php?ud=20201123000984 (reporting a complaint by Naver and other Korean IT companies against Google’s 30% commission policy on Google Play Store apps).
[5] For instance, the complaint at the time acknowledged that “Microsoft designed Windows 98 so that removal of Internet Explorer by OEMs or end users is operationally more difficult than it was in Windows 95”, in U.S. v Microsoft Corp., Civil Action No 98-1232, May 18, 1998, para. 20.
[6] The DOJ complaint itself quotes “one search competitor” who is reported to have noted consumer stickiness “despite the simplicity of changing a default setting to enable customer choice […]” (para. 47). The default search-engine setting is thus remarkably simple to bypass, yet consumers rarely do so, whether out of satisfaction with Google’s search engine or because of search and opportunity costs.
[8] Competing browsers can always welcome rival search engines, and competing search-engine apps can always be downloaded despite revenue-sharing agreements. See paras. 78-87 of the DOJ complaint.
[9] Google’s search engine is nothing but “webware”: a complex set of algorithms accessed online through a webpage, with no prior download. For a discussion of the definition of webware, see https://www.techopedia.com/definition/4933/webware.
[11] Such an outcome would frustrate traditional ways of offering computers and mobile devices, as acknowledged by the DOJ itself in the Google complaint: “new computers and new mobile devices generally come with a number of preinstalled apps and out-of-the-box setting. […] Each of these search access points can and almost always does have a preset default general search engine”, at para. 41. Moreover, preset default general search engines appear to be common commercial practice: as the DOJ complaint itself notes when discussing Google’s rivals (Microsoft’s Bing and Amazon’s Fire OS), “Amazon preinstalled its own proprietary apps and agreed to make Microsoft’s Bing the preset default general search engine”, at para. 130. The complaint fails to identify alternative search engines that are not preset defaults, thus implicitly recognizing how widespread the practice is.
[12] To use Vesterdorf’s language; see Bo Vesterdorf, Theories of Self-Preferencing and Duty to Deal – Two Sides of the Same Coin, Competition Law & Policy Debate 1(1) 4 (2015). See also Nicolas Petit, Theories of Self-Preferencing under Article 102 TFEU: A Reply to Bo Vesterdorf, 5-7 (2015).
[13] Case 39740 Google Search (Shopping). Here the foreclosure effects of self-preferencing are only speculated upon: “the Commission is not required to prove that the Conduct has the actual effect of decreasing traffic to competing comparison shopping services and increasing traffic to Google’s comparison-shopping service. Rather, it is sufficient for the Commission to demonstrate that the Conduct is capable of having, or likely to have, such effects.” (para. 601 of the Decision). See P. Ibáñez Colomo, Indispensability and Abuse of Dominance: From Commercial Solvents to Slovak Telekom and Google Shopping, 10 Journal of European Competition Law & Practice 532 (2019); Aurelien Portuese, When Demotion is Competition: Algorithmic Antitrust Illustrated, Concurrences, No. 2, May 2018, 25-37; Aurelien Portuese, Fine is Only One Click Away, Symposium on the Google Shopping Decision, Case Note, 3 Competition and Regulatory Law Review (2017).
[14] For a general discussion of the law and economics of self-preferencing, see Michael A. Salinger, Self-Preferencing, Global Antitrust Institute Report, 329-368 (2020).
[15] Pablo Ibáñez Colomo, Self-Preferencing: Yet Another Epithet in Need of Limiting Principles, 43 World Competition (2020) (concluding that self-preferencing is “misleading as a legal category”).
[16] See, for instance, Pedro Caro de Sousa, What Shall We Do About Self-Preferencing?, Competition Policy International, June 2020.
[17] Milton Friedman, The Social Responsibility of Business is to Increase Its Profits, New York Times, September 13, 1970. This echoes Adam Smith’s famous statement in the 1776 Wealth of Nations that “[i]t is not from the benevolence of the butcher, the brewer, or the baker, that we expect our dinner, but from their regard for their own self-interest.” In Ayn Rand’s philosophy, the only alternative to rational self-interest is to sacrifice one’s own interests either for one’s fellow men (altruism) or for supernatural forces (mysticism). See Ayn Rand, The Objectivist Ethics, in The Virtue of Selfishness, Signet (1964).
[18] Aurelien Portuese, European Competition Enforcement and the Digital Economy: The Birthplace of Precautionary Antitrust, Global Antitrust Institute’s Report on the Digital Economy, 597-651.
John Maynard Keynes wrote in his famous General Theory that “[t]he ideas of economists and political philosophers, both when they are right and when they are wrong, are more powerful than is commonly understood. Indeed the world is ruled by little else. Practical men who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist.”
This is true even of those who wish to criticize the effect of economic thinking on society. In his new book, The Economists’ Hour: False Prophets, Free Markets, and the Fracture of Society, New York Times economics reporter Binyamin Appelbaum aims to show that economists have had a detrimental effect on public policy. But the central irony of the Economists’ Hour is that in criticizing the influence of economists over policy, Appelbaum engages in a great deal of economic speculation himself. Appelbaum would discard the opinions of economists in favor of “the lessons of history,” but all he is left with is unsupported economic reasoning.
Much of The Economists’ Hour is about the history of ideas. To his credit, Appelbaum does a fair job describing Anglo-American economic thought post-New Deal until the start of the 21st century. Part I mainly focuses on macroeconomics, detailing the demise of the Keynesian consensus and the rise of the monetarists and supply-siders. If the author were not so cynical about the influence of economists, he might have represented these changes in dominant economic paradigms as an example of how science progresses over time.
Interestingly, Appelbaum often makes the case that the insights of economists have been incredibly beneficial. For instance, in the opening chapter, he describes how Milton Friedman (one of the main protagonists/antagonists of the book, depending on your point of view) and a band of economists (including Martin Anderson and Walter Oi) fought the military establishment and ended the draft. For that, I’m sure most of us born in the past fifty years would be thankful. One suspects that group includes Appelbaum, though he tries to find objections, claiming for example that “by making war more efficient and more remote from the lives of most Americans, the end of the draft may also have made war more likely.”
Appelbaum also notes positively that economists, most prominently Alfred Kahn in the United States, led the charge in a largely beneficial deregulation of the airline and trucking industries in the late 1970s and early 1980s.
Yet, overall, it is clear that Appelbaum believes the “outsized” influence of economists over policymaking itself fails a cost-benefit test. He focuses on the costs of listening too much to economists on antitrust law, trade and development, interest rates and currency, the use of cost-benefit analysis in regulation, and the deregulation of the financial services industry. He sees the deregulation of airlines and trucking as the height of the economists’ hour, which closed with the financial crisis of the late 2000s. His thesis is that (his interpretation of) economists’ notions of efficiency, their (alleged) lack of concern about distributional effects, and their (alleged) myopia have harmed society as their influence over policy has grown.
In his chapter on antitrust, for instance, Appelbaum admits that even though “[w]e live in a new era of giant corporations… there is little evidence consumers are suffering.” Appelbaum argues instead that lax antitrust enforcement has resulted in market concentration harmful to workers, democracy, and innovation. In order to make those arguments, he uncritically cites the work of economists and non-economist legal scholars that make economic claims. A closer inspection of each of these (economic) arguments suggests there is more to the story.
First, recent research questions the narrative that increasing market concentration has resulted in harm to consumers, workers, or society. In their recent paper, “The Industrial Revolution in Services,” Chang-Tai Hsieh of the University of Chicago and Esteban Rossi-Hansberg of Princeton University argue that increasing concentration is primarily due to technological innovation in services, retail, and wholesale sectors. While there has been greater concentration at the national level, this has been accompanied by increased competition locally as national chains expanded to more local markets. Of note, employment has increased in the sectors where national concentration is rising.
The rise in national industry concentration in the US between 1977 and 2013 is driven by a new industrial revolution in three broad non-traded sectors: services, retail, and wholesale. Sectors where national concentration is rising have increased their share of employment, and the expansion is entirely driven by the number of local markets served by firms. Firm employment per market has either increased slightly at the MSA level, or decreased substantially at the county or establishment levels. In industries with increasing concentration, the expansion into more markets is more pronounced for the top 10% firms, but is present for the bottom 90% as well. These trends have not been accompanied by economy-wide concentration. Top U.S. firms are increasingly specialized in sectors with rising industry concentration, but their aggregate employment share has remained roughly stable. We argue that these facts are consistent with the availability of a new set of fixed-cost technologies that enable adopters to produce at lower marginal costs in all markets. We present a simple model of firm size and market entry to describe the menu of new technologies and trace its implications.
In other words, any increase in concentration has been sector-specific and primarily due to more efficient national firms expanding into local markets. This has been associated with lower prices for consumers and more employment opportunities for workers in those sectors.
Appelbaum also looks to Lina Khan’s law journal article, which attacks Amazon for allegedly engaging in predatory pricing, as an example of a new group of young scholars concluding that more antitrust scrutiny is needed. But, as ICLE scholars Alec Stapp and Kristian Stout have pointed out, there is very little evidence Amazon is actually engaging in predatory pricing. Khan’s article challenges the consensus on how to think about predatory pricing and consumer welfare, but her underlying economic theory is premised on Amazon having such a long time horizon that it can lose money on retail for decades (even though it has been profitable for some time), on the theory that someday down the line it can raise prices after it has run all retail competition out.
Second, Appelbaum argues that mergers and acquisitions in the technology sector, especially acquisitions by Google and Facebook of potential rivals, have decreased innovation. Appelbaum’s belief is that innovation is spurred when government forces dominant players “to make room” for future competition. Here he draws in part on claims by some economists that dominant firms sometimes engage in “killer acquisitions”, acquiring nascent competitors in order to reduce competition, to the detriment of consumer welfare. But a simple model in which such acquisitions reduce competition must be balanced by a recognition that many companies, especially technology startups, are incentivized to innovate in part by the possibility that they will be bought out. As noted by the authors of the leading study on the welfare effects of alleged “killer acquisitions”,
“it is possible that the presence of an acquisition channel also has a positive effect on welfare if the prospect of entrepreneurial exit through acquisition (by an incumbent) spurs ex-ante innovation …. Whereas in our model entrepreneurs are born with a project and thus do not have to exert effort to come up with an idea, it is plausible that the prospect of later acquisition may motivate the origination of entrepreneurial ideas in the first place… If, on the other hand, killer acquisitions do increase ex-ante innovation, this potential welfare gain will have to be weighed against the ex-post efficiency loss due to reduced competition. Whether the former positive or the latter negative effect dominates will depend on the elasticity of the entrepreneur’s innovation response.”
This analysis suggests that case-by-case review is necessary, with antitrust plaintiffs required to show evidence that consumer harm is likely to result from a given merger. But shifting the burden to merging entities, as Appelbaum seems to suggest, will come with its own costs. In other words, more economics is needed to understand this area, not less.
Third, Appelbaum’s few concrete examples of consumer harm resulting from “lax antitrust enforcement” in the United States come from airline mergers and telecommunications. In both cases, he points to the greater attention from competition authorities in Europe, compared to the U.S., as the explanation for better outcomes. But neither is a clear example of harm to consumers, and neither can be used to show that Europe’s antitrust framework is superior to that of the United States.
In the case of airline mergers, Appelbaum argues that poor antitrust enforcement has largely given away the gains from deregulation of the industry and that prices have stopped falling, leading to a situation where “[f]or the first time since the dawn of aviation, it is generally cheaper to fly in Europe than in the United States.” This is hard to square with the data.
While the concentration and profits story fits the antitrust populist narrative, other observations run contrary to [this] conclusion. For example, airline prices, as measured by price indexes, show that changes in U.S. and EU airline prices have fairly closely tracked each other until 2014, when U.S. prices began dropping. Sure, airlines have instituted baggage fees, but the CPI includes taxes, fuel surcharges, airport, security, and baggage fees. It’s not obvious that U.S. consumers are worse off in the so-called era of rising concentration.
Our main conclusion is simple: The recent legacy carrier mergers have been associated with pro-competitive outcomes. We find that, on average across all three mergers combined, nonstop overlap routes (on which both merging parties were present pre-merger) experienced statistically significant output increases and statistically insignificant nominal fare decreases relative to non-overlap routes. This pattern also holds when we study each of the three mergers individually. We find that nonstop overlap routes experienced statistically significant output and capacity increases following all three legacy airline mergers, with statistically significant nominal fare decreases following Delta/Northwest and American/USAirways mergers, and statistically insignificant nominal fare decreases following the United/Continental merger…
One implication of our findings is that any fare increases that have been observed since the mergers were very unlikely to have been caused by the mergers. In particular, our results demonstrate pro-competitive output expansions on nonstop overlap routes indicating reductions in quality-adjusted fares and a lack of significant anti-competitive effects on connecting overlaps. Hence, our results demonstrate consumer welfare gains on overlap routes, without even taking credit for the large benefits on non-overlap routes (due to new online service, improved service networks at airports, fleet reallocation, etc.). While some of our results indicate that passengers on non-overlap routes also benefited from the mergers, we leave the complete exploration of such network effects for future research.
In other words, neither part of Appelbaum’s proposition, that Europe has cheaper fares and that concentration has led to worse outcomes for consumers in the United States, appears to be true. Perhaps the influence of economists over antitrust law in the United States has not been so bad after all.
Appelbaum also touts the lower prices for broadband in Europe as an example of better competition policy in telecommunications in Europe versus the United States. While prices for broadband are lower on average in Europe, this obscures the distribution of prices across speed tiers. UPenn Professor Christopher Yoo’s 2014 study, U.S. vs. European Broadband Deployment: What Do the Data Say?, found:
U.S. broadband was cheaper than European broadband for all speed tiers below 12 Mbps. U.S. broadband was more expensive for higher speed tiers, although the higher cost was justified in no small part by the fact that U.S. Internet users on average consumed 50% more bandwidth than their European counterparts.
Population density also helps explain differences between Europe and the United States. The closer together people live, the easier it is to build out infrastructure like broadband Internet, and the United States is considerably more rural than most European countries. As a result, comparisons of prices and speeds need to be adjusted to reflect those differences. For instance, the FCC’s 2018 International Broadband Data Report shows the United States moving from 23rd to 14th place relative to 28 (mostly European) other countries once population density and income are taken into consideration for fixed broadband prices (Model 1 to Model 2). The United States climbs even further, to 6th of the 29 countries studied, if data usage is included (Model 3), and ranks 7th if content quality (i.e., the availability of websites in a country’s language) is taken into consideration (Model 4).
Country             Model 1         Model 2         Model 3         Model 4
                    Price    Rank   Price    Rank   Price    Rank   Price    Rank
Australia           $78.30   28     $82.81   27     $102.63  26     $84.45   23
Austria             $48.04   17     $60.59   15     $73.17   11     $74.02   17
Belgium             $46.82   16     $66.62   21     $75.29   13     $81.09   22
Canada              $69.66   27     $74.99   25     $92.73   24     $76.57   19
Chile               $33.42   8      $73.60   23     $83.81   20     $88.97   25
Czech Republic      $26.83   3      $49.18   6      $69.91   9      $60.49   6
Denmark             $43.46   14     $52.27   8      $69.37   8      $63.85   8
Estonia             $30.65   6      $56.91   12     $81.68   19     $69.06   12
Finland             $35.00   9      $37.95   1      $57.49   2      $51.61   1
France              $30.12   5      $44.04   4      $61.96   4      $54.25   3
Germany             $36.00   12     $53.62   10     $75.09   12     $66.06   11
Greece              $35.38   10     $64.51   19     $80.72   17     $78.66   21
Iceland             $65.78   25     $73.96   24     $94.85   25     $90.39   26
Ireland             $56.79   22     $62.37   16     $76.46   14     $64.83   9
Italy               $29.62   4      $48.00   5      $68.80   7      $59.00   5
Japan               $40.12   13     $53.58   9      $81.47   18     $72.12   15
Latvia              $20.29   1      $42.78   3      $63.05   5      $52.20   2
Luxembourg          $56.32   21     $54.32   11     $76.83   15     $72.51   16
Mexico              $35.58   11     $91.29   29     $120.40  29     $109.64  29
Netherlands         $44.39   15     $63.89   18     $89.51   21     $77.88   20
New Zealand         $59.51   24     $81.42   26     $90.55   22     $76.25   18
Norway              $88.41   29     $71.77   22     $103.98  27     $96.95   27
Portugal            $30.82   7      $58.27   13     $72.83   10     $71.15   14
South Korea         $25.45   2      $42.07   2      $52.01   1      $56.28   4
Spain               $54.95   20     $87.69   28     $115.51  28     $106.53  28
Sweden              $52.48   19     $52.16   7      $61.08   3      $70.41   13
Switzerland         $66.88   26     $65.01   20     $91.15   23     $84.46   24
United Kingdom      $50.77   18     $63.75   17     $79.88   16     $65.44   10
United States       $58.00   23     $59.84   14     $64.75   6      $62.94   7
Average             $46.55          $61.70          $80.24          $73.73
Model 1: Unadjusted for demographics and content quality
Model 2: Adjusted for demographics but not content quality
Model 3: Adjusted for demographics and data usage
Model 4: Adjusted for demographics and content quality
Furthermore, investment and buildout are other important indicators of how well the United States is doing compared to Europe. Appelbaum fails to consider all of these factors when comparing the European model of telecommunications to the United States’. Yoo’s conclusion is an appropriate response:
The increasing availability of high-quality data has the promise to effect a sea change in broadband policy. Debates that previously relied primarily on anecdotal evidence and personal assertions of visions for the future can increasingly take place on a firmer empirical footing.
In particular, these data can resolve the question whether the U.S. is running behind Europe in the broadband race or vice versa. The U.S. and European mapping studies are clear and definitive: These data indicate that the U.S. is ahead of Europe in terms of the availability of Next Generation Access (NGA) networks. The U.S. advantage is even starker in terms of rural NGA coverage and with respect to key technologies such as FTTP and LTE.
Empirical analysis, both in terms of top-level statistics and in terms of eight country case studies, also sheds light into the key policy debate between facilities-based competition and service-based competition. The evidence again is fairly definitive, confirming that facilities-based competition is more effective in terms of driving broadband investment than service-based competition.
In other words, Appelbaum relies on bad data to come to his conclusion that listening to economists has been wrong for American telecommunications policy. Perhaps it is his economic assumptions that need to be questioned.
Conclusion
At the end of the day, in antitrust, environmental regulation, and other areas he reviewed, Appelbaum does not believe economic efficiency should be the primary concern anyway. For instance, he repeats the common historical argument that the purpose of the Sherman Act was to protect small businesses from bigger, and often more efficient, competitors.
So applying economic analysis to Appelbaum’s claims may itself be an illustration of caring too much about economic models instead of learning “the lessons of history.” But Appelbaum inescapably assumes economic models of his own. And those models appear less grounded in empirical data than those of the economists he derides. There is no escaping mental models in understanding the world. The question is only whether we are willing to change our minds when a better way of understanding the world presents itself. As Keynes is purported to have said, “When the facts change, I change my mind. What do you do, sir?”
For all the criticism of economists, there at least appears to be a willingness among them to change their minds, as illustrated by the increasing appreciation for anti-inflationary monetary policy among macroeconomists described in The Economists’ Hour. The question that remains is whether Appelbaum and other critics of the economic way of thinking are as willing to reconsider their strongly held views when they conflict with the evidence.
The populists are on the march, and as the 2018 campaign season gets rolling we’re witnessing more examples of political opportunism bolstered by economic illiteracy aimed at increasingly unpopular big tech firms.
The latest example comes in the form of a new investigation of Google opened by Missouri’s Attorney General, Josh Hawley. Mr. Hawley — a Republican who, not coincidentally, is running for Senate in 2018 — alleges various consumer protection violations and unfair competition practices.
But while Hawley’s investigation may jump start his campaign and help a few vocal Google rivals intent on mobilizing the machinery of the state against the company, it is unlikely to enhance consumer welfare — in Missouri or anywhere else.
According to the press release issued by the AG’s office:
[T]he investigation will seek to determine if Google has violated the Missouri Merchandising Practices Act—Missouri’s principal consumer-protection statute—and Missouri’s antitrust laws.
The business practices in question are Google’s collection, use, and disclosure of information about Google users and their online activities; Google’s alleged misappropriation of online content from the websites of its competitors; and Google’s alleged manipulation of search results to preference websites owned by Google and to demote websites that compete with Google.
Mr. Hawley’s justification for his investigation is a flourish of populist rhetoric:
We should not just accept the word of these corporate giants that they have our best interests at heart. We need to make sure that they are actually following the law, we need to make sure that consumers are protected, and we need to hold them accountable.
But Hawley’s “strong” concern is based on tired retreads of the same faulty arguments that Google’s competitors (Yelp chief among them) have been plying for the better part of a decade. In fact, all of his apparent grievances against Google were exhaustively scrutinized by the FTC and ultimately rejected or settled in separate federal investigations in 2012 and 2013.
The antitrust issues
To begin with, AG Hawley references the EU antitrust investigation as evidence that
this is not the first time Google’s business practices have come into question. In June, the European Union issued Google a record $2.7 billion antitrust fine.
True enough — and yet, misleadingly incomplete. Missing from Hawley’s recitation of Google’s antitrust rap sheet are the following investigations, which were closed without any finding of liability related to Google Search, Android, Google’s advertising practices, etc.:
United States FTC, 2013. The FTC found no basis to pursue a case after a two-year investigation: “Challenging Google’s product design decisions in this case would require the Commission — or a court — to second-guess a firm’s product design decisions where plausible procompetitive justifications have been offered, and where those justifications are supported by ample evidence.” The investigation did result in a consent order regarding patent licensing unrelated in any way to search and a voluntary commitment by Google not to engage in certain search-advertising-related conduct.
South Korea FTC, 2013. The KFTC cleared Google after a two-year investigation. It opened a new investigation in 2016, but, as I have discussed, “[i]f anything, the economic conditions supporting [the KFTC’s 2013] conclusion have only gotten stronger since.”
Canada Competition Bureau, 2016. The CCB closed a three-year investigation into Google’s search practices without taking any action.
Similar investigations have been closed without findings of liability (or simply lie fallow) in a handful of other countries (e.g., Taiwan and Brazil) and even several states (e.g., Ohio and Texas). In fact, of all the jurisdictions that have investigated Google, only the EU and Russia have actually assessed liability.
As Beth Wilkinson, outside counsel to the FTC during the Google antitrust investigation, noted upon closing the case:
Undoubtedly, Google took aggressive actions to gain advantage over rival search providers. However, the FTC’s mission is to protect competition, and not individual competitors. The evidence did not demonstrate that Google’s actions in this area stifled competition in violation of U.S. law.
The CCB was similarly unequivocal in its dismissal of the very same antitrust claims Missouri’s AG seems intent on pursuing against Google:
The Bureau sought evidence of the harm allegedly caused to market participants in Canada as a result of any alleged preferential treatment of Google’s services. The Bureau did not find adequate evidence to support the conclusion that this conduct has had an exclusionary effect on rivals, or that it has resulted in a substantial lessening or prevention of competition in a market.
Unfortunately, rather than follow the lead of these agencies, Missouri’s investigation appears to have more in common with Russia’s effort to prop up a favored competitor (Yandex) at the expense of consumer welfare.
The Yelp Claim
Take Mr. Hawley’s focus on “Google’s alleged misappropriation of online content from the websites of its competitors,” for example, which cleaves closely to what should become known henceforth as “The Yelp Claim.”
While the sordid history of Yelp’s regulatory crusade against Google is too long to canvas in its entirety here, the primary elements are these:
Once upon a time (in 2005), Google licensed Yelp’s content for inclusion in its local search results. In 2007 Yelp ended the deal. By 2010, and without a license from Yelp (asserting fair use), Google displayed small snippets of Yelp’s reviews that, if clicked on, led to Yelp’s site. Even though Yelp received more user traffic from those links as a result, Yelp complained, and Google removed Yelp snippets from its local results.
In its 2013 agreement with the FTC, Google guaranteed that Yelp could opt-out of having even snippets displayed in local search results by committing Google to:
make available a web-based notice form that provides website owners with the option to opt out from display on Google’s Covered Webpages of content from their website that has been crawled by Google. When a website owner exercises this option, Google will cease displaying crawled content from the domain name designated by the website owner….
The commitments also ensured that websites (like Yelp) that opt out would nevertheless remain in Google’s general index.
Ironically, Yelp now claims in a recent study that Google should show not only snippets of Yelp reviews, but even more of Yelp’s content. (For those interested, my colleagues and I have a paper explaining why the study’s claims are spurious).
The key bit here, of course, is that Google stopped pulling content from Yelp’s pages to use in its local search results, and that it implemented a simple mechanism for any other site wishing to opt out of the practice to do so.
It’s difficult to imagine why Missouri’s citizens might require more than this to redress alleged anticompetitive harms arising from the practice.
Perhaps AG Hawley thinks consumers would be better served by an opt-in mechanism? Of course, this is absurd, particularly if any of Missouri’s citizens — and their businesses — have websites. Most websites want at least some of their content to appear on Google’s search results pages as prominently as possible — see this and this, for example — and making this information more accessible to users is why Google exists.
To be sure, some websites may take issue with how much of their content Google features and where it places that content. But the easy opt out enables them to prevent Google from showing their content in a manner they disapprove of. Yelp is an outlier in this regard because it views Google as a direct competitor, especially to the extent it enables users to read some of Yelp’s reviews without visiting Yelp’s pages.
For Yelp and a few similarly situated companies the opt out suffices. But for almost everyone else the opt out is presumably rarely exercised, and any more-burdensome requirement would just impose unnecessary costs, harming instead of helping their websites.
The privacy issues
The Missouri investigation also applies to “Google’s collection, use, and disclosure of information about Google users and their online activities.” More pointedly, Hawley claims that “Google may be collecting more information from users than the company was telling consumers….”
Presumably this would come as news to the FTC, which, with a much larger staff and far greater expertise, currently has Google under a 20-year consent order (with some 15 years left to go) governing its privacy disclosures and information-sharing practices, thus ensuring that the agency engages in continual, and well-informed, oversight of precisely these issues.
The FTC’s consent order with Google (the result of an investigation into conduct involving Google’s short-lived Buzz social network, allegedly in violation of Google’s privacy policies), requires the company to:
“[N]ot misrepresent in any manner, expressly or by implication… the extent to which respondent maintains and protects the privacy and confidentiality of any [user] information…”;
“Obtain express affirmative consent from” users “prior to any new or additional sharing… of the Google user’s identified information with any third party” if doing so would in any way deviate from previously disclosed practices;
“[E]stablish and implement, and thereafter maintain, a comprehensive privacy program that is reasonably designed to [(1)] address privacy risks related to the development and management of new and existing products and services for consumers, and (2) protect the privacy and confidentiality of [users’] information”; and
Along with a laundry list of other reporting requirements, “[submit] biennial assessments and reports [] from a qualified, objective, independent third-party professional…, approved by the [FTC] Associate Director for Enforcement, Bureau of Consumer Protection… in his or her sole discretion.”
What, beyond the incredibly broad scope of the FTC’s consent order, could the Missouri AG’s office possibly hope to obtain from an investigation?
Google is already expressly required to provide privacy reports to the FTC every two years. It must provide several of the items Hawley demands in his CID to the FTC; others are required to be made available to the FTC upon demand. What materials could the Missouri AG collect beyond those the FTC already receives, or has the authority to demand, under its consent order?
And what manpower and expertise could Hawley apply to those materials that would even begin to equal, let alone exceed, those of the FTC?
That penalty, the $22.5 million civil fine the FTC imposed on Google in 2012 for circumventing privacy settings in Apple’s Safari browser in violation of the Buzz consent order, is of undeniable import, not only for its amount (at the time it was the largest in FTC history) and for stemming from alleged problems completely unrelated to the issue underlying the initial action, but also because it was so easy to obtain. Having put Google under a 20-year consent order, the FTC need only prove (or threaten to prove) contempt of the consent order, rather than the specific elements of a new violation of the FTC Act, to bring the company to heel. The former is far easier to prove, and comes with the ability to impose (significant) damages.
So what’s really going on in Jefferson City?
While states are, of course, free to enforce their own consumer protection laws to protect their citizens, there is little to be gained — other than cold hard cash, perhaps — from pursuing cases that, at best, duplicate enforcement efforts already undertaken by the federal government (to say nothing of innumerable other jurisdictions).
To take just one relevant example, in 2013 — almost a year to the day following the court’s approval of the settlement in the FTC’s case alleging Google’s violation of the Buzz consent order — 37 states plus DC (not including Missouri) settled their own, follow-on litigation against Google on the same facts. Significantly, the terms of the settlement did not impose upon Google any obligation not already a part of the Buzz consent order or the subsequent FTC settlement — but it did require Google to fork over an additional $17 million.
Not only is there little to be gained from yet another ill-conceived antitrust campaign, there is much to be lost. Such massive investigations require substantial resources to conduct, and the opportunity cost of doing so may mean real consumer issues go unaddressed. The Consumer Protection Section of the Missouri AG’s office says it receives some 100,000 consumer complaints a year. How many of those will have to be put on the back burner to accommodate an investigation like this one?
Even when not politically motivated, state enforcement of CPAs is not an unalloyed good. In fact, empirical studies of state consumer protection actions like the one contemplated by Mr. Hawley have shown that such actions tend toward overreach — good for lawyers, perhaps, but expensive for taxpayers and often detrimental to consumers. According to a recent study by economists James Cooper and Joanna Shepherd:
[I]n recent decades, this thoughtful balance [between protecting consumers and preventing the proliferation of lawsuits that harm both consumers and businesses] has yielded to damaging legislative and judicial overcorrections at the state level with a common theoretical mistake: the assumption that more CPA litigation automatically yields more consumer protection…. [C]ourts and legislatures gradually have abolished many of the procedural and remedial protections designed to cabin state CPAs to their original purpose: providing consumers with redress for actual harm in instances where tort and contract law may provide insufficient remedies. The result has been an explosion in consumer protection litigation, which serves no social function and for which consumers pay indirectly through higher prices and reduced innovation.
AG Hawley’s investigation seems almost tailored to duplicate the FTC’s extensive efforts, and to score political points. Or perhaps Mr. Hawley is just perturbed that Missouri missed out on its share of the $17 million multistate settlement in 2013.
Which raises the spectre of a further problem with the Missouri case: “rent extraction.”
It’s no coincidence that Mr. Hawley’s investigation follows closely on the heels of Yelp’s recent letter to the FTC and every state AG (as well as four members of Congress and the EU’s chief competition enforcer, for good measure) alleging that Google had re-started scraping Yelp’s content, thus violating the terms of its voluntary commitments to the FTC.
It’s also no coincidence that Yelp “notified” Google of the problem only by lodging a complaint with every regulator who might listen rather than by actually notifying Google. But an action like the one Missouri is undertaking, not resolution of the issue, is almost certainly exactly what Yelp intended, and AG Hawley is playing right into Yelp’s hands.
Google, for its part, strongly disputes Yelp’s allegation, and, indeed, has — even according to Yelp — complied fully with Yelp’s request to keep its content off Google Local and other “vertical” search pages since 18 months before Google entered into its commitments with the FTC. Google claims that the recent scraping was inadvertent, and that it would happily have rectified the problem if only Yelp had actually bothered to inform Google.
Indeed, Yelp’s allegations don’t really pass the smell test: That Google would suddenly change its practices now, in violation of its commitments to the FTC and at a time of extraordinarily heightened scrutiny by the media, politicians of all stripes, competitors like Yelp, the FTC, the EU, and a host of other antitrust or consumer protection authorities, strains belief.
But, again, identifying and resolving an actual commercial dispute was likely never the goal. As a recent, fawning New York Times article on “Yelp’s Six-Year Grudge Against Google” highlights (focusing in particular on Luther Lowe, now Yelp’s VP of Public Policy and the author of the letter):
Yelp elevated Mr. Lowe to the new position of director of government affairs, a job that more or less entails flying around the world trying to sic antitrust regulators on Google. Over the next few years, Yelp hired its first lobbyist and started a political action committee. Recently, it has started filing complaints in Brazil.
Missouri, in other words, may just be carrying Yelp’s water.
The one clear lesson of the decades-long Microsoft antitrust saga is that companies that struggle to compete in the market can profitably tax their rivals by instigating antitrust actions against them. As Milton Friedman admonished, decrying “the business community’s suicidal impulse” to invite regulation:
As a believer in the pursuit of self-interest in a competitive capitalist system, I can’t blame a businessman who goes to Washington [or is it Jefferson City?] and tries to get special privileges for his company.… Blame the rest of us for being so foolish as to let him get away with it.
Taking a tough line on Silicon Valley firms in the midst of today’s anti-tech-company populist resurgence may help with the electioneering in Mr. Hawley’s upcoming bid for a US Senate seat and serve Yelp, but it doesn’t offer any clear, actual benefits to Missourians. As I’ve wondered before: “Exactly when will regulators be a little more skeptical of competitors trying to game the antitrust laws for their own advantage?”
Your industry, the computer industry, moves so much more rapidly than the legal process, that by the time this suit is over, who knows what the shape of the industry will be.
Though the legal process seems to be moving quickly in the cases of Dow/DuPont, ChemChina/Syngenta, and Bayer/Monsanto, seed technology is moving fast as well. With recent breakthroughs in gene editing, seed technology will be more dynamic, cheaper, and likely subject to far less regulation than the current transgenic technology.
GMO seeds produced using current techniques are primarily designed for specific insect control and herbicide tolerance. Gene editing has the potential to go much further by creating drought and disease tolerance as well as improving yield. It’s difficult to know precisely how this new technology will be integrated into the industry, but its effects are likely to promote innovation from outside the three large firms that will result from the mergers and acquisitions mentioned above.
As in the food industry, small gene editing startups will be able to develop new traits with the intention of being acquired by one of the large firms in the industry. By allowing small firms to enter the seed biotech industry, gene editing will provide the sort of external innovation Joanna Shepherd notes is so important in understanding antitrust cases.
I have been a critic of the Federal Trade Commission’s investigation into Google since it was a gleam in its competitors’ eyes—skeptical that there was any basis for a case, and concerned about the effect on consumers, innovation and investment if a case were brought.
While it took the Commission more than a year and a half to finally come to the same conclusion, ultimately the FTC had no choice but to close the case that was a “square peg, round hole” problem from the start.
Now that the FTC’s investigation has concluded, an examination of the nature of the markets in which Google operates illustrates why this crusade was ill-conceived from the start. In short, the “realities on the ground” strongly challenged the logic and relevance of many of the claims put forth by Google’s critics. Nevertheless, the politics are such that their nonsensical claims continue, in different forums, with competitors continuing to hope that they can wrangle a regulatory solution to their competitive problem.
The case against Google rested on certain assumptions about the functioning of the markets in which Google operates. Because these are tech markets, constantly evolving and complex, most assumptions about the scope of these markets and competitive effects within them are imperfect at best. But there are some attributes of Google’s markets—conveniently left out of the critics’ complaints— that, properly understood, painted a picture for the FTC that undermined the basic, essential elements of an antitrust case against the company.
That case was seriously undermined by the nature and extent of competition in the markets the FTC was investigating. Most importantly, casual references to a “search market” and a “search advertising market” aside, Google actually competes in the market for targeted eyeballs: a market in which firms vie to put targeted ads in front of interested users. Search offers a valuable opportunity for targeting an advertiser’s message, but it is by no means alone: there are myriad (and growing) other mechanisms to access consumers online.
Consumers use Google because they are looking for information — but there are lots of ways to do that. There are plenty of apps that circumvent Google, and consumers are increasingly going to specialized sites to find what they are looking for. The search market, if a distinct one ever existed, has evolved into an online information market that includes far more players than those who just operate traditional search engines.
We live in a world where what prevails today won’t prevail tomorrow. The tech industry is constantly changing, and it is the height of folly (and a serious threat to innovation and consumer welfare) to constrain the activities of firms competing in such an environment by pigeonholing the market. In other words, in a proper market, Google looks significantly less dominant. More important, perhaps, as search itself evolves, and as Facebook, Amazon and others get into the search advertising game, Google’s strong position even in the overly narrow “search market” is far from unassailable.
This is progress — creative destruction — not regress, and such changes should not be penalized.
Another common refrain from Google’s critics was that Google’s access to immense amounts of data used to increase the quality of its targeting presented a barrier to competition that no one else could match, thus protecting Google’s unassailable monopoly. But scale comes in lots of ways.
Even if scale doesn’t come cheaply, the fact that challenging firms might have to spend as much as (or, in this case, almost certainly less than) Google did in order to replicate its success is not a “barrier to entry” that requires an antitrust remedy. Data about consumer interests are widely available (despite efforts to reduce the availability of such data in the name of protecting “privacy”, which might actually create barriers to entry). It’s never been the case that a firm must generate its own inputs for every product it produces, and there’s no reason to suggest search or advertising is any different.
Additionally, to defend a claim of monopolization, it is generally required to show that the alleged monopolist enjoys protection from competition through barriers to entry. In Google’s case, the barriers alleged were illusory. Bing and other recent entrants in the general search business have enjoyed success precisely because they were able to obtain the inputs (in this case, data) necessary to develop competitive offerings.
Meanwhile unanticipated competitors like Facebook, Amazon, Twitter and others continue to knock at Google’s metaphorical door, all of them entering into competition with Google using data sourced from creative sources, and all of them potentially besting Google in the process. Consider, for example, Amazon’s recent move into the targeted advertising market, competing with Google to place ads on websites across the Internet, but with the considerable advantage of being able to target ads based on searches, or purchases, a user has made on Amazon—the world’s largest product search engine.
Now that the investigation has concluded, we come away with two major findings. First, the online information market is dynamic, and it is a fool’s errand to identify the power or significance of any player in these markets based on data available today — data that is already out of date between the time it is collected and the time it is analyzed.
Second, each development in the market – whether offered by Google or its competitors and whether facilitated by technological change or shifting consumer preferences – has presented different, novel and shifting opportunities and challenges for companies interested in attracting eyeballs, selling ad space and data, earning revenue and obtaining market share. To say that Google dominates “search” or “online advertising” missed the mark precisely because there was simply nothing especially antitrust-relevant about either search or online advertising. Because of their own unique products, innovations, data sources, business models, entrepreneurship and organizations, all of these companies have challenged and will continue to challenge the dominant company — and the dominant paradigm — in a shifting and evolving range of markets.
It would be churlish not to give credit where credit is due—and credit is due the FTC. I continue to think the investigation should have ended before it began, of course, but the FTC is to be commended for reaching this result amidst an overwhelming barrage of pressure to “do something.”
But there are others in this sadly politicized mess for whom neither the facts nor the FTC’s extensive investigation process (nor the finer points of antitrust law) are enough. Like my four-year-old daughter, they just “want what they want,” and they will stamp their feet until they get it.
While competitors will be competitors, using the regulatory system to accomplish what they can’t in the market, they do a great disservice in doing so to the very customers they purport to be protecting. As Milton Friedman famously said, in decrying “The Business Community’s Suicidal Impulse”:
As a believer in the pursuit of self-interest in a competitive capitalist system, I can’t blame a businessman who goes to Washington and tries to get special privileges for his company.… Blame the rest of us for being so foolish as to let him get away with it.
I do blame businessmen when, in their political activities, individual businessmen and their organizations take positions that are not in their own self-interest and that have the effect of undermining support for free private enterprise. In that respect, businessmen tend to be schizophrenic. When it comes to their own businesses, they look a long time ahead, thinking of what the business is going to be like 5 to 10 years from now. But when they get into the public sphere and start going into the problems of politics, they tend to be very shortsighted.
Ironically, Friedman was writing about the antitrust persecution of Microsoft by its rivals back in 1999:
Is it really in the self-interest of Silicon Valley to set the government on Microsoft? Your industry, the computer industry, moves so much more rapidly than the legal process, that by the time this suit is over, who knows what the shape of the industry will be.… [Y]ou will rue the day when you called in the government.
Among Microsoft’s chief tormentors was Gary Reback. He has spent the last few years beating the drum against Google, but singing from the same songbook. Reback recently told the Washington Post, “if a settlement were to be proposed that didn’t include search, the institutional integrity of the FTC would be at issue.” Actually, no, it wouldn’t. As a matter of fact, the opposite is true. It’s hard to imagine an agency under more pressure, from more quarters (including the Hill), to bring a case around search. Doing so would at least raise the possibility that it was acting because of pressure and not the merits of the case. But not doing so in the face of such pressure? That can almost only be a function of institutional integrity.
As another of Google’s most-outspoken critics, Tom Barnett, noted:
[The FTC has] really put [itself] in the position where they are better positioned now than any other agency in the U.S. is likely to be in the immediate future to address these issues. I would encourage them to take the issues as seriously as they can. To the extent that they concur that Google has violated the law, there are very good reasons to try to address the concerns as quickly as possible.
As Barnett acknowledges, there is no question that the FTC investigated these issues more fully than anyone. The agency’s institutional culture and its committed personnel, together with political pressure, media publicity and endless competitor entreaties, virtually ensured that the FTC took the issues “as seriously as they [could]” – in fact, as seriously as anyone else in the world. There is simply no reasonable way to criticize the FTC for being insufficiently thorough in its investigation and conclusions.
Nor is there a basis for claiming that the FTC is “standing in the way” of the courts’ ability to review the issue, as Scott Cleland contends in an op-ed in the Hill. Frankly, this is absurd. Google’s competitors have spent millions pressuring the FTC to bring a case. But the FTC isn’t remotely the only path to the courts. As Commissioner Rosch admonished,
They can darn well bring [a case] as a private antitrust action if they think their ox is being gored instead of free-riding on the government to achieve the same result.
Competitors have already beaten a path to the DOJ’s door, and investigations are still pending in the EU, Argentina, several US states, and elsewhere. That the agency that has leveled the fullest and best-informed investigation has concluded that there is no “there” there should give these authorities pause, but, sadly for consumers who would benefit from an end to competitors’ rent seeking, nothing the FTC has done actually prevents courts or other regulators from having a crack at Google.
The case against Google has received more attention from the FTC than the merits of the case ever warranted. It is time for Google’s critics and competitors to move on.
Here. The most interesting part is Caplan’s take on why Friedman stands apart from other free-market thinkers:
Why does Friedman stand apart from my other idols? In the end, it’s the absence of obscurantism. Friedman makes his points as simply, clearly, and bluntly as possible. He never rambles on. He never hides behind academic jargon. He almost never makes bizarre philosophical assertions to explain away obvious facts. He never tries to win fair weather converts by speaking in vague generalities about “liberty.” Friedman never turned out to have feet of clay, because he played every game barefoot.
Many libertarians look down on Friedman for his moderation and statist compromises. I’m about as radical as libertarians come, but these critics have never impressed me. By any normal standard, Friedman was a very radical libertarian indeed. If you’re going to take points off for a few deviations, remember to give him extra credit for earnestly trying to convince people who didn’t already agree with him. His arguments for liberty weren’t just intellectually compelling; he made them with humor and common decency. Friedman was a paragon of libertarian friendliness – a model of the nobility we should all aspire to.
In a just world, we’d all be Friedmanites now. But don’t be bitter that he wasn’t more successful. Rejoice that a century ago, Milton Friedman was born – and forever enriched the world of ideas.
Friedman’s economics was in many ways an economics of the poor. As early as 1955 he was publishing articles about the failure of government-monopoly schools to properly educate the children of the poor and marginalized, and he proposed a system of vouchers to allow the disadvantaged to pursue the same educational opportunities that their better-off neighbors enjoyed. Liberals grimaced to hear him say it, but he correctly identified the minimum-wage requirement as “one of the most, if not the most anti-black law on the statute books,” and referred to the elevated black unemployment rate as “a disgrace and scandal.” He spent much of his public life arguing that “there has never been a more effective machine for the elimination of poverty than the free-enterprise system and a free market.” By contrast, he described the welfare state as “a machine for producing poor people.” His most influential work (conducted together with his colleague Anna Jacobson Schwartz) was on the causes of the Great Depression, making a powerful case that it was not market failure but government policy, specifically Federal Reserve policy, that turned a normal economic downturn into an epic catastrophe.
On November 3, the president of the United States spoke at the Hotel Lowry in St. Paul, Minnesota, in what was billed repeatedly as a bipartisan address. The president ridiculed reactionaries in Congress who, he claimed, represented the wealthy and the powerful, and whose “theory seems to be that if these groups are prosperous, they will pass along some of their prosperity to the rest of us.” The president drew a direct line between prosperity and increased “fairness” in the distribution of wealth: “We know that the country will achieve economic stability and progress only if the benefits of our production are widely distributed among all its citizens.” The president then laid out an ambitious agenda focused on creating jobs, improving education, expanding health care, and ensuring equal rights for all.
Addressing his opponents in Congress, the president said “[t]here are people who contend that . . . programs for the general welfare will cost too much,” but argued “[t]he expenditures which we make today for the education, health, and security of our citizens are investments in the future of our country . . . .” Giving a specific, and favorite, example, the president argued that government investments in the area of energy are “good investments in the future of this great country.” Building on the meme about great countries doing great national projects, he praised the Louisiana Purchase, which brought Minnesota into the Union, and compared congressional critics of his past and proposed spending to those who argued President Jefferson should not have been allowed to borrow to buy “Louisiana” from Napoleon.
The speech was given on November 3, 1949, and the president was Harry Truman. But it could just as easily have come from the mouth of our current president, even though President Obama’s December 2011 speech in Osawatomie, Kansas, was ostensibly invoking President Teddy Roosevelt. In fact, we could play a game – call it “Harry or Barry?” (as President Obama was known for most of his life) – to see how little has changed since 1949.
As a libertarian, I mostly concur in the critique of occupational licensure made famous by (among others) Milton Friedman. For the most part, licensure is a consumer-unfriendly affair that protects incumbent practitioners from competition, locks out promising new methods of service provision, and interferes with voluntary dealings between professional and client. It is dubious enough as applied to occupational groups such as doctors and plumbers, and downright ridiculous (as the Institute for Justice keeps reminding us) as applied to groups like cosmetologists, florists and interior designers.
But lawyers are different. No, seriously — they are. Most other professional groups deal with a clientele that, even if unsophisticated, is at least participating voluntarily and exercising a choice of providers. This is true of lawyers as well for the majority of the services they provide — advising on the state of the law, drafting contracts, negotiating business deals, devising estate plans. But lawyers also are given a litigator’s hunting license to initiate compulsory civil process against unwilling (often wholly innocent) opponents and third parties, and deregulating that power is a good bit more problematic.
The coercive powers wielded by private lawyers are more akin to the powers wielded by prosecutors and other government officials than to the powers wielded by, say, optometrists or dentists. They include the power not only to initiate a lawsuit — something that, even if disposed of at an early stage, can inflict hundreds of thousands of dollars of financial cost (plus reputational damage and distraction) on an adversary — but also the power to pursue discovery under our remarkably broad American rules, an extraordinarily coercive and invasive process by which opponents are compelled to hand over private emails, memos and doodles for hostile scrutiny, attend and endure hostile depositions in person, undertake vast file searches at an unreimbursed cost that can exceed the value in controversy in the suit, and more. I am not convinced that deregulating the power to commence this sort of civil process and demand money from an opponent for calling it off — in effect, to widen the existing pro se exemption so as to allow anyone to proceed pro se on behalf of anyone else they can get to sign up — would reduce the amount of unjustified legal aggression in a system that already has plenty of it and to spare.
It will not do to say that abuse of the power to litigate can be sorted out after the fact, as it allegedly was in the cases of Scruggs and Lerach, years after the ethical lapses began. Much experience suggests that sanctions, disbarments, countersuits and prosecutions are typically belated and spotty as it is (for many examples, check my website Overlawyered). True, abusive lawyering would be far better checked if we had loser-pays, a strong Rule 11, serious constraints on the use of discovery for cost infliction, and so forth. But we don’t — and the Law Lobby will not let us win those remedies any time soon.
The way forward might be to split the tasks of a lawyer in two, moving to deregulate the advisory and document-preparation functions (which could indeed be a way of saving consumers large sums) while continuing to apply appropriate scrutiny to those in the profession who presume to wield coercive litigation powers. Although the British separation of highly regulated barristers from less highly regulated solicitors does not precisely track this distinction, it is worth keeping in mind as a possible model for a division between an “outer” legal profession whose operation might be entrusted to general business principles and an “inner” group of professionals of whom more is expected, as we expect more ethically and legally from judges themselves, public prosecutors, and others cloaked in public authority.
I blogged a bit about the MetroPCS net neutrality complaint a few weeks ago. The complaint, you may recall, targeted the MetroPCS menu of packages and pricing offered to its consumers. The idea that MetroPCS, about one-tenth the size of Verizon, has market power is nonsense. As my colleague Tom Hazlett explains, restrictions on MetroPCS in the name of net neutrality are likely to harm consumers, not help them:
Indeed, low-cost prepaid plans of MetroPCS are popular with users who want to avoid long-term contracts and are price sensitive. Half its customers are ‘cord cutters’, subscribers whose only phone is wireless and usage is intense. Voice minutes per month average about 2,000, more than double that of larger carriers. The $40 plan is cheap because it’s inexpensively delivered using 2G technology. It is not broadband (topping out, in third party reviews, at just 100 kbps), and has software and capacity issues. In general, voice over internet is not supported by the handsets and video streaming is not available on the network. The carrier deals with those limitations in three ways.
First, the $40 per month price tag extends a fat discount. Unlimited everything can cost $120 on faster networks. Second, it has also deployed new 4G technology, offering both a $40 tier similar to the 2G product (no video streaming), but also a pumped up version with video streaming, VoIP and everything else – without data caps – for $60 a month. Of course, this network has far larger capacity and is much zippier (reliable at 700 kbps). PC World rated the full-blown 4G service “dirt cheap”.
Third, to upgrade the cheaper-than-dirt 2G experience, MetroPCS got Google – owner of YouTube – to compress their videos for delivery over the older network. This allowed the mobile carrier to extend unlimited wildly popular YouTube content to its lowest tier subscribers. Busted! Favouring YouTube is said to violate neutrality. …
So much for the “consumer welfare” case for net neutrality in practice. Of course, the FCC mandate is one of “public interest,” and not just consumer welfare. So — perhaps another case can be made to defend the MetroPCS complaint? Malkia Cyril from the Center for Media Justice offers just such a case in a recent blog post. The problem with MetroPCS satisfying consumer demand for low-cost prepaid plans? Cyril argues that “[l]owering the price for partial Internet service while calling it ‘unlimited access’ is a fraudulent gimmick that Metro PCS hopes will confuse low-income consumers into buying its phones,” and that it is “un-American to give low-income communities substandard Internet service that creates barriers to economic opportunity and democratic engagement.”
Cyril is wrong that competition for low-price plans makes low-income consumers worse off. The claim is the same one often made in defense of restricting the access of low-income individuals to other products (and especially consumer credit) on the ground that their purchasing decisions cannot be trusted, i.e., that the Federal Communications Commission should substitute its own judgment for the revealed preferences of those 8 million consumers. This is precisely the type of claim on which a little economic analysis can shed a great deal of light.
David Honig, co-founder of the liberal Minority Media & Telecommunications Council, makes the relevant points (HT: Hazlett):
One of the wireless carriers is offering three packages, all of them VOIP-enabled (so they can get services like Skype) with free access to any lawful website, and all of them clearly labeled:
• Plan A: $40, with no multimedia streaming (that is, no movie downloads such as Netflix, porn, etc.)
• Plan B: $50, with metered multimedia streaming.
• Plan C: $60, with unlimited multimedia streaming.
Could you decide which of these three packages meets your needs?
Or is all this just too confusing? Cyril thinks so.
She writes that Plan A “will confuse low-income consumers” into buying this carrier’s cell phones because they won’t be able to figure out that “if you want the WHOLE Internet, you just have to pay more.”
Well, actually you don’t have to pay more. The most expensive option — Plan C — costs $40 less than the least expensive offering of any of the other carriers. And if you later discover you don’t like Plan A, you can upgrade to Plan B or Plan C with no penalty, or you can pay the $100 it would cost to get service similar to Plan C from competing carriers. And you can do that immediately, since none of these plans has an early termination fee. What’s wrong with paying less for the particular services you want?
Cyril is making a common mistake among us lefties when it comes to low income people — she is being paternalistic. Those poor poor people. They can’t think for themselves, so the government has to make decisions for them. In this case, Cyril argues, the FCC should outlaw Plan A (and maybe Plan B) and require every carrier to offer only full-menu service like Plan C. All this in the name of “net neutrality.”
If I’ve learned anything from my 45 years working with low income folks, it’s this: they’re intelligent and they’re resourceful. They have to be in order to survive. They don’t appreciate condescension or sloganeering in their name. And they have sense enough to know whether they’d rather use an extra $20 a month for movie downloads or for movie tickets — and would rather get a discount than pay for services they do not want or need. …
What the FCC doesn’t need to do is increase costs for those who can least afford it. As long as there’s full transparency, low income people ought to be able to choose Plan A, B or C. Low income people — the underserved — don’t need the FCC to decide, for them, how they can spend their money.
Well put.
This relates to an important economic point that proponents of these types of regulation often miss, in the context of lawyer licensing but also with respect to the hundreds of state and local regulations that create barriers to entry in industries ranging from medical and dental services to hairdressing. The introduction of lower-quality products provides greater choice and significant economic value. The fact that not all consumers demand (or can afford) premium brands and services does not mean that consumers are exploited. Recall Milton Friedman’s statement that lawyer licensing is very much like requiring consumers who want an automobile to purchase a Cadillac. In this case, low-income consumers would bear the brunt of a restriction against the type of plan offered by MetroPCS.
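To make the stakes concrete, here is a minimal back-of-the-envelope sketch in Python, purely illustrative, using only the prices quoted above (Plans A/B/C at $40/$50/$60 and roughly $100 a month for comparable full service from rival carriers); the banned-plan scenario is hypothetical and simply illustrates who would bear the cost of the restriction Cyril proposes.

# Prices quoted in the post; the "ban" scenario below is hypothetical.
plans = {"A": 40, "B": 50, "C": 60}  # MetroPCS monthly prices
rival = 100                          # cheapest comparable rival offering

def annual(monthly):
    """Convert a monthly dollar difference into an annual figure."""
    return 12 * monthly

# If Plan A were outlawed, a Plan A subscriber's cheapest full-menu
# fallback within MetroPCS is Plan C:
extra = plans["C"] - plans["A"]
print(f"Extra cost if Plan A is banned: ${extra}/month (${annual(extra)}/year)")

# Even Plan C undercuts the cheapest comparable rival plan:
saving = rival - plans["C"]
print(f"Plan C saving vs. rival full service: ${saving}/month (${annual(saving)}/year)")

On those numbers, outlawing Plan A would cost exactly the consumers the complaint claims to protect an extra $20 a month, or $240 a year, while even the full-menu Plan C already saves them $480 a year against the nearest rival offering.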
There is a longstanding debate over the differences between the FCC’s “public interest” standard and the “consumer welfare” standard used in traditional antitrust analysis. Sometimes, the two appear to conflict. Sometimes, as is the case here, with the benefit of economics it is clear the two standards converge. Here’s hoping the FCC doesn’t take the bait.