
[TOTM: The following is part of a digital symposium by TOTM guests and authors on the law, economics, and policy of the antitrust lawsuits against Google. The entire series of posts is available here.]

The U.S. Department of Justice’s (DOJ) antitrust case against Google, which was filed in October 2020, will be a tough slog.[1] It is an alleged monopolization (Sherman Act, Sec. 2) case; and monopolization cases are always a tough slog.

In this brief essay I will lay out some of the issues in the case and raise an intriguing possibility.

What is the case about?

The case is about exclusivity and exclusion in the distribution of search engine services; that Google paid substantial sums to Apple and to the manufacturers of Android-based mobile phones and tablets and also to wireless carriers and web-browser proprietors—in essence, to distributors—to install the Google search engine as the exclusive pre-set (installed), default search program. The suit alleges that Google thereby made it more difficult for other search-engine providers (e.g., Bing; DuckDuckGo) to obtain distribution for their search-engine services and thus to attract search-engine users and to sell the online advertising that is associated with search-engine use and that provides the revenue to support the search “platform” in this “two-sided market” context.[2]

Exclusion can be seen as a form of “raising rivals’ costs.”[3]  Equivalently, exclusion can be seen as a form of non-price predation. Under either interpretation, the exclusionary action impedes competition.

It’s important to note that these allegations are different from those that motivated an investigation by the Federal Trade Commission (which the FTC dropped in 2013) and the cases by the European Union against Google.[4]  Those cases focused on alleged self-preferencing; that Google was unduly favoring its own products and services (e.g., travel services) in its delivery of search results to users of its search engine. In those cases, the impairment of competition (arguably) happens with respect to those competing products and services, not with respect to search itself.

What is the relevant market?

For a monopolization allegation to have any meaning, there needs to be the exercise of market power (which would have adverse consequences for the buyers of the product). And in turn, that exercise of market power needs to occur in a relevant market: one in which market power can be exercised.

Here is one of the important places where the DOJ's case is likely to turn into a slog: the delineation of a relevant market for alleged monopolization cases remains a largely unsolved problem for antitrust economics.[5] This is in sharp contrast to the issue of delineating relevant markets for the antitrust analysis of proposed mergers. For this latter category, the paradigm of the "hypothetical monopolist" and the possibility that this hypothetical monopolist could prospectively impose a "small but significant non-transitory increase in price" (SSNIP) has carried the day for the purposes of market delineation.

But no such paradigm exists for monopolization cases, in which the usual allegation is that the defendant already possesses market power and has used the exclusionary actions to buttress that market power. To see the difficulties, it is useful to recall the basic monopoly diagram from Microeconomics 101. A monopolist faces a negatively sloped demand curve for its product (at higher prices, less is bought; at lower prices, more is bought) and sets a profit-maximizing price at the level of output where its marginal revenue (MR) equals its marginal costs (MC). Its price is thereby higher than an otherwise similar competitive industry’s price for that product (to the detriment of buyers) and the monopolist earns higher profits than would the competitive industry.

But unless there are reliable benchmarks as to what the competitive price and profits would otherwise be, any information as to the defendant’s price and profits has little value with respect to whether the defendant already has market power. Also, a claim that a firm does not have market power because it faces rivals and thus isn’t able profitably to raise its price from its current level (because it would lose too many sales to those rivals) similarly has no value. Recall the monopolist from Micro 101. It doesn’t set a higher price than the one where MR=MC, because it would thereby lose too many sales to other sellers of other things.

Thus, any firm—regardless of whether it truly has market power (like the Micro 101 monopolist) or is just another competitor in a sea of competitors—should have already set its price at its profit-maximizing level and should find it unprofitable to raise its price from that level.[6]  And thus the claim, “Look at all of the firms that I compete with!  I don’t have market power!” similarly has no informational value.
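To see the point concretely, here is a minimal numerical sketch (the linear demand curve, the cost figure, and all numbers are my own illustrative assumptions, not estimates from the case). A firm that has already set its price where MR = MC finds any further price increase unprofitable, whether it is the Micro 101 monopolist or just one competitor among many.

    # Hypothetical linear demand Q = a - b*P with constant marginal cost mc.
    def profit(price, a=100.0, b=1.0, mc=20.0):
        quantity = max(a - b * price, 0.0)
        return (price - mc) * quantity

    # Profit-maximizing (MR = MC) price for this demand curve: P* = (a/b + mc)/2 = 60.
    p_star = (100.0 / 1.0 + 20.0) / 2.0

    for p in (p_star, p_star + 5.0, p_star + 10.0):  # try raising price above P*
        print(f"price = {p:5.1f}   profit = {profit(p):7.1f}")

    # Profit falls as the price rises above 60 -- exactly what we would also observe
    # for a firm with no market power that is already at its own optimum.

The exercise shows why "I cannot profitably raise my price from its current level" carries no information about whether that current level already reflects market power.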

Let us now bring this problem back to the Google monopolization allegation:  What is the relevant market?  In the first instance, it has to be “the provision of answers to user search queries.” After all, this is the “space” in which the exclusion occurred. But there are categories of search: e.g., search for products/services, versus more general information searches (“What is the current time in Delaware?” “Who was the 21st President of the United States?”). Do those separate categories themselves constitute relevant markets?

Further, what would the exercise of market power in a (delineated relevant) market look like?  Higher-than-competitive prices for advertising that targets search-results recipients is one obvious answer (but see below). In addition, because this is a two-sided market, the competitive “price” (or prices) might involve payments by the search engine to the search users (in return for their exposure to the lucrative attached advertising).[7]  And product quality might exhibit less variety than a competitive market would provide; and/or the monopolistic average level of quality would be lower than in a competitive market: e.g., more abuse of user data, and/or deterioration of the delivered information itself, via more self-preferencing by the search engine and more advertising-driven preferencing of results.[8]

In addition, a natural focus for a relevant market is the advertising that accompanies the search results. But now we are at the heart of the difficulty of delineating a relevant market in a monopolization context. If the relevant market is “advertising on search engine results pages,” it seems highly likely that Google has market power. If the relevant market instead is all online U.S. advertising (of which Google’s revenue share accounted for 32% in 2019[9]), then the case is weaker; and if the relevant market is all advertising in the United States (which is about twice the size of online advertising[10]), the case is weaker still. Unless there is some competitive benchmark, there is no easy way to delineate the relevant market.[11]

What exactly has Google been paying for, and why?

As many critics of the DOJ’s case have pointed out, it is extremely easy for users to switch their default search engine. If internet search were a normal good or service, this ease of switching would leave little room for the exercise of market power. But in that case, why is Google willing to pay $8-$12 billion annually for the exclusive default setting on Apple devices and large sums to the manufacturers of Android-based devices (and to wireless carriers and browser proprietors)? Why doesn’t Google instead run ads in prominent places that remind users how superior Google’s search results are and how easy it is for users (if they haven’t already done so) to switch to the Google search engine and make Google the user’s default choice?

Suppose that user inertia is important. Further suppose that users generally have difficulty in making comparisons with respect to the quality of delivered search results. If this is true, then being the default search engine on Apple and Android-based devices and on other distribution vehicles would be valuable. In this context, the inertia of their customers is a valuable “asset” of the distributors that the distributors may not be able to take advantage of, but that Google can (by providing search services and selling advertising). The question of whether Google’s taking advantage of this user inertia means that Google exercises market power takes us back to the issue of delineating the relevant market.

There is a further wrinkle to all of this. It is a well-understood concept in antitrust economics that an incumbent monopolist will be willing to pay more for the exclusive use of an essential input than a challenger would pay for access to the input.[12] The basic idea is straightforward. By maintaining exclusive use of the input, the incumbent monopolist preserves its (large) monopoly profits. If the challenger enters, the incumbent will then earn only its share of the (much lower, more competitive) duopoly profits. Similarly, the challenger can expect only the lower duopoly profits. Accordingly, the incumbent should be willing to outbid (and thereby exclude) the challenger and preserve the incumbent’s exclusive use of the input, so as to protect those monopoly profits.
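A stylized numerical sketch of this logic may help (the profit figures are purely hypothetical and are not estimates of any firm's actual profits):

    # Gilbert-Newbery bidding logic with made-up numbers.
    monopoly_profit = 100.0           # incumbent's profit if it keeps exclusive use of the input
    duopoly_profit_incumbent = 30.0   # incumbent's profit if the challenger gains access
    duopoly_profit_challenger = 30.0  # challenger's profit if it gains access

    # Maximum each side would rationally bid for the input:
    incumbent_ceiling = monopoly_profit - duopoly_profit_incumbent   # 70: what exclusivity preserves
    challenger_ceiling = duopoly_profit_challenger                   # 30: what access would earn

    print(incumbent_ceiling > challenger_ceiling)  # True

As long as the monopoly profit exceeds the sum of the two duopoly profits (the usual case, since entry dissipates rents), the incumbent's willingness to pay exceeds the challenger's, and the incumbent wins the bidding for exclusivity.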

To bring this to the Google monopolization context, if Google does possess market power in some aspect of search—say, because online search-linked advertising is a relevant market—then Google will be willing to outbid Microsoft (which owns Bing) for the “asset” of default access to Apple’s (inertial) device owners. That Microsoft is a large and profitable company and could afford to match (or exceed) Google’s payments to Apple is irrelevant. If the duopoly profits for online search-linked advertising would be substantially lower than Google’s current profits, then Microsoft would not find it worthwhile to try to outbid Google for that default access asset.

Alternatively, this scenario could be wholly consistent with an absence of market power. If search users (who can easily switch) consider Bing to be a lower-quality search service, then large payments by Microsoft to outbid Google for those exclusive default rights would be largely wasted, since the “acquired” default search users would quickly switch to Google (unless Microsoft provided additional incentives for the users not to switch).

But this alternative scenario returns us to the original puzzle:  Why is Google making such large payments to the distributors for those exclusive default rights?

An intriguing possibility

Consider the following possibility. Suppose that Google was paying that $8-$12 billion annually to Apple in return for the understanding that Apple would not develop its own search engine for Apple’s device users.[13] This possibility was not raised in the DOJ’s complaint, nor is it raised in the subsequent suits by the state attorneys general.

But let’s explore the implications by going to an extreme. Suppose that Google and Apple had a formal agreement that—in return for the $8-$12 billion per year—Apple would not develop its own search engine. In this event, this agreement not to compete would likely be seen as a violation of Section 1 of the Sherman Act (which does not require a market delineation exercise) and Apple would join Google as a co-conspirator. The case would take on the flavor of the FTC’s prosecution of “pay-for-delay” agreements between the manufacturers of patented pharmaceuticals and the generic drug manufacturers that challenge those patents and then receive payments from the former in return for dropping the patent challenge and delaying the entry of the generic substitute.[14]

As of this writing, there is no evidence of such an agreement and it seems quite unlikely that there would have been a formal agreement. But the DOJ will be able to engage in discovery and take depositions. It will be interesting to find out what the relevant executives at Google—and at Apple—thought was being achieved by those payments.

What would be a suitable remedy/relief?

The DOJ’s complaint is vague with respect to the remedy that it seeks. This is unsurprising. The DOJ may well want to wait to see how the case develops and then amend its complaint.

However, even if Google’s actions have constituted monopolization, it is difficult to conceive of a suitable and effective remedy. One apparently straightforward remedy would be to require simply that Google not be able to purchase exclusivity with respect to the pre-set default settings. In essence, the device manufacturers and others would always be able to sell parallel default rights to other search engines: on the basis, say, that the default rights for some categories of customers—or even a percentage of general customers (randomly selected)—could be sold to other search-engine providers.

But now the Gilbert-Newbery insight comes back into play. Suppose that a device manufacturer knows (or believes) that Google will pay much more if—even in the absence of any exclusivity agreement—Google ends up being the pre-set search engine for all (or nearly all) of the manufacturer’s device sales, as compared with what the manufacturer would receive if those default rights were sold to multiple search-engine providers (including, but not solely, Google). Can that manufacturer (recall that the distributors are not defendants in the case) be prevented from making this sale to Google and thus (de facto) continuing Google’s exclusivity?[15]

Even a requirement that Google not be allowed to make any payment to the distributors for a default position may not improve the competitive environment. Google may be able to find other ways of making indirect payments to distributors in return for attaining default rights, e.g., by offering them lower rates on their online advertising.

Further, if the ultimate goal is an efficient outcome in search, it is unclear how far restrictions on Google’s bidding behavior should go. If Google were forbidden from purchasing any default installation rights for its search engine, would (inert) consumers be better off? Similarly, if a distributor were to decide independently that its customers were better served by installing the Google search engine as the default, would that not be allowed? But if it is allowed, how could one be sure that Google wasn’t indirectly paying for this “independent” decision (e.g., through favorable advertising rates)?

It’s important to remember that this (alleged) monopolization is different from the Standard Oil case of 1911 or even the (landline) AT&T case of 1984. In those cases, there were physical assets that could be separated and spun off to separate companies. For Google, physical assets aren’t important. Although it is conceivable that some of Google’s intellectual property—such as Gmail, YouTube, or Android—could be spun off to separate companies, doing so would do little to cure the (arguably) fundamental problem of the inert device users.

In addition, if there were an agreement between Google and Apple for the latter not to develop a search engine, then large fines for both parties would surely be warranted. But what next? Apple can’t be forced to develop a search engine.[16] This differentiates such an arrangement from the “pay-for-delay” arrangements for pharmaceuticals, where the generic manufacturers can readily produce a near-identical substitute for the patented drug and are otherwise eager to do so.

At the end of the day, forbidding Google from paying for exclusivity may well be worth trying as a remedy. But as the discussion above indicates, it is unlikely to be a panacea and is likely to require considerable monitoring for effective enforcement.

Conclusion

The DOJ’s case against Google will be a slog. There are unresolved issues—such as how to delineate a relevant market in a monopolization case—that will be central to the case. Even if the DOJ is successful in showing that Google violated Section 2 of the Sherman Act in monopolizing search and/or search-linked advertising, an effective remedy seems problematic. But there also remains the intriguing question of why Google was willing to pay such large sums for those exclusive default installation rights.

The developments in the case will surely be interesting.


[1] The DOJ’s suit was joined by 11 states.  More states subsequently filed two separate antitrust lawsuits against Google in December.

[2] There is also a related argument:  That Google thereby gained greater volume, which allowed it to learn more about its search users and their behavior, and which thereby allowed it to provide better answers to users (and thus a higher-quality offering to its users) and better-targeted (higher-value) advertising to its advertisers.  Conversely, Google’s search-engine rivals were deprived of that volume, with the mirror-image negative consequences for the rivals.  This is just another version of the standard “learning-by-doing” and the related “learning curve” (or “experience curve”) concepts that have been well understood in economics for decades.

[3] See, for example, Steven C. Salop and David T. Scheffman, “Raising Rivals’ Costs: Recent Advances in the Theory of Industrial Structure,” American Economic Review, Vol. 73, No. 2 (May 1983), pp.  267-271; and Thomas G. Krattenmaker and Steven C. Salop, “Anticompetitive Exclusion: Raising Rivals’ Costs To Achieve Power Over Price,” Yale Law Journal, Vol. 96, No. 2 (December 1986), pp. 209-293.

[4] For a discussion, see Richard J. Gilbert, “The U.S. Federal Trade Commission Investigation of Google Search,” in John E. Kwoka, Jr., and Lawrence J. White, eds. The Antitrust Revolution: Economics, Competition, and Policy, 7th edn.  Oxford University Press, 2019, pp. 489-513.

[5] For a more complete version of the argument that follows, see Lawrence J. White, “Market Power and Market Definition in Monopolization Cases: A Paradigm Is Missing,” in Wayne D. Collins, ed., Issues in Competition Law and Policy. American Bar Association, 2008, pp. 913-924.

[6] Forgetting this important point is often termed "the cellophane fallacy," since this is what the U.S. Supreme Court did in a 1956 antitrust case in which the DOJ alleged that du Pont had monopolized the cellophane market (du Pont, in its defense, claimed that the relevant market was much wider: all flexible wrapping materials); see U.S. v. du Pont, 351 U.S. 377 (1956). For an argument that profit data and other indicia argued for cellophane as the relevant market, see George W. Stocking and Willard F. Mueller, "The Cellophane Case and the New Competition," American Economic Review, Vol. 45, No. 1 (March 1955), pp. 29-63.

[7] In the context of differentiated services, one would expect prices (positive or negative) to vary according to the quality of the service that is offered.  It is worth noting that Bing offers “rewards” to frequent searchers; see https://www.microsoft.com/en-us/bing/defaults-rewards.  It is unclear whether this pricing structure of payment to Bing’s customers represents what a more competitive framework in search might yield, or whether the payment just indicates that search users consider Bing to be a lower-quality service.

[8] As an additional consequence of the impairment of competition in this type of search market, there might be less technological improvement in the search process itself – to the detriment of users.

[9] As estimated by eMarketer: https://www.emarketer.com/newsroom/index.php/google-ad-revenues-to-drop-for-the-first-time/.

[10] See https://www.visualcapitalist.com/us-advertisers-spend-20-years/.

[11] And, again, if we return to the du Pont cellophane case:  Was the relevant market cellophane?  Or all flexible wrapping materials?

[12] This insight is formalized in Richard J. Gilbert and David M.G. Newbery, “Preemptive Patenting and the Persistence of Monopoly,” American Economic Review, Vol. 72, No. 3 (June 1982), pp. 514-526.

[13] To my knowledge, Randal C. Picker was the first to suggest this possibility; see https://www.competitionpolicyinternational.com/a-first-look-at-u-s-v-google/.  Whether Apple would be interested in trying to develop its own search engine – given the fiasco a decade ago when Apple tried to develop its own maps app to replace the Google maps app – is an open question.  In addition, the Gilbert-Newbery insight applies here as well:  Apple would be less inclined to invest the substantial resources that would be needed to develop a search engine when it is thereby in a duopoly market.  But Google might be willing to pay “insurance” to reinforce any doubts that Apple might have.

[14] The U.S. Supreme Court, in FTC v. Actavis, 570 U.S. 136 (2013), decided that such agreements could be anti-competitive and should be judged under the “rule of reason”.  For a discussion of the case and its implications, see, for example, Joseph Farrell and Mark Chicu, “Pharmaceutical Patents and Pay-for-Delay: Actavis (2013),” in John E. Kwoka, Jr., and Lawrence J. White, eds. The Antitrust Revolution: Economics, Competition, and Policy, 7th edn.  Oxford University Press, 2019, pp. 331-353.

[15] This is an example of the insight that vertical arrangements – in this case combined with the Gilbert-Newbery effect – can be a way for dominant firms to raise rivals’ costs. See, for example, John Asker and Heski Bar-Isaac, “Raising Retailers’ Profits: On Vertical Practices and the Exclusion of Rivals,” American Economic Review, Vol. 104, No. 2 (February 2014), pp. 672-686.

[16] And, again, for the reasons discussed above, Apple might not be eager to make the effort.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the law, economics, and policy of the antitrust lawsuits against Google. The entire series of posts is available here.]

As one of the few economic theorists in this symposium, I believe my comparative advantage is in that: economic theory. In this post, I want to remind people of the basic economic theories that we have at our disposal, “off the shelf,” to make sense of the U.S. Department of Justice’s lawsuit against Google. I do not mean this to be a proclamation of “what economics has to say about X,” but merely to help us frame the issue.

In particular, I’m going to focus on the economic concerns of Google paying phone manufacturers (Apple, in particular) to be the default search engine installed on phones. While there is not a large literature on the economic effects of default contracts, there is a large literature on something that I will argue is similar: trade promotions, such as slotting contracts, where a manufacturer pays a retailer for shelf space. Despite all the bells and whistles of the Google case, I will argue that, from an economic point of view, the contracts that Google signed are just trade promotions. No more, no less. And trade promotions are well-established as part of a competitive process that ultimately helps consumers. 

However, it is theoretically possible that such trade promotions hurt customers, so it is theoretically possible that Google’s contracts hurt consumers. Ultimately, the theoretical possibility of anticompetitive behavior that harms consumers does not seem plausible to me in this case.

Default Status

There are two reasons that Google paying Apple to be its default search engine is similar to a trade promotion. First, the deal brings awareness to the product, which nudges certain consumers/users to choose the product when they would not otherwise do so. Second, the deal does not prevent consumers from choosing the other product.

In the case of retail trade promotions, a promotional space given to Coca-Cola makes it marginally easier for consumers to pick Coke, and therefore some consumers will switch from Pepsi to Coke. But it does not reduce any consumer’s choice. The store will still have both items.

This is the same for a default search engine. The marginal searchers, who do not have a strong preference for either search engine, will stick with the default. But anyone can still install a new search engine, install a new browser, etc. It takes a few clicks, just as it takes a few steps to walk down the aisle to get the Pepsi; it is still an available choice.

If we were to stop the analysis there, we could conclude that consumers are worse off (if just a tiny bit), since some customers will have to change the default app. But we also need to remember that this contract is part of a more general competitive process. The retail stores are competing with one another, as are smartphone manufacturers.

Despite popular claims to the contrary, Apple cannot charge anything it wants for its phone. It is competing with Samsung, etc. Therefore, Apple has to pass through some of Google’s payments to customers in order to compete with Samsung. Prices are lower because of this payment. As I phrased it elsewhere, Google is effectively subsidizing the iPhone. This cross-subsidization is a part of the competitive process that ultimately benefits consumers through lower prices.

These contracts lower consumer prices, even if we assume that Apple has market power. Those who recall their Econ 101 know that a monopolist chooses the quantity at which marginal revenue equals marginal cost. With a payment from Google, the marginal cost of producing a phone is lower, so Apple will increase quantity and lower the price. This is shown below:
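What follows is a minimal numerical version of that picture, assuming a linear demand curve (the demand, cost, and subsidy figures are mine and purely illustrative) and treating Google's payment as a per-unit subsidy to Apple:

    # Monopoly price for hypothetical linear demand Q = a - b*P and constant marginal cost mc.
    def monopoly_price(a, b, mc):
        return (a / b + mc) / 2.0

    a, b = 1000.0, 1.0        # hypothetical demand for phones: Q = 1000 - P
    mc = 400.0                # hypothetical marginal cost per phone
    subsidy = 100.0           # hypothetical per-phone payment from Google

    p_before = monopoly_price(a, b, mc)            # 700.0
    p_after = monopoly_price(a, b, mc - subsidy)   # 650.0

    print(p_before, p_after)

With linear demand, the monopolist passes through half of the subsidy as a lower price; the exact share depends on the shape of demand, but the direction of the effect is what matters for the argument.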

One of the surprising things about markets is that buyers’ and sellers’ incentives can be aligned, even though it seems like they must be adversarial. Companies can indirectly bargain for their consumers. Commenting on Standard Fashion Co. v. Magrane-Houston Co., where a retail store contracted to only carry Standard’s products, Robert Bork (1978, pp. 306–7) summarized this idea as follows:

The store’s decision, made entirely in its own interest, necessarily reflects the balance of competing considerations that determine consumer welfare. Put the matter another way. If no manufacturer used exclusive dealing contracts, and if a local retail monopolist decided unilaterally to carry only Standard’s patterns because the loss in product variety was more than made up in the cost saving, we would recognize that decision was in the consumer interest. We do not want a variety that costs more than it is worth … If Standard finds it worthwhile to purchase exclusivity … the reason is not the barring of entry, but some more sensible goal, such as obtaining the special selling effort of the outlet.

How trade promotions could harm customers

Since Bork’s writing, many theoretical papers have shown exceptions to Bork’s logic. There are times when retailers’ incentives are not aligned with customers’ interests. And we need to take those possibilities seriously.

The most common way to show the harm of these deals (or more commonly exclusivity deals) is to assume:

  1. There are large, fixed costs so that a firm must acquire a sufficient number of customers in order to enter the market; and
  2. An incumbent can lock in enough customers to prevent the entrant from reaching an efficient size.

Consumers can be locked in because there is some fixed cost of changing suppliers or because of some coordination problem. If that’s true, customers can be made worse off, on net, because the Google contracts reduce consumer choice.

To understand the logic, let’s simplify the model to just search engines and searchers. Suppose there are two search engines (Google and Bing) and 10 searchers. However, to operate profitably, each search engine needs at least three searchers. If Google can entice eight searchers to use its product, Bing cannot operate profitably, even if Bing provides a better product. This holds even if everyone knows Bing would be a better product. The consumers are stuck in a coordination failure.
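The toy model in the paragraph above can be written out in a few lines (the numbers are the ones from the text; the code is just an illustration of the coordination logic, not a model of the actual market):

    MIN_VIABLE_USERS = 3    # each engine needs at least three searchers to operate profitably
    TOTAL_SEARCHERS = 10

    def viable(users):
        return users >= MIN_VIABLE_USERS

    google_users = 8                              # searchers who stay with the default
    bing_users = TOTAL_SEARCHERS - google_users   # 2

    print(viable(google_users))   # True
    print(viable(bing_users))     # False: Bing cannot operate profitably, even if it is
                                  # the better product by assumption

If all ten searchers switched together, Bing would be viable; since no individual searcher has an incentive to switch alone, the inefficient outcome can persist.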

We should be skeptical of coordination failure models of inefficient outcomes. The problem with any story of coordination failures is that it is highly sensitive to the exact timing of the model. If Bing can preempt Google and offer customers an even better deal (the new entrant is better by assumption), then the coordination failure does not occur.

To argue that Bing could not execute a similar contract, the most common appeal is that the new entrant does not have the capital to pay upfront for these contracts, since it will only make money from its higher-quality search engine down the road. That makes sense until you remember that we are talking about Microsoft. I’m skeptical that capital is the real constraint. It seems much more likely that Google just has a more popular search engine.

The other problem with coordination failure arguments is that they are almost non-falsifiable. There is no way to tell, in the model, whether Google is used because of a coordination failure or whether it is used because it is a better product. If Google is a better product, then the outcome is efficient. The two outcomes are “observationally equivalent.” Compare this to the standard theory of monopoly, where we can (in principle) establish an inefficiency if the price is greater than marginal cost. While it is difficult to measure marginal cost, it can be done.

There is a general economic idea in these models that we need to pay attention to. If Google takes an action that prevents Bing from reaching efficient size, that may be an externality, sometimes called a network effect, and so that action may hurt consumer welfare.

I’m not sure how seriously to take these network effects. If more searchers allow Bing to make a better product, then literally any action (competitive or not) by Google is an externality. Making a better product that takes away consumers from Bing lowers Bing’s quality. That is, strictly speaking, an externality. Surely, that is not worthy of antitrust scrutiny simply because we find an externality.

And Bing also “takes away” searchers from Google, thus lowering Google’s possible quality. With network effects, bigger is better and it may be efficient to have only one firm. Surely, that’s not an argument we want to put forward as a serious antitrust analysis.

Put more generally, it is not enough to scream “NETWORK EFFECT!” and then have the antitrust authority come in, lawsuits-a-blazing. Well, it shouldn’t be enough.

For me to take the network-effect argument seriously from an economic point of view, as opposed to a legal one, I would need to see a real restriction on consumer choice, not just an externality. One needs to argue that:

  1. No competitor can cover their fixed costs to make a reasonable search engine; and
  2. These contracts are what prevent the competing search engines from reaching efficient size.

That’s the challenge I would like to put forward to supporters of the lawsuit. I’m skeptical.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the law, economics, and policy of the antitrust lawsuits against Google. The entire series of posts is available here.]

My aim here is to scrutinize the questionable case articulated against default settings in the U.S. Justice Department’s lawsuit against Google. Default, I will argue, is no antitrust fault. Default in the Google case drastically differs from the default at issue in the Microsoft case. In Part I, I argue the comparison is odious. In Part II, I argue that the implicit prohibition of default settings echoes, with respect to listings, the explicit prohibition of self-preferencing in search results. Both aspects – default’s implicit prohibition and self-preferencing’s explicit prohibition – are the two legs of a novel, integrated theory of sanctioning corporate favoritism. The emergence of such a theory goes against the very grain of capitalism. In Part III, I note that the attempt to instill some corporate selflessness is at odds with competition on the merits and with the spirit of fundamental economic freedoms.

When Default is No-Fault

The recent complaint filed by the DOJ and 11 state attorneys general claims that Google has abused its dominant position in the search-engine market in several ways, notably by making Google the default search engine both in the Google Chrome web browser for Android OS and in Apple’s Safari web browser for iOS. Undoubtedly, default status confers a noticeable advantage in attracting users – it is sought and enforced on purpose. Nevertheless, a default setting confers no unassailable position unless the product remains competitive. Furthermore, the default setting can hardly be proven to be anticompetitive in the Google case. Indeed, the DOJ puts considerable effort in the complaint into making the Google case resemble the 20-year-old Microsoft case. Former Federal Trade Commission Chairman William Kovacic commented: “I suppose the Justice Department is telling the court, ‘You do not have to be scared of this case. You’ve done it before […] This is Microsoft part 2.’”[1]

However, irrespective of the merits of the Microsoft case two decades ago, the Google default-setting case bears minimal resemblance to Microsoft’s default setting of Internet Explorer. First, as opposed to the Microsoft case, where the default meant pre-installed software (i.e., Internet Explorer)[2], the Google case does not involve pre-installation of the Google search engine (which is just a webpage) but a simple setting. This technical difference is significant: although “sticky”[3], the default setting can be bypassed with just one click[4]. This is unlike pre-installed software, which can only be circumvented by uninstalling it[5] and then searching for and installing a replacement[6]. Moreover, since there is no certainty that consumers will actually use the Google search engine, default settings come with advertising revenue-sharing agreements between Google and device manufacturers, mobile phone carriers, competing browsers and Apple[7]. These mutually beneficial deals represent a significant cost to Google with no technical exclusivity.[8] In other words, the antitrust treatment of a tie-in between software and hardware in the Microsoft case cannot be convincingly extrapolated to the default setting of a “webware”[9] at issue in the Google case.

Second, the Microsoft case cannot legitimately be extrapolated to the Google case for another technical (and commercial) reason: the Microsoft case was a classic tie-in case in which the tied product (Internet Explorer) was tied into the main product (Windows). In that traditional tie-in scenario, the tied product (Internet Explorer) was “consistently offered, promoted, and distributed […] as a stand-alone product separate from, and not as a component of, Windows […]”[10]. In contrast, Google has never sold Google Chrome or Android OS. It offered both for free, necessarily conditional on the Google search engine being the default setting. The very fact that Google Chrome and Android OS have never been “stand-alone” products, to use the Microsoft case’s language, together with the absence of any software installation, dramatically differentiates the Google case from the Microsoft case. The Google case is not a traditional tie-in case: it is a case against a default setting where both the primary and the related products are given away for free, are not sold, and are neither tangible nor intangible goods but popular digital services valued for their significant innovativeness and ease of use. The Microsoft “complaint challenge[d] only Microsoft’s concerted attempts to maintain its monopoly in operating systems and to achieve dominance in other markets, not by innovation and other competition on the merits, but by tie-ins.” Quite noticeably, the Google complaint does not allege a tie-in with respect to Google Chrome or Android OS.

The complaint refers to tie-ins only with respect to Google’s apps being pre-installed on Android OS. Therefore, with respect to Google’s dominance of the search-engine market, it cannot be said that the default setting of Google search on Android OS entails a tie-in. The Google search engine has no distribution channel (since it is only a website) other than downstream partnerships (i.e., vertical deals with Android device manufacturers). To sanction default settings agreed with downstream trading partners is tantamount to denying a firm legitimate means of securing distribution channels for proprietary, zero-priced services. Pushed further, this logic would mean that Apple may no longer offer its own apps on its own iPhones or, in offline markets, that a retailer may no longer offer its own (default) bags at the till because doing so excludes rivals’ bags. Products and services stripped of any adjacent products and markets (i.e., an iPhone or Android OS with no apps, or a shopkeeper with no bundled services) would dramatically increase consumers’ search costs, destroy the distribution channels essential to innovative business models, and provide few departures from the status quo so long as consumers continue to value default products[11].

Default should not be an antitrust fault: the Google case makes default settings a new line of antitrust injury absent any tie-in. In conclusion, as a free webware, Google search’s default setting cannot be compared to the default installation in the Microsoft case, since minimal consumer stickiness entails (almost) no switching costs. As free software, Google’s default apps cannot be compared to the Microsoft case either, since pre-installation is the sine qua non condition of the highly valued services (Android OS) voluntarily chosen by device manufacturers. Default settings on downstream products can reasonably be considered an antitrust injury only when the dominant company is erroneously treated as a de facto essential facility – something evidenced by the parallel prohibition of self-preferencing.

When Self-Preference is No Defense

Self-preferencing is to listings what the default setting is to operating systems. Both are ways to market one’s own products (i.e., alternatives to marketing directly to end-consumers). While default settings may come with both free products and financial payments (Android OS and advertising revenue sharing), self-preferencing may come with foregone advertising revenues in order to promote one’s own products. The two can be understood as two sides of the same coin:[12] generating distribution channels for the ad-funded main product – Google’s search engine. Both are complex advertising channels, since both favor one’s own products in the contest for consumers’ attention. Absent these channels, the payments made for default agreements and the advertising revenues foregone through self-preferencing would morph into marketing and advertising expenses for the Google search engine aimed at end-consumers.

The DOJ complaint charges that “Google’s monopoly in general search services also has given the company extraordinary power as the gateway to the internet, which [it] uses to promote its own web content and increase its profits.” This blame was at the core of the European Commission’s Google Shopping decision in 2017[13]: it essentially holds Google accountable for having, because of its ad-funded business model, promoted its own advertising products and demoted organic links in search results. On this view, Google’s search results are no longer relevant but are instead listed on the sole basis of advertising revenue.

But this argument is circular: should these search results become irrelevant, Google’s core business would become less attractive, thereby generating less advertising revenue. This self-inflicted inefficiency would deprive Google of valuable advertising streams and incentivize end-consumers to switch to search-engine rivals such as Bing, DuckDuckGo, Amazon (for product search), etc. Therefore, an ad-funded company such as Google needs to arbitrage reasonably between advertising objectives and the efficiency of its core activities (here, zero-priced organic search services). To downplay (ad-funded) self-preferencing in order to foster (zero-priced) organic search quality would disregard the two-sidedness of the Google platform: it would harm advertisers and the viability of the ad-funded business model without providing the consumer and innovation protection it aims to provide. The problematic and undesirable concept of “search neutrality” would mean algorithmic micro-management for the sake of an “objective” listing acceptable only in the eyes of the regulator.

Furthermore, self-preferencing entails a sort of positive discrimination toward one’s own products[14]. While discrimination has traditionally been a line of antitrust injury, self-preferencing is an “epithet”[15] outside antitrust’s remit for good reasons[16]. Should self-interested (i.e., rationally minded) companies and individuals be legally compelled to self-demote their own products and services? If only big (how big?) companies are legally compelled to self-demote their products and services, at what point will currently exempted companies that engage in self-preferencing become liable to do so as well?

Indeed, many uncertainties, legal and economic, may spring from the emerging prohibition of self-preferencing. More fundamentally, antitrust liability may clash with basic corporate-governance principles, under which self-interest both allows and commands such self-promotion. The limits of antitrust have been reached when two legal regimes, both applicable to companies, prescribe contradictory commercial conduct. To what extent may Amazon no longer promote its own series on Amazon Video in the way Netflix does? To what extent may Microsoft no longer promote Bing’s search engine in order to compete effectively with Google’s? To what extent may Uber no longer promote UberEATS in order to compete effectively with delivery services? Not only is the business of business doing business[17], it is also a duty for which shareholders may hold managers to account.

The self is moral; there is a corporate morality of business self-interest. In other words, corporate selflessness runs counter to business ethics, since corporate self-interest yields the self’s rivalrous positioning within a competitive order. Absent corporate self-interest, self-sacrifice may generate value destruction for the sake of unjustified and ungrounded claims. The emerging prohibition of self-preferencing, like the implicit ban on setting one’s own products as defaults within other proprietary products, marks a defeat for the corporate self. Both tendencies coalesce to instill a legally embedded duty of self-sacrifice in favor of competitors’ welfare rather than the traditional consumer welfare and the dynamics of innovation, which are never unleashed absent appropriability. In conclusion, to expect firms, however big or small, to act irrespective of their identities (i.e., corporate selflessness) would constitute an antitrust error and would be at odds with capitalism.

Toward an Integrated Theory of Disintegrating Favoritism

The Google lawsuit primarily blames Google for default settings secured via several deals. The lawsuit also treats self-preferencing as anticompetitive conduct under antitrust rules. These two charges are novel and dubious in their scope. They nevertheless represent a fundamental catalyst for the development of a new and problematic unified antitrust theory prohibiting favoritism: companies may no longer favor their products and services, whether vertically or horizontally, irrespective of consumer benefits, irrespective of superior efficiency arguments, and irrespective of enhanced dynamic capabilities. Indeed, via an unreasonably expanded vision of leveraging, antitrust enforcement is furtively banning a company from favoring its own products and services, substituting greater consumer choice for consumer welfare, substituting protection of rivals’ opportunities to innovate and compete for the essence of competition and innovation, and substituting limits on the reach and size of companies for attention to those companies’ capabilities and efficiencies. Leveraging becomes suspect, and corporate self-favoritism stands accused. The Google lawsuit embodies this impractical trend, which further enshrines the precautionary approach to antitrust enforcement[18].


[1] Jessica Guynn, Google Justice Department antitrust lawsuit explained: this is what it means for you. USA Today, October 20, 2020.

[2] The software (Internet Explorer) was tied in the hardware (Windows PC).

[3] U.S. v Google LLC, Case A:20, October 20, 2020, 3 (referring to default settings as “especially sticky” with respect to consumers’ willingness to change).

[4] While the DOJ affirms that “being the preset default general search engine is particularly valuable because consumers rarely change the preset default”, it nevertheless provides no evidence of the breadth of such consumer stickiness. To be sure, a search engine’s default status does not necessarily lead to usage, as evidenced by the case of South Korea. In that country, despite Google’s preset default settings, the search engine Naver remains dominant in the national search market with over 70% market share. The rivalry exerted by Naver on Google demonstrates the limits of consumer stickiness to default settings. See Alesia Krush, Google vs. Naver: Why Can’t Google Dominate Search in Korea? Link-Assistant.Com, available at: https://www.link-assistant.com/blog/google-vs-naver-why-cant-google-dominate-search-in-korea/. As the dominant search engine in Korea, Naver is itself subject to antitrust investigations into leveraging practices similar to Google’s in other countries; see Shin Ji-hye, FTC sets up special to probe Naver, Google, The Korea Herald, November 19, 2019, available at: http://www.koreaherald.com/view.php?ud=20191119000798; Kim Byung-wook, Complaint against Google to be filed with FTC, The Investor, December 14, 2020, available at: https://www.theinvestor.co.kr/view.php?ud=20201123000984 (reporting a complaint by Naver and other Korean IT companies against Google’s 30% commission policy on Google Play Store’s apps).

[5] For instance, the then complaint acknowledged that “Microsoft designed Windows 98 so that removal of Internet Explorer by OEMs or end users is operationally more difficult than it was in Windows 95”, in U.S. v Microsoft Corp., Civil Action No 98-1232, May 18, 1998, para.20.

[6] The DOJ complaint itself quotes one search competitor who is reported to have noted consumer stickiness “despite the simplicity of changing a default setting to enable customer choice […]” (para.47). Therefore, the default search-engine setting is remarkably simple to bypass, but consumers do not often do so, whether due to satisfaction with the Google search engine and/or due to search and opportunity costs.

[7] See para.56 of the DOJ complaint.

[8] Competing browsers can always welcome rival search engines and competing search engine apps can always be downloaded despite revenue sharing agreements. See paras.78-87 of the DOJ complaint.

[9] Google search engine is nothing but a “webware” – a complex set of algorithms that work via online access of a webpage with no prior download. For a discussion on the definition of webware, see https://www.techopedia.com/definition/4933/webware .

[10] Id. para.21.

[11] Such an outcome would frustrate traditional ways of offering computers and mobile devices, as acknowledged by the DOJ itself in the Google complaint: “new computers and new mobile devices generally come with a number of preinstalled apps and out-of-the-box setting. […] Each of these search access points can and almost always does have a preset default general search engine”, at para. 41. Also, it appears that preset default general search engines are a common commercial practice since, as the DOJ complaint itself notes when discussing Google’s rivals (Microsoft’s Bing and Amazon’s Fire OS), “Amazon preinstalled its own proprietary apps and agreed to make Microsoft’s Bing the preset default general search engine”, in para.130. The complaint fails to identify alternative search engines that are not preset defaults, thus implicitly recognizing this as a widespread practice.

[12] To use Vesterdof’s language, see Bo Vesterdorf, Theories of Self-Preferencing and Duty to Deal – Two Sides of the Same Coin, Competition Law & Policy Debate 1(1) 4, (2015). See also Nicolas Petit, Theories of Self-Preferencing under Article 102 TFEU: A Reply to Bo Vesterdorf, 5-7 (2015).

[13] Case 39740 Google Search (Shopping). Here the foreclosure effects of self-preferencing are only speculated: « the Commission is not required to prove that the Conduct has the actual effect of decreasing traffic to competing comparison shopping services and increasing traffic to Google’s comparison-shopping service. Rather, it is sufficient for the Commission to demonstrate that the Conduct is capable of having, or likely to have, such effects.” (para.601 of the Decision). See P. Ibáñez Colomo, Indispensability and Abuse of Dominance: From Commercial Solvents to Slovak Telekom and Google Shopping, 10 Journal of European Competition Law & Practice 532 (2019); Aurelien Portuese, When Demotion is Competition: Algorithmic Antitrust Illustrated, Concurrences, no 2, May 2018, 25-37; Aurelien Portuese, Fine is Only One Click Away, Symposium on the Google Shopping Decision, Case Note, 3 Competition and Regulatory Law Review, (2017).

[14] For a general discussion on law and economics of self-preferencing, see Michael A. Salinger, Self-Preferencing, Global Antitrust Institute Report, 329-368 (2020).

[15]Pablo Ibanez Colomo, Self-Preferencing: Yet Another Epithet in Need of Limiting Principles, 43 World Competition (2020) (concluding that self-preferencing is « misleading as a legal category »).

[16] See, for instances, Pedro Caro de Sousa, What Shall We Do About Self-Preferencing? Competition Policy International, June 2020.

[17] Milton Friedman, The Social Responsibility of Business is to Increase Its Profits, New York Times, September 13, 1970. This echoes Adam Smith’s famous statement that « It is not from the benevolence of the butcher, the brewer, or the baker, that we expect our dinner, but from their regard for their own self-interest » from the 1776 Wealth of Nations. In Ayn Rand’s philosophy, the only alternative to rational self-interest is to sacrifice one’s own interests either for fellowmen (altruism) or for supernatural forces (mysticism). See Ayn Rand, The Objectivist Ethics, in The Virtue of Selfishness, Signet, (1964).

[18] Aurelien Portuese, European Competition Enforcement and the Digital Economy : The Birthplace of Precautionary Antitrust, Global Antitrust Institute’s Report on the Digital Economy, 597-651.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the law, economics, and policy of the antitrust lawsuits against Google. The entire series of posts is available here.]

Judges sometimes claim that they do not pick winners when they decide antitrust cases. Nothing could be further from the truth.

Competitive conduct by its nature harms competitors, and so if antitrust were merely to prohibit harm to competitors, antitrust would then destroy what it is meant to promote.

What antitrust prohibits, therefore, is not harm to competitors but rather harm to competitors that fails to improve products. Only in this way is antitrust able to distinguish between the good firm that harms competitors by making superior products that consumers love and that competitors cannot match and the bad firm that harms competitors by degrading their products without offering consumers anything better than what came before.

That means, however, that antitrust must pick winners: antitrust must decide what is an improvement and what not. And a more popular search engine is a clear winner.

But one should not take its winningness for granted. For once upon a time there was another winner that the courts always picked, blocking antitrust case after antitrust case. Until one day the courts stopped picking it.

That was the economy of scale.

The Structure of the Google Case

Like all antitrust cases that challenge the exercise of power, the government’s case against Google alleges denial of an input to competitors in some market. Here the input is default search status in smartphones, the competitors are rival search providers, and the market is search advertising. The basic structure of the case is depicted in the figure below.

Although brought as a monopolization case under Section 2 of the Sherman Act, this is at heart an exclusive dealing case of the sort normally brought under Section 1 of the Sherman Act: the government’s core argument is that Google uses contracts with smartphone makers, pursuant to which the smartphone makers promise to make Google, and not competitors, the search default, to harm competing search advertising providers and by extension competition in the search advertising market.

The government must show anticompetitive conduct, monopoly power, and consumer harm in order to prevail.

Let us assume that there is monopoly power. The company has more than 70% of the search advertising market, which is in the zone normally required to prove that element of a monopolization claim.

The problem of anticompetitive conduct is only slightly more difficult.

Anticompetitive conduct is only ever one thing in antitrust: denial of an essential input to a competitor. There is no other way to harm rivals.

(To be sure, antitrust prohibits harm to competition, not competitors, but that means only that harm to competitors is necessary but insufficient for liability. The consumer harm requirement decides whether the requisite harm to competitors is also harm to competition.)

It is not entirely clear just how important default search status really is to running a successful search engine, but let us assume that it is essential, as the government suggests.

Then the question whether Google’s contracts are anticompetitive turns on how much of the default search input Google’s contracts foreclose to rival search engines. If a lot, then the rivals are badly harmed. If a little, then there may be no harm at all.

The answer here is that there is a lot of foreclosure, at least if the government’s complaint is to be believed. Through its contracts with Apple and makers of Android phones, Google has foreclosed default search status to rivals on virtually every single smartphone.

That leaves consumer harm. And here is where things get iffy.

Usage as a Product Improvement: A Very Convenient Argument

The inquiry into consumer harm evokes measurements of the difference between demand curves and price lines, or extrapolations of compensating and equivalent variation using indifference curves painstakingly pieced together based on the assumptions of revealed preference.

But while the parties may pay experts plenty to spin such yarns, and judges may pretend to listen to them, in the end, for the judges, it always comes down to one question only: did exclusive dealing improve the product?

If it did, then the judge assumes that the contracts made consumers better off and the defendant wins. And if it did not, then off with their heads.

So, does foreclosing all this default search space to competitors make Google search advertising more valuable to advertisers?

Those who leap to Google’s defense say yes, for default search status increases the number of people who use Google’s search engine. And the more people use Google’s search engine, the more Google learns about how best to answer search queries and which advertisements will most interest which searchers. And that ensures that even more people will use Google’s search engine, and that Google will do an even better job of targeting ads on its search engine.

And that in turn makes Google’s search advertising even better: able to reach more people and to target ads more effectively to them.

None of that would happen if defaults were set to other engines and users spurned Google, and so foreclosing default search space to rivals undoubtedly improves Google’s product.

This is a nice argument. Indeed, it is almost too nice, for it seems to suggest that almost anything Google might do to steer users away from competitors and to itself deserves antitrust immunity. Suppose Google were to brandish arms to induce you to run your next search on Google. That would be a crime, but, on this account, not an antitrust crime. For getting you to use Google does make Google better.

The argument that locking up users improves the product is of potential use not just to Google but to any of the many tech companies that run on advertising—Facebook being a notable example—so it potentially immunizes an entire business model from antitrust scrutiny.

It turns out that has happened before.

Economies of Scale as a Product Improvement: Once a Convenient Argument

Once upon a time, antitrust exempted another kind of business for which products improve the more people use them. The business was industrial production, and it differs from online advertising only in the irrelevant characteristic that the improvement that comes with expanding use is not in the quality of the product but in the cost per unit of producing it.

The hallmark of the industrial enterprise is high fixed costs and low marginal costs. The textile mill differs from pre-industrial piecework weaving in that once a $10 million investment in machinery has been made, the mill can churn out yard after yard of cloth for pennies. The pieceworker, by contrast, makes a relatively small up-front investment—the cost of raising up the hovel in which she labors and making her few tools—but spends the same large amount of time to produce each new yard of cloth.

Large fixed costs and low marginal costs lie at the heart of the bounty of the modern age: the more you produce, the lower the unit cost, and so the lower the price at which you can sell your product. This is a recipe for plenty.
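
A minimal way to see the arithmetic, using the textile example's $10 million fixed cost and assuming, purely for illustration, a marginal cost of a few cents per yard:

```latex
% Average (unit) cost with fixed cost F and constant marginal cost c:
AC(q) = \frac{F}{q} + c
% Illustrative numbers (the marginal cost is an assumption, not from the text):
% with F = \$10{,}000{,}000 and c = \$0.05 per yard,
% AC(100{,}000) \approx \$100.05 while AC(10{,}000{,}000) = \$1.05,
% so unit cost falls continuously as output grows.
```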

But it also means that, so long as consumer demand in a given market is lower than the capacity of any particular plant, driving buyers to a particular seller and away from competitors always improves the product, in the sense that it enables the firm to increase volume and reduce unit cost, and therefore to sell the product at a lower price.

If the promise of the modern age is goods at low prices, then the implication is that antitrust should never punish firms for driving rivals from the market and taking over their customers. Indeed, efficiency requires that only one firm should ever produce in any given market, at least in any market for which a single plant is capable of serving all customers.

For antitrust in the late 19th and early 20th centuries, beguiled by this advantage to size, exclusive dealing, refusals to deal, even the knife in a competitor's back: whether these ran afoul of other areas of law or not, they were all for the better, because they allowed industrial enterprises to achieve economies of scale.

It is no accident that, a few notable triumphs aside, antitrust did not come into its own until the mid-1930s, 40 years after its inception, on the heels of an intellectual revolution that explained, for the first time, why it might actually be better for consumers to have more than one seller in a market.

The Monopolistic Competition Revolution

The revolution came in the form of the theory of monopolistic competition and its cousin, the theory of creative destruction, developed between the 1920s and 1940s by Edward Chamberlin, Joan Robinson and Joseph Schumpeter.

These theories suggested that consumers might care as much about product quality as they do about product cost, and indeed would be willing to abandon a low-cost product for a higher-quality, albeit more expensive, one.

From this perspective, the world of economies of scale and monopoly production was the drab world of Soviet state-owned enterprises churning out one type of shoe, one brand of cleaning detergent, and so on.

The world of capitalism and technological advance, by contrast, was one in which numerous firms produced batches of differentiated products in amounts sometimes too small fully to realize all scale economies, but for which consumers were nevertheless willing to pay because the products better fit their preferences.

What is more, the striving of monopolistically competitive firms to lure away each other’s customers with products that better fit their tastes led to disruptive innovation— “creative destruction” was Schumpeter’s famous term for it—that brought about not just different flavors of the same basic concept but entirely new concepts. The competition to create a better flip phone, for example, would lead inevitably to a whole new paradigm, the smartphone.

This reasoning combined with work in the 1940s and 1950s on economic growth that quantified for the first time the key role played by technological change in the vigor of capitalist economies—the famous Solow residual—to suggest that product improvements, and not the cost reductions that come from capital accumulation and their associated economies of scale, create the lion’s share of consumer welfare. Innovation, not scale, was king.

Antitrust responded by, for the first time in its history, deciding between kinds of product improvements, rather than just in favor of improvements, casting economies of scale out of the category of improvements subject to antitrust immunity, while keeping quality improvements immune.

Casting economies of scale out of the protected product improvement category gave antitrust something to do for the first time. It meant that big firms had to plead more than just the cost advantages of being big in order to obtain license to push their rivals around. And government could now start reliably to win cases, rather than just the odd cause célèbre.

It is this intellectual watershed, and not Thurman Arnold’s tenacity, that was responsible for antitrust’s emergence as a force after World War Two.

Usage-Based Improvements Are Not Like Economies of Scale

The improvements in advertising that come from user growth fall squarely on the quality side of the ledger—the value they create is not due to the ability to average production costs over more ad buyers—and so they count as the kind of product improvements that antitrust continues to immunize today.

But given the pervasiveness of this mode of product improvement in the tech economy—the fact that virtually any tech firm that sells advertising can claim to be improving a product by driving users to itself and away from competitors—it is worth asking whether we have not reached a new stage in economic development in which this form of product improvement ought, like economies of scale, to be denied protection.

Shouldn’t the courts demand more and better innovation of big tech firms than just the same old big-data-driven improvements they serve up year after year?

Galling as it may be to those who, like myself, would like to see more vigorous antitrust enforcement in general, the answer would seem to be “no.” For what induced the courts to abandon antitrust immunity for economies of scale in the mid-20th century was not the mere fact that immunizing economies of scale paralyzed antitrust. Smashing big firms is not, after all, an end in itself.

Instead, monopolistic competition, creative destruction and the Solow residual induced the change, because they suggested both that other kinds of product improvement are more important than economies of scale and, crucially, that protecting economies of scale impedes development of those other kinds of improvements.

A big firm that excludes competitors in order to reach scale economies not only excludes competitors who might have produced an identical or near-identical product, but also excludes competitors who might have produced a better-quality product, one that consumers would have preferred to purchase even at a higher price.

To cast usage-based improvements out of the product improvement fold, a case must be made that excluding competitors in order to pursue such improvements will block a different kind of product improvement that contributes even more to consumer welfare.

If we could say, for example, that suppressing search competitors suppresses more-innovative search engines that ad buyers would prefer, even if those innovative search engines were to lack the advantages that come from having a large user base, then a case might be made that user growth should no longer count as a product improvement immune from antitrust scrutiny.

And even then, the case against usage-based improvements would need to be general enough to justify an epochal change in policy, rather than be limited to a particular technology in a particular lawsuit. For the courts hate to balance in individual cases, statements to the contrary in their published opinions notwithstanding.

But there is nothing in the Google complaint, much less the literature, to suggest that usage-based improvements are problematic in this way. Indeed, much of the value created by the information revolution seems to inhere precisely in its ability to centralize usage.

Americans Keep Voting to Centralize the Internet

In the early days of the internet, theorists mistook its decentralized architecture for a feature, rather than a bug. But internet users have since shown, time and again, that they believe the opposite.

For example, the basic protocols governing email were engineered to allow every American to run his own personal email server.

But Americans hated the freedom that created—not least the spam—and opted instead to get their email from a single server: the one run by Google as Gmail.

The basic protocols governing web traffic were also designed to allow every American to run whatever other communications services he wished—chat, video chat, RSS, webpages—on his own private server in distributed fashion.

But Americans hated the freedom that created—not least having to build and rebuild friend networks across platforms—and they voted instead overwhelmingly to get their social media from a single server: Facebook.

Indeed, the basic protocols governing internet traffic were designed to allow every business to store and share its own data from its own computers, in whatever form.

But American businesses hated that freedom—not least the cost of having to buy and service their own data storage machines—and instead 40% of the internet is now stored and served from Amazon Web Services.

Similarly, advertisers have the option of placing advertisements on the myriad independently-run websites that make up the internet—known in the business as the “open web”—by placing orders through competitive ad exchanges. But advertisers have instead voted mostly to place ads on the handful of highly centralized platforms known as “walled gardens,” including Facebook, Google’s YouTube and, of course, Google Search.

The communications revolution, they say, is all about “bringing people together.” It turns out that’s true.

And that Google should win on consumer harm.

Remember the Telephone

Indeed, the same mid-20th century antitrust that thought so little of economies of scale as a defense immunized usage-based improvements when it encountered them in that most important of internet precursors: the telephone.

The telephone, like most internet services, gets better as usage increases. The more people are on a particular telephone network, the more valuable the network becomes to subscribers.
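
One stylized way to quantify that claim (an illustration, not anything from the telephone record): with n subscribers, the number of distinct pairs who can call one another grows roughly with the square of n.

```latex
% Potential connections on a network with n subscribers:
C(n) = \binom{n}{2} = \frac{n(n-1)}{2}
% e.g. C(100) = 4{,}950 while C(1{,}000) = 499{,}500: a tenfold increase in
% subscribers yields roughly a hundredfold increase in possible connections.
```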

Just as with today’s internet services, the advantage of a large user base drove centralization of telephone services a century ago into the hands of a single firm: AT&T. Aside from a few business executives who liked the look of a desk full of handsets, consumers wanted one phone line that they could use to call everyone.

Although the government came close to breaking up AT&T in the early 20th century, it eventually backed off, because a phone system in which you must subscribe to the right carrier to reach a friend just doesn't make sense.

Instead, Congress and state legislatures stepped in to take the edge off monopoly by regulating phone pricing. And when antitrust finally did break AT&T up in 1982, it did so in a distinctly regulatory fashion, requiring that AT&T’s parts connect each other’s phone calls, something that Congress reinforced in the Telecommunications Act of 1996.

The message was clear: the sort of usage-based improvements one finds in communications are real product improvements. And antitrust can only intervene if it has a way to preserve them.

The equivalent of interconnection in search, requiring that the benefits of usage, in the form of data and attention, be shared among competing search providers, might be feasible. But it is hard to imagine the court in the Google case ordering interconnection without the benefit of decades of regulatory experience with the defendant's operations that the district court in 1982 could draw upon in the AT&T case.

The solution for the tech giants today is the same as the solution for AT&T a century ago: to regulate rather than to antitrust.

Microsoft Not to the Contrary, Because Users Were in Common

Parallels to the government’s 1990s-era antitrust case against Microsoft are not to the contrary.

As Sam Weinstein has pointed out to me, Microsoft, like Google, was at heart an exclusive dealing case: Microsoft contracted with computer manufacturers to prevent Netscape Navigator, an early web browser, from serving as the default web browser on Windows PCs.

That prevented Netscape, the argument went, from growing to compete with Windows in the operating system market, much the way Google's Chrome browser has become a substitute for Windows on low-end notebook computers today.

The D.C. Circuit agreed that default status was an essential input for Netscape as it sought eventually to compete with Windows in the operating system market.

The court also accepted the argument that the exclusive dealing did not improve Microsoft’s operating system product.

This at first seems to contradict the notion that usage improves products, for, like search advertising, operating systems get better as their user bases increase. The more people use an operating system, the more application developers are willing to write for the system, and the better the system therefore becomes.

It seems to follow that keeping competitors off competing operating systems and on Windows made Windows better. If the court nevertheless held Microsoft liable, it must be because the court refused to extend antitrust immunity to usage-based improvements.

The trouble with this line of argument is that it ignores the peculiar thing about the Microsoft case: that while the government alleged that Netscape was a potential competitor of Windows, Netscape was also an application that ran on Windows.

That means that, unlike Google and rival search engines, Windows and Netscape shared users.

So, Microsoft’s exclusive dealing did not increase its user base and therefore could not have improved Windows, at least not by making Windows more appealing for applications developers. Driving Netscape from Windows did not enable developers to reach even one more user. Conversely, allowing Netscape to be the default browser on Windows would not have reduced the number of Windows users, because Netscape ran on Windows.

By contrast, a user who runs a search in Bing does not run the same search simultaneously in Google, and so Bing users are not Google users. Google’s exclusive dealing therefore increases its user base and improves Google’s product, whereas Microsoft’s exclusive dealing served only to reduce Netscape’s user base and degrade Netscape’s product.

Indeed, if letting Netscape be the default browser on Windows was a threat to Windows, it was not because it prevented Microsoft from improving its product, but because Netscape might eventually have become an operating system, and indeed a better operating system, than Windows, and consumers and developers, who could be on both at the same time if they wished, might have nevertheless chosen eventually to go with Netscape alone.

Though it does not help the government in the Google case, Microsoft still does offer a beacon of hope for those concerned about size, for Microsoft's subsequent history reminds us that yesterday's behemoth is often today's also-ran.

And the favorable settlement terms Microsoft ultimately used to escape real consequences for its conduct 20 years ago imply that, at least in high-tech markets, we don’t always need antitrust for that to be true.

[TOTM: The following is part of a digital symposium by TOTM guests and authors on the law, economics, and policy of the antitrust lawsuits against Google. The entire series of posts is available here.]

Google is facing a series of lawsuits in 2020 and 2021 that challenge some of the most fundamental parts of its business, and of the internet itself — Search, Android, Chrome, Google’s digital-advertising business, and potentially other services as well. 

The U.S. Justice Department (DOJ) has brought a case alleging that Google's deals with Android smartphone manufacturers, Apple, and third-party browsers to make Google Search their default general search engine are anticompetitive (ICLE's tl;dr on the case is here), and the State of Texas has brought a suit against Google's display advertising business. These follow a market study by the United Kingdom's Competition and Markets Authority that recommended an ex ante regulator and code of conduct for Google and Facebook. At least one more suit is expected to follow.

These lawsuits will test ideas that are at the heart of modern antitrust debates: the roles of defaults and exclusivity deals in competition; the costs of self-preferencing and its benefits to competition; the role of data in improving software and advertising, and its role as a potential barrier to entry; and potential remedies in these markets and their limitations.

This Truth on the Market symposium asks contributors with wide-ranging viewpoints to comment on some of these issues as they arise in the lawsuits being brought—starting with the U.S. Justice Department’s case against Google for alleged anticompetitive practices in search distribution and search-advertising markets—and continuing throughout the duration of the lawsuits.

This week the Senate will hold a hearing into potential anticompetitive conduct by Google in its display advertising business—the "stack" of products that it offers to advertisers seeking to place display ads on third-party websites. It is also widely reported that the Department of Justice is preparing a lawsuit against Google that will likely include allegations of anticompetitive behavior in this market, and is likely to be joined by a number of state attorneys general in that lawsuit. Meanwhile, several papers have been published detailing these allegations.

This aspect of digital advertising can be incredibly complex and difficult to understand. Here we explain how display advertising fits in the broader digital advertising market, describe how display advertising works, consider the main allegations against Google, and explain why Google’s critics are misguided to focus on antitrust as a solution to alleged problems in the market (even if those allegations turn out to be correct).

Display advertising in context

Over the past decade, the price of advertising has fallen steadily while output has risen. Spending on digital advertising in the US grew from $26 billion in 2010 to nearly $130 billion in 2019, an average increase of 20% a year. Over the same period the Producer Price Index for Internet advertising sales declined by nearly 40%. The rising spending in the face of falling prices indicates the number of ads bought and sold increased by approximately 27% a year. Since 2000, advertising spending has been falling as a share of GDP, with online advertising growing as a share of that. The combination of increasing quantity, decreasing cost, and increasing total revenues is consistent with a growing and increasingly competitive market.
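
As a quick back-of-the-envelope check of those figures (using only the numbers quoted above and assuming nine years of compounding between 2010 and 2019), a sketch in Python:

```python
# Rough check of the growth rates quoted above (2010-2019, nine years of compounding).
spend_2010, spend_2019 = 26e9, 130e9  # US digital ad spend, from the figures above
years = 9

spend_growth = (spend_2019 / spend_2010) ** (1 / years) - 1    # ~0.20 -> "20% a year"
price_change = (1 - 0.40) ** (1 / years) - 1                   # PPI down ~40% -> ~ -5.5%/year
quantity_growth = (1 + spend_growth) / (1 + price_change) - 1  # ~0.27 -> "27% a year"

print(round(spend_growth, 3), round(price_change, 3), round(quantity_growth, 3))
```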

Display advertising on third-party websites is only a small subsection of the digital advertising market, comprising approximately 15-20% of digital advertising spending in the US. The rest of the digital advertising market is made up of ads on search results pages on sites like Google, Amazon and Kayak, on people’s Instagram and Facebook feeds, listings on sites like Zillow (for houses) or Craigslist, referral fees paid to price comparison websites for things like health insurance, audio and visual ads on services like Spotify and Hulu, and sponsored content from influencers and bloggers who will promote products to their fans. 

And digital advertising itself is only one of many channels through which companies can market their products. About 53% of total advertising spending in the United States goes on digital channels, with 30% going on TV advertising and the rest on things like radio ads, billboards and other more traditional forms of advertising. A few people still even read physical newspapers and the ads they contain, although physical newspapers’ bigger money makers have traditionally been classified ads, which have been replaced by less costly and more effective internet classifieds, such as those offered by Craigslist, or targeted ads on Google Maps or Facebook.

Indeed, it should be noted that advertising itself is only part of the larger marketing market of which non-advertising marketing communication—e.g., events, sales promotion, direct marketing, telemarketing, product placement—is as big a part as is advertising (each is roughly $500bn globally); it just hasn’t been as thoroughly disrupted by the Internet yet. But it is a mistake to assume that digital advertising is not a part of this broader market. And of that $1tr global market, Internet advertising in total occupies only about 18%—and thus display advertising only about 3%.
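
Putting the shares quoted above together (all figures are the rough approximations given in the text):

```latex
% Display advertising's share of the ~\$1tr global marketing market:
% internet advertising ~18% of the total, display ~15-20% of internet advertising,
0.18 \times (0.15\text{--}0.20) \approx 0.027\text{--}0.036 \approx 3\%.
```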

Ad placement is only one part of the cost of digital advertising. An advertiser trying to persuade people to buy its product must also do market research and analytics to find out who its target market is and what they want. Moreover, there are the costs of designing and managing a marketing campaign and additional costs to analyze and evaluate the effectiveness of the campaign. 

Nevertheless, one of the most straightforward ways to earn money from a website is to show ads to readers alongside the publisher’s content. To satisfy publishers’ demand for advertising revenues, many services have arisen to automate and simplify the placement of and payment for ad space on publishers’ websites. Google plays a large role in providing these services—what is referred to as “open display” advertising. And it is Google’s substantial role in this space that has sparked speculation and concern among antitrust watchdogs and enforcement authorities.

Before delving into the open display advertising market, a quick note about terms. In these discussions, “advertisers” are businesses that are trying to sell people stuff. Advertisers include large firms such as Best Buy and Disney and small businesses like the local plumber or financial adviser. “Publishers” are websites that carry those ads, and publish content that users want to read. Note that the term “publisher” refers to all websites regardless of the things they’re carrying: a blog about the best way to clean stains out of household appliances is a “publisher” just as much as the New York Times is. 

Under this broad definition, Facebook, Instagram, and YouTube are also considered publishers. In their role as publishers, they have a common goal: to provide content that attracts users to their pages who will act on the advertising displayed. “Users” are you and me—the people who want to read publishers’ content, and to whom advertisers want to show ads. Finally, “intermediaries” are the digital businesses, like Google, that sit in between the advertisers and the publishers, allowing them to do business with each other without ever meeting or speaking.

The display advertising market

If you're an advertiser, display advertising works like this: your company—one that sells shoes, let's say—wants to reach a certain kind of person and tell her about the company's shoes. These shoes are comfortable, stylish, and inexpensive. You use a tool like Google Ads (or, if yours is a big company and you want a more expansive campaign over which you have more control, Google Marketing Platform) to design and upload an ad, and tell Google about the people you want to reach—their age and location, say, and/or characterizations of their past browsing and searching habits ("interested in sports").

Using that information, Google finds ad space on websites whose audiences match the people you want to target. This ad space is auctioned off to the highest bidder among the range of companies vying, along with your shoe company, to reach users matching the characteristics of the website's users. Thanks to tracking data, those websites don't have to be sports-related: as a user browses sports-related sites on the web, her browser picks up files (cookies) that tag her as someone potentially interested in sports apparel for targeting later.

So a user might look at a sports website and then later go to a recipe blog, and there receive the shoes ad on the basis of her earlier browsing. You, the shoe seller, hope that she will either click through and buy (or at least consider buying) the shoes when she sees those ads, but one of the benefits of display advertising over search advertising is that—as with TV ads or billboard ads—just seeing the ad will make her aware of the product and potentially more likely to buy it later. Advertisers thus sometimes pay on the basis of clicks, sometimes on the basis of views, and sometimes on the basis of conversion (when a consumer takes an action of some sort, such as making a purchase or filling out a form).

That's the advertiser's perspective. From the publisher's perspective—the owner of that recipe blog, let's say—you want to auction ad space off to advertisers like that shoe company. In that case, you go to an ad server—Google's product is called AdSense—give them a little bit of information about your site, and add some HTML code to your website. These ad servers gather information about your content (e.g., by looking at keywords you use) and your readers (e.g., by looking at what websites they've used in the past to make guesses about what they'll be interested in) and place relevant ads next to and among your content. If readers click, lucky you—you'll get paid a few cents or dollars.

Apart from privacy concerns about the tracking of users, the really tricky and controversial part here concerns the way scarce advertising space is allocated. Most of the time, it’s done through auctions that happen in real time: each time a user loads a website, an auction is held in a fraction of a second to decide which advertiser gets to display an ad. The longer this process takes, the slower pages load and the more likely users are to get frustrated and go somewhere else.
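
A minimal sketch of that real-time step, assuming a classic sealed-bid, second-price rule (an illustration only; real exchanges differ in their rules, and several have since moved to first-price auctions). The advertiser names and bid values are made up:

```python
from dataclasses import dataclass

@dataclass
class Bid:
    advertiser: str
    amount: float  # bid per impression, in dollars

def run_auction(bids, floor=0.0):
    """Pick a winner for a single ad impression.

    Highest bid wins; the winner pays the second-highest bid (or the floor),
    the classic second-price rule many exchanges historically used.
    """
    eligible = [b for b in bids if b.amount >= floor]
    if not eligible:
        return None, 0.0  # no ad served for this impression
    ranked = sorted(eligible, key=lambda b: b.amount, reverse=True)
    winner = ranked[0]
    price = ranked[1].amount if len(ranked) > 1 else floor
    return winner, price

# One page load triggers one auction, resolved in a fraction of a second.
bids = [Bid("shoe_brand", 4.20), Bid("insurer", 3.80), Bid("travel_site", 2.10)]
winner, price = run_auction(bids, floor=1.00)
print(winner.advertiser, price)  # shoe_brand 3.8
```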

As well as the service hosting the auction, there are lots of little functions that different companies perform that make the auction and placement process smoother. Some fear that by offering a very popular product integrated end to end, Google’s “stack” of advertising products can bias auctions in favour of its own products. There’s also speculation that Google’s product is so tightly integrated and so effective at using data to match users and advertisers that it is not viable for smaller rivals to compete.

We'll discuss this speculation and fear in more detail below. But it's worth bearing in mind that this kind of real-time bidding for ad placement was not always the norm, and is not the only way that websites display ads to their users even today. Big advertisers and websites often deal with each other directly. As with, say, TV advertising, large advertisers often have a good idea about the people they want to reach. And big publishers (like popular news websites) often have a good idea about who their readers are. For example, big brands often want to push a message to a large number of people across different customer types as part of a broader ad campaign.

In these kinds of direct sales, the space is sometimes bought outright, in advance, and reserved for those advertisers. In most cases, direct sales are run through limited, intermediated auction services that are not open to the general market. Put together, these kinds of direct ad buys account for close to 70% of total US display advertising spending. The remainder—the stuff that's left over after these kinds of sales have been done—is typically sold through the real-time, open display auctions described above.

Different adtech products compete on their ability to target customers effectively, to serve ads quickly (since any delay in the auction and ad placement process slows down page load times for users), and to do so inexpensively. All else equal (including the effectiveness of the ad placement), advertisers want to pay the lowest possible price to place an ad. Similarly, publishers want to receive the highest possible price to display an ad. As a result, both advertisers and publishers have a keen interest in reducing the intermediary’s “take” of the ad spending.

This is all a simplification of how the market works. There is not one single auction house for ad space—in practice, many advertisers and publishers end up having to use lots of different auctions to find the best price. As the market evolved to reach this state from the early days of direct ad buys, new functions that added efficiency to the market emerged. 

In the early years of ad display auctions, individual processes in the stack were performed by numerous competing companies. Through a process of “vertical integration” some companies, such as Google, brought these different processes under the same roof, with the expectation that integration would streamline the stack and make the selling and placement of ads more efficient and effective. The process of vertical integration in pursuit of efficiency has led to a more consolidated market in which Google is the largest player, offering simple, integrated ad buying products to advertisers and ad selling products to publishers. 

Google is by no means the only integrated adtech service provider, however: Facebook, Amazon, Verizon, AT&T/Xandr, theTradeDesk, LumenAd, Taboola and others also provide end-to-end adtech services. But, in the market for open auction placement on third-party websites, Google is the biggest.

The cases against Google

The UK's Competition and Markets Authority (CMA) carried out a formal study into the digital advertising market between 2019 and 2020, issuing its final report in July of this year. Although also encompassing Google's Search advertising business and Facebook's display advertising business (both of which relate to ads on those companies' "owned and operated" websites and apps), the CMA study involved the most detailed independent review of Google's open display advertising business to date.

That study did not lead to any competition enforcement proceedings, but it did conclude that Google's vertically integrated products led to conflicts of interest that could lead it to behave in ways that did not benefit the advertisers and publishers that use it. One example was Google's withholding from publishers of certain data that would make it easier for them to use other ad-selling products; another was the practice of setting price floors that allegedly led advertisers to pay more than they would otherwise.

Instead the CMA recommended the setting up of a “Digital Markets Unit” (DMU) that could regulate digital markets in general, and a code of conduct for Google and Facebook (and perhaps other large tech platforms) intended to govern their dealings with smaller customers.

The CMA’s analysis is flawed, however. For instance, it makes big assumptions about the dependency of advertisers on display advertising, largely assuming that they would not switch to other forms of advertising if prices rose, and it is light on economics. But factually it is the most comprehensively researched investigation into digital advertising yet published.

Piggybacking on the CMA’s research, and mounting perhaps the strongest attack on Google’s adtech offerings to date, was a paper released just prior to the CMA’s final report called “Roadmap for a Digital Advertising Monopolization Case Against Google”, by Yale economist Fiona Scott Morton and Omidyar Network lawyer David Dinielli. Dinielli will testify before the Senate committee.

While the Scott Morton and Dinielli paper is extremely broad, it also suffers from a number of problems. 

One, because it was released before the CMA's final report, it is largely based on the interim report released months earlier by the CMA, halfway through the market study in December 2019. This means that several of its claims are out of date. For example, it makes much of the possibility raised by the CMA in its interim report that Google may take a larger cut of advertising spending than its competitors, and of claims made in another report that Google introduces "hidden" fees that increase the overall cut it takes from ad auctions.

But in the final report, after further investigation, the CMA concludes that this is not the case. In the final report, the CMA describes its analysis of all Google Ad Manager open auctions related to UK web traffic during the period between 8–14 March 2020 (involving billions of auctions). This, according to the CMA, allowed it to observe any possible “hidden” fees as well. The CMA concludes:

Our analysis found that, in transactions where both Google Ads and Ad Manager (AdX) are used, Google’s overall take rate is approximately 30% of advertisers’ spend. This is broadly in line with (or slightly lower than) our aggregate market-wide fee estimate outlined above. We also calculated the margin between the winning bid and the second highest bid in AdX for Google and non-Google DSPs, to test whether Google was systematically able to win with a lower margin over the second highest bid (which might have indicated that they were able to use their data advantage to extract additional hidden fees). We found that Google’s average winning margin was similar to that of non-Google DSPs. Overall, this evidence does not indicate that Google is currently extracting significant hidden fees. As noted below, however, it retains the ability and incentive to do so. (p. 275, emphasis added)
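
To make the two quantities in that passage concrete, here is a toy calculation (illustrative numbers only, not CMA data, and not the CMA's methodology):

```python
def take_rate(advertiser_spend, publisher_receipts):
    # Share of advertiser spend kept by the intermediaries along the chain.
    return (advertiser_spend - publisher_receipts) / advertiser_spend

def winning_margin(bids):
    # Gap between the winning bid and the runner-up in a single auction.
    top, second = sorted(bids, reverse=True)[:2]
    return top - second

# $1.00 of advertiser spend with $0.70 reaching the publisher implies a ~30% overall
# take rate, the rough figure the CMA reports for transactions using Google Ads and AdX.
print(take_rate(1.00, 0.70))            # ~0.30
print(winning_margin([4.2, 3.8, 2.1]))  # ~0.40
```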

Scott Morton and Dinielli also misquote and/or misunderstand important sections of the CMA interim report as relating to display advertising when, in fact, they relate to search. For example, Scott Morton and Dinielli write that the “CMA concluded that Google has nearly insurmountable advantages in access to location data, due to the location information [uniquely available to it from other sources].” (p. 15). The CMA never makes any claim of “insurmountable advantage,” however. Rather, to support the claim, Scott Morton and Dinielli cite to a portion of the CMA interim report recounting a suggestion made by Microsoft regarding the “critical” value of location data in providing relevant advertising. 

But that portion of the report, as well as the suggestion made by Microsoft, is about search advertising. While location data may also be valuable for display advertising, it is not clear that the GPS-level data that is so valuable in providing mobile search ad listings (for a nearby cafe or restaurant, say) is particularly useful for display advertising, which may be just as well-targeted by less granular, city- or county-level location data, which is readily available from a number of sources. In any case, Scott Morton and Dinielli are simply wrong to use a suggestion offered by Microsoft relating to search advertising to demonstrate the veracity of an assertion about a conclusion drawn by the CMA regarding display advertising. 

Scott Morton and Dinielli also confusingly word their own judgements about Google’s conduct in ways that could be misinterpreted as conclusions by the CMA:

The CMA reports that Google has implemented an anticompetitive sales strategy on the publisher ad server end of the intermediation chain. Specifically, after purchasing DoubleClick, which became its publisher ad server, Google apparently lowered its prices to publishers by a factor of ten, at least according to one publisher’s account related to the CMA. (p. 20)

In fact, the CMA does not conclude that Google lowering its prices was an "anticompetitive sales strategy"—it does not use these words at all—and what Scott Morton and Dinielli are referring to is a claim by a rival ad server business, Smart, that Google cutting its prices after acquiring DoubleClick led to Google expanding its market share. Apart from the misleading wording, it is unclear why a competition authority should consider it to be "anticompetitive" when prices are falling and kept low, and—as Smart reported to the CMA—its competitor's response is to enhance its own offering.

The case that remains

Stripping away the elements of Scott Morton and Dinielli’s case that seem unsubstantiated by a more careful reading of the CMA reports, and with the benefit of the findings in the CMA’s final report, we are left with a case that argues that Google self-preferences to an unreasonable extent, giving itself a product that is as successful as it is in display advertising only because of Google’s unique ability to gain advantage from its other products that have little to do with display advertising. Because of this self-preferencing, they might argue, innovative new entrants cannot compete on an equal footing, so the market loses out on incremental competition because of the advantages Google gets from being the world’s biggest search company, owning YouTube, running Google Maps and Google Cloud, and so on. 

The most significant examples of this are Google’s use of data from other products—like location data from Maps or viewing history from YouTube—to target ads more effectively; its ability to enable advertisers placing search ads to easily place display ads through the same interface; its introduction of faster and more efficient auction processes that sidestep the existing tools developed by other third-party ad exchanges; and its design of its own tool (“open bidding”) for aggregating auction bids for advertising space to compete with (rather than incorporate) an alternative tool (“header bidding”) that is arguably faster, but costs more money to use.

These allegations require detailed consideration, and in a future paper we will attempt to assess them in detail. But in thinking about them now it may be useful to consider the remedies that could be imposed to address them, assuming they do diminish the ability of rivals to compete with Google: what possible interventions we could make in order to make the market work better for advertisers, publishers, and users. 

We can think of remedies as falling into two broad buckets: remedies that stop Google from doing things that improve the quality of its own offerings, thus making it harder for others to keep up; and remedies that require it to help rivals improve their products in ways otherwise accessible only to Google (e.g., by making Google’s products interoperable with third-party services) without inherently diminishing the quality of Google’s own products.

The first camp of these, what we might call “status quo minus,” includes rules banning Google from using data from its other products or offering single order forms for advertisers, or, in the extreme, a structural remedy that “breaks up” Google by either forcing it to sell off its display ad business altogether or to sell off elements of it. 

What is striking about these kinds of interventions is that all of them “work” by making Google worse for those that use it. Restrictions on Google’s ability to use data from other products, for example, will make its service more expensive and less effective for those who use it. Ads will be less well-targeted and therefore less effective. This will lead to lower bids from advertisers. Lower ad prices will be transmitted through the auction process to produce lower payments for publishers. Reduced publisher revenues will mean some content providers exit. Users will thus be confronted with less available content and ads that are less relevant to them and thus, presumably, more annoying. In other words: No one will be better off, and most likely everyone will be worse off.

The reason a “single order form” helps Google is that it is useful to advertisers, the same way it’s useful to be able to buy all your groceries at one store instead of lots of different ones. Similarly, vertical integration in the “ad stack” allows for a faster, cheaper, and simpler product for users on all sides of the market. A different kind of integration that has been criticized by others, where third-party intermediaries can bid more quickly if they host on Google Cloud, benefits publishers and users because it speeds up auction time, allowing websites to load faster. So does Google’s unified alternative to “header bidding,” giving a speed boost that is apparently valuable enough to publishers that they will pay for it.

So who would benefit from stopping Google from doing these things, or even forcing Google to sell its operations in this area? Not advertisers or publishers. Maybe Google’s rival ad intermediaries would; presumably, artificially hamstringing Google’s products would make it easier for them to compete with Google. But if so, it’s difficult to see how this would be an overall improvement. It is even harder to see how this would improve the competitive process—the very goal of antitrust. Rather, any increase in the competitiveness of rivals would result not from making their products better, but from making Google’s product worse. That is a weakening of competition, not its promotion. 

On the other hand, interventions that aim to make Google’s products more interoperable at least do not fall prey to this problem. Such “status quo plus” interventions would aim to take the benefits of Google’s products and innovations and allow more companies to use them to improve their own competing products. Not surprisingly, such interventions would be more in line with the conclusions the CMA came to than the divestitures and operating restrictions proposed by Scott Morton and Dinielli, as well as (reportedly) state attorneys general considering a case against Google.

But mandated interoperability raises a host of different concerns: extensive and uncertain rulemaking, ongoing regulatory oversight, and, likely, price controls, all of which would limit Google’s ability to experiment with and improve its products. The history of such mandated duties to deal or compulsory licenses is a troubled one, at best. But even if, for the sake of argument, we concluded that these kinds of remedies were desirable, they are difficult to impose via an antitrust lawsuit of the kind that the Department of Justice is expected to launch. Most importantly, if the conclusion of Google’s critics is that Google’s main offense is offering a product that is just too good to compete with without regulating it like a utility, with all the costs to innovation that that would entail, maybe we ought to think twice about whether an antitrust intervention is really worth it at all.

Hardly a day goes by without news of further competition-related intervention in the digital economy. The past couple of weeks alone have seen the European Commission announce various investigations into Apple’s App Store (here and here), as well as reaffirming its desire to regulate so-called “gatekeeper” platforms. Not to mention the CMA issuing its final report regarding online platforms and digital advertising.

While the limits of these initiatives have already been thoroughly dissected (e.g. here, here, here), a fundamental question seems to have eluded discussions: What are authorities trying to achieve here?

At first sight, the answer might appear to be extremely simple. Authorities want to “bring more competition” to digital markets. Furthermore, they believe that this competition will not arise spontaneously because of the underlying characteristics of digital markets (network effects, economies of scale, tipping, etc). But while it may have some intuitive appeal, this answer misses the forest for the trees.

Let us take a step back. Digital markets could have taken a vast number of shapes, so why have they systematically gravitated towards those very characteristics that authorities condemn? For instance, if market tipping and consumer lock-in are so problematic, why is it that new corners of the digital economy continue to emerge via closed platforms, as opposed to collaborative ones? Indeed, if recent commentary is to be believed, it is the latter that should succeed because they purportedly produce greater gains from trade. And if consumers and platforms cannot realize these gains by themselves, then we should see intermediaries step into the breach – i.e. arbitrage. This does not seem to be happening in the digital economy. The naïve answer is to say that this is precisely the problem; the harder task is to understand why.

To draw a parallel with evolution, in the late 18th century, botanists discovered an orchid with an unusually long spur. This made its nectar incredibly hard to reach for insects. Rational observers at the time could be forgiven for thinking that this plant made no sense, that its design was suboptimal. And yet, decades later, Darwin conjectured that the plant could be explained by a (yet to be discovered) species of moth with a proboscis that was long enough to reach the orchid's nectar. Decades after his death, the discovery of the xanthopan moth proved him right.

Returning to the digital economy, we thus need to ask why the platform business models that authorities desire are not the ones that emerge organically. Unfortunately, this complex question is mostly overlooked by policymakers and commentators alike.

Competition law on a spectrum

To understand the above point, let me start with an assumption: the digital platforms that have been subject to recent competition cases and investigations can all be classified along two (overlapping) dimensions: the extent to which they are open (or closed) to “rivals” and the extent to which their assets are propertized (as opposed to them being shared). This distinction borrows heavily from Jonathan Barnett’s work on the topic. I believe that by applying such a classification, we would obtain a graph that looks something like this:

While these classifications are certainly not airtight, this would be my reasoning:

In the top-left quadrant, Apple and Microsoft both operate closed platforms that are highly propertized (Apple's platform is likely even more closed than Microsoft's Windows ever was). Both firms notably control who is allowed on their platform and how they can interact with users. Apple notably vets the apps that are available on its App Store and influences how payments can take place. Microsoft famously restricted OEMs' freedom to distribute Windows PCs as they saw fit (notably by "imposing" certain default apps and, arguably, limiting the compatibility of Microsoft systems with servers running other OSs).

In the top right quadrant, the business models of Amazon and Qualcomm are much more "open", yet they remain highly propertized. Almost anyone is free to implement Qualcomm's IP – so long as they conclude a license agreement to do so. Likewise, there are very few limits on the goods that can be sold on Amazon's platform, but Amazon does, almost by definition, exert significant control over the way in which the platform is monetized. Retailers can notably pay Amazon for product placement, fulfilment services, etc.

Finally, Google Search and Android sit in the bottom left corner. Both of these services are weakly propertized. The Android source code is shared freely via an open source license, and Google’s apps can be preloaded by OEMs free of charge. The only limit is that Google partially closes its platform, notably by requiring that its own apps (if they are pre-installed) receive favorable placement. Likewise, Google’s search engine is only partially “open”. While any website can be listed on the search engine, Google selects a number of specialized results that are presented more prominently than organic search results (weather information, maps, etc). There is also some amount of propertization, namely that Google sells the best “real estate” via ad placement. 

Enforcement

Readers might ask what the point of this classification is. The answer is that in each of the above cases, competition intervention attempted (or is attempting) to move firms/platforms towards more openness and less propertization – the opposite of their original design.

The Microsoft cases and the Apple investigation both sought/seek to bring more openness and less propertization to these respective platforms. Microsoft was made to share proprietary data with third parties (less propertization) and open up its platform to rival media players and web browsers (more openness). The same applies to Apple. Available information suggests that the Commission is seeking to limit the fees that Apple can extract from downstream rivals (less propertization), as well as ensuring that it cannot exclude rival mobile payment solutions from its platform (more openness).

The various cases that were brought by EU and US authorities against Qualcomm broadly sought to limit the extent to which it was monetizing its intellectual property. The European Amazon investigation centers on the way in which the company uses data from third-party sellers (and ultimately the distribution of revenue between them and Amazon). In both of these cases, authorities are ultimately trying to limit the extent to which these firms propertize their assets.

Finally, both of the EU's Google cases sought to bring more openness to the company's main platform. The Google Shopping decision sanctioned Google for purportedly placing its services more favorably than those of its rivals. And the Android decision notably sought to facilitate rival search engines' and browsers' access to the Android ecosystem. The same appears to be true of ongoing investigations in the US.

What is striking about these decisions/investigations is that authorities are pushing back against the distinguishing features of the platforms they are investigating. Closed (or relatively closed) platforms are being opened up, and firms with highly propertized assets are made to share them (or, at the very least, monetize them less aggressively).

The empty quadrant

All of this would not be very interesting if it weren’t for a final piece of the puzzle: the model of open and shared platforms that authorities apparently favor has traditionally struggled to gain traction with consumers. Indeed, there seem to be very few successful consumer-oriented products and services in this space.

There have been numerous attempts to introduce truly open consumer-oriented operating systems – both in the mobile and desktop segments. For the most part, these have ended in failure. Ubuntu and other Linux distributions remain fringe products. There have been attempts to create open-source search engines; again, they have not met with success. The picture is similar in the online retail space. Amazon appears to have beaten eBay despite the latter being more open and less propertized – Amazon has historically charged higher fees than eBay and offers sellers much less freedom in the way they sell their goods. This theme is repeated in the standardization space. There have been innumerable attempts to impose open royalty-free standards. At least in the mobile internet industry, few if any of these have taken off (5G and WiFi are the best examples of this trend). That pattern is repeated in other highly-standardized industries, like digital video formats. Most recently, the proprietary Dolby Vision format seems to be winning the war against the open HDR10+ format.

This is not to say there haven’t been any successful ventures in this space – the internet, blockchain and Wikipedia all spring to mind – or that we will not see more decentralized goods in the future. But by and large firms and consumers have not yet taken to the idea of open and shared platforms. And while some “open” projects have achieved tremendous scale, the consumer-facing side of these platforms is often dominated by intermediaries that opt for much more traditional business models (think of Coinbase and Blockchain, or Android and Linux).

An evolutionary explanation?

The preceding paragraphs have posited a recurring reality: the digital platforms that competition authorities are trying to bring about are fundamentally different from those that emerge organically. This raises the question: why have authorities' ideal platforms, so far, failed to achieve truly meaningful success at consumers' end of the market?

I can see at least three potential explanations:

  1. Closed/propertized platforms have systematically -and perhaps anticompetitively- thwarted their open/shared rivals;
  2. Shared platforms have failed to emerge because they are much harder to monetize (and there is thus less incentive to invest in them);
  3. Consumers have opted for closed systems precisely because they are closed.

I will not go into details over the merits of the first conjecture. Current antitrust debates have endlessly rehashed this proposition. However, it is worth mentioning that many of today's dominant platforms overcame open/shared rivals well before they achieved their current size (Unix is older than Windows, Linux is older than iOS, eBay and Amazon are basically the same age, etc). It is thus difficult to make the case that the early success of their business models was down to anticompetitive behavior.

Much more interesting is the fact that options (2) and (3) are almost systematically overlooked – especially by antitrust authorities. And yet, if true, both of them would strongly cut against current efforts to regulate digital platforms and ramp up antitrust enforcement against them.

For a start, it is not unreasonable to suggest that highly propertized platforms are generally easier to monetize than shared ones (2). For example, open-source platforms often rely on complementarities for monetization, but this tends to be vulnerable to outside competition and free-riding. If this is true, then there is a natural incentive for firms to invest and innovate in more propertized environments. In turn, competition enforcement that limits platforms' ability to propertize their assets may harm innovation.

Similarly, authorities should at the very least reflect on whether consumers really want the more "competitive" ecosystems that they are trying to design (3).

For instance, it is striking that the European Commission has a long track record of seeking to open up digital platforms (the Microsoft decisions are perhaps the most salient example). And yet, even after these interventions, new firms have kept on using the very business model that the Commission reprimanded: Apple tied the Safari browser to its iPhones, Google went to some lengths to ensure that Chrome was preloaded on devices, and Samsung phones come with Samsung Internet as the default. But this has not deterred consumers. A sizable share of them notably opted for Apple's iPhone, which is even more centrally curated than Microsoft Windows ever was (and the same is true of Apple's MacOS).

Finally, it is worth noting that the remedies imposed by competition authorities are anything but unmitigated successes. Windows XP N (the version of Windows that came without Windows Media Player) was an unprecedented flop – it sold a paltry 1,787 copies. Likewise, the internet browser ballot box imposed by the Commission was so irrelevant to consumers that it took months for authorities to notice that Microsoft had removed it, in violation of the Commission’s decision. 

There are many reasons why consumers might prefer “closed” systems – even when they have to pay a premium for them. Take the example of app stores. Maintaining some control over the apps that can access the store notably enables platforms to easily weed out bad players. Similarly, controlling the hardware resources that each app can use may greatly improve device performance. In other words, centralized platforms can eliminate negative externalities that “bad” apps impose on rival apps and consumers. This is especially true when consumers struggle to attribute dips in performance to an individual app, rather than the overall platform. 

It is also conceivable that consumers prefer to make many of their decisions at the inter-platform level, rather than within each platform. In simple terms, users arguably make their most important decision when they choose between an Apple and an Android smartphone (or between a Mac and a PC, etc.). In doing so, they can select their preferred app suite with one simple decision. They might thus purchase an iPhone because they like the secure App Store, or an Android smartphone because they like the Chrome browser and Google Search. Furthermore, forcing too many “within-platform” choices upon users may undermine a product’s attractiveness. Indeed, it is difficult to create a high-quality reputation if each user’s experience is fundamentally different. In short, contrary to what antitrust authorities seem to believe, closed platforms might be giving most users exactly what they desire.

To conclude, consumers and firms appear to gravitate towards closed and highly propertized platforms, the opposite of what the Commission and many other competition authorities favor. The reasons for this trend are still poorly understood, and mostly ignored. Too often, it is simply assumed that consumers benefit from more openness, and that shared/open platforms are the natural order of things. This post certainly does not purport to answer the complex question of “the origin of platforms”, but it does suggest that what some refer to as “market failures” may in fact be features that explain the rapid emergence of the digital economy. Ronald Coase said it best when he quipped that economists always find a monopoly explanation for things that they fail to understand. The digital economy might just be the latest example of this unfortunate trend.

The goal of US antitrust law is to ensure that competition continues to produce positive results for consumers and the economy in general. To exactly that effect, we published a letter co-signed by twenty-three of the U.S.’s leading economists, legal scholars, and practitioners, including one winner of the Nobel Prize in economics (full list of signatories here), urging the House Judiciary Committee, in its review of the state of antitrust law, to reject calls for a radical upheaval of antitrust law that would, among other things, undermine the independence and neutrality of US antitrust law.

A critical part of maintaining independence and neutrality in the administration of antitrust is ensuring that it is insulated from politics. Unfortunately, this view is under attack from all sides. The President sees widespread misconduct among US tech firms that he believes are controlled by the “radical left” and is, apparently, happy to use whatever tools are at hand to chasten them. 

Meanwhile, Senator Klobuchar has claimed, without any real evidence, that the mooted Uber/Grubhub merger is simply about monopolization of the market, and not, for example, related to the huge changes that businesses like these are facing because of the COVID-19 shutdown.

Both of these statements challenge the principle that the rule of law, including in antitrust, depends on political neutrality.

Our letter, contrary to the claims made by President Trump, Sen. Klobuchar, and some of those made to the Committee, asserts that the evidence and economic theory are clear: existing antitrust law is doing a good job of promoting competition and consumer welfare in digital markets and the economy more broadly. It concludes that the Committee should focus on reforms that improve antitrust at the margin, not changes that throw out decades of practice and precedent.

The letter argues that:

  1. The American economy—including the digital sector—is competitive, innovative, and serves consumers well, contrary to how it is sometimes portrayed in the public debate. 
  2. Structural changes in the economy have resulted from increased competition, and increases in national concentration have generally happened because competition at the local level has intensified and local concentration has fallen.
  3. Lax antitrust enforcement has not allowed systematic increases in market power, and the evidence simply does not bear out the idea that antitrust enforcement has weakened in recent decades.
  4. Existing antitrust law is adequate for protecting competition in the modern economy, having been built up through years of careful case-by-case scrutiny. Calls to throw out decades of precedent to achieve an antitrust “Year Zero” would throw away a huge body of learning and deliberation.
  5. History teaches that discarding the modern approach to antitrust would harm consumers, returning us to a situation where per se rules prohibited the use of economic analysis and fact-based defenses of business practices.
  6. Common sense reforms should be pursued to improve antitrust enforcement, and the reforms proposed in the letter could help to improve competition and consumer outcomes in the United States without overturning the whole system.

The reforms suggested include measures to increase transparency of the DoJ and FTC, greater scope for antitrust challenges against state-sponsored monopolies, stronger penalties for criminal cartel conduct, and more agency resources being made available to protect workers from anti-competitive wage-fixing agreements between businesses. These are suggestions for the House Committee to consider and are not supported by all the letter’s signatories.

Some of the arguments in the letter are set out in greater detail in the ICLE’s own submission to the Committee, which goes into detail about the nature of competition in modern digital markets and in traditional markets that have been changed because of the adoption of digital technologies. 

The full letter is here.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by John Newman, Associate Professor, University of Miami School of Law; Advisory Board Member, American Antitrust Institute; Affiliated Fellow, Thurman Arnold Project, Yale; Former Trial Attorney, DOJ Antitrust Division.]

Cooperation is the basis of productivity. The war of all against all is not a good model for any economy.

Who said it—a rose-emoji Twitter Marxist, or a card-carrying member of the laissez-faire Chicago School of economics? If you guessed the latter, you’d be right. Frank Easterbrook penned these words in an antitrust decision written shortly after he left the University of Chicago to become a federal judge. Easterbrook’s opinion, now a textbook staple, wholeheartedly endorsed a cooperative agreement between two business owners not to compete with each other.

But other enforcers and judges have taken a far less favorable view of cooperation—particularly when workers are the ones cooperating. A few years ago, in an increasingly rare example of interagency agreement, the DOJ and FTC teamed up to argue against a Seattle ordinance that would have permitted drivers to cooperatively bargain with Uber and Lyft. Why the hostility from enforcers? “Competition is the lynchpin of the U.S. economy,” explained Acting FTC Chairman Maureen Ohlhausen.

Should workers be able to cooperate to counter concentrated corporate power? Or is bellum omnium contra omnes truly the “lynchpin” of our industrial policy?

The coronavirus pandemic has thrown this question into sharper relief than ever before. Low-income workers—many of them classified as independent contractors—have launched multiple coordinated boycotts in an effort to improve working conditions. The antitrust agencies, once quick to condemn similar actions by Uber and Lyft drivers, have fallen conspicuously silent.

Why? Why should workers be allowed to negotiate cooperatively for a healthier workplace, yet not for a living wage? In a society largely organized around paying for basic social services, money is health—and even life itself.

Unraveling the Double Standard

Antitrust law, like the rest of industrial policy, involves difficult questions over which members of society can cooperate with one another. These laws allocate “coordination rights”. Before the coronavirus pandemic, industrial policy seemed generally to favor allocating these rights to corporations, while simultaneously denying them to workers and class-action plaintiffs. But, as the antitrust agencies’ apparent about-face on workplace organizing suggests, the times may be a-changing.

Some of today’s most existential threats to societal welfare—pandemics, climate change, pollution—will best be addressed via cooperation, not atomistic rivalry. On-the-ground stakeholders certainly seem to think so. Absent a coherent, unified federal policy to deal with the coronavirus pandemic, state governors have reportedly begun to consider cooperating to provide a coordinated regional response. Last year, a group of auto manufacturers voluntarily agreed to increase fuel-efficiency standards and reduce emissions. They did attract an antitrust investigation, but it was subsequently dropped—a triumph for pro-social cooperation. It was perhaps also a reminder that corporations, each of which is itself a cooperative enterprise, can still play the role they were historically assigned: serving the public interest.

Going forward, policy-makers should give careful thought to how their actions and inactions encourage or stifle cooperation. Judge Easterbrook praised an agreement between business owners because it “promoted enterprise”. What counts as legitimate “enterprise”, though, is an eminently contestable proposition.

The federal antitrust agencies’ anti-worker stance in particular seems ripe for revisiting. Its modern origins date back to the 1980s, when President Reagan’s FTC challenged a coordinated boycott among D.C.-area criminal-defense attorneys. The boycott was a strike of sorts, intended to pressure the city into increasing court-appointed fees to a level that would allow for adequate representation. (The mayor’s office, despite being responsible for paying the fees, actually encouraged the boycott.) As the sole buyer of this particular type of service, the government wielded substantial power in the marketplace. A coordinated front was needed to counter it. Nonetheless, the FTC condemned the attorneys’ strike as per se illegal—a label supposedly reserved for the worst possible anticompetitive behavior—and the U.S. Supreme Court ultimately agreed.

Reviving Cooperation

In the short run, the federal antitrust agencies should formally reverse this anti-labor course. When workers cooperate in an attempt to counter employers’ power, antitrust intervention is, at best, a misallocation of scarce agency resources. Surely there are (much) bigger fish to fry. At worst, hostility to such cooperation directly contravenes Congress’ vision for the antitrust laws. These laws were intended to protect workers from concentrated downstream power, not to force their exposure to it—as the federal agencies themselves have recognized elsewhere.

In the longer run, congressional action may be needed. Supreme Court antitrust case law condemning worker coordination should be legislatively overruled. And, in a sharp departure from the current trend, we should be making it easier, not harder, for workers to form cooperative unions. Capital can be combined into a legal corporation in just a few hours, while it takes more than a month to create an effective labor union. None of this is to say that competition should be abandoned—much the opposite, in fact. A market that pits individual workers against highly concentrated cooperative entities is hardly “competitive”.

Thinking more broadly, antitrust and industrial policy may need to allow—or even encourage—cooperation in a number of sectors. Automakers’ and other manufacturers’ voluntary efforts to fight climate change should be lauded and protected, not investigated. Where cooperation is already shielded and even incentivized, as is the case with corporations, affirmative steps may be needed to ensure that the public interest is being furthered.

The current moment is without precedent. Industrial policy is destined, and has already begun, to change. Although competition has its place, it cannot serve as the sole lynchpin for a just economy. Now more than ever, a revival of cooperation is needed.

[TOTM: The following is part of a blog series by TOTM guests and authors on the law, economics, and policy of the ongoing COVID-19 pandemic. The entire series of posts is available here.

This post is authored by Steve Cernak (Partner, Bona Law).]

The antitrust laws have not been suspended during the current COVID-19 crisis. But, based on questions received from clients and discussions with other practitioners, the changed economic conditions have raised some new questions and put a new slant on some old ones.

Under antitrust law’s flexible rule of reason standard, courts and enforcers consider the competitive effect of most actions under current and expected economic conditions. Because those conditions have changed drastically, at least temporarily, perhaps the antitrust assessments of certain actions will be different. Also, in a crisis, good businesses consider new options and reconsider others that had been rejected under the old conditions. So antitrust practitioners and enforcers need to be prepared for new questions, and for reconsiderations of old ones under new facts. Here are some that might cross their desks.

Benchmarking

Benchmarking had its antitrust moment a few years ago as practitioners discovered and began to worry about this form of communication with competitors. Both before and since then, the comparison of processes and metrics to industry bests to determine where improvement efforts should be concentrated has not raised serious antitrust issues – if done properly. Appropriate topic choice and implementation, often involving counsel review and third-party collection, should stay the same during this crisis. Companies implementing new processes might be tempted to reach out to competitors to learn best practices. Any of those companies unfamiliar with the right way to benchmark should get up to speed. Counsel must be prepared to help clients quickly, but properly, benchmark some suddenly important activities, like methods for deep-cleaning workplaces.

Joint ventures

Joint ventures in which competitors work together to accomplish a task that neither could accomplish alone, or that they can accomplish together more efficiently, have always received a receptive antitrust review. Often, those joint efforts have been temporary. Properly structured ones have always required the companies to remain competitors outside the joint venture. Joint efforts among competitors that did not make sense before the crisis might make perfect sense during it. For instance, a company whose distribution warehouse has been shut down by a shelter-in-place order might be able to use a competitor’s distribution assets to continue to get goods to the market.

Some joint ventures of competitors have received special antitrust assurances for decades. The National Cooperative Research and Production Act of 1993 was originally passed in 1984 to protect research joint ventures of competitors. It was later extended to certain joint production efforts and standards development organizations. The law confirms that certain joint ventures of competitors will be judged under the rule of reason. If the parties file a very short notice with the DOJ Antitrust Division and FTC, they also will receive favorable treatment regarding damages and attorney’s fees in any antitrust lawsuit. For example, competitors cooperating on the development of new virus treatments might be able to use NCRPA to protect joint research and even production of the cure.

Mergers

Horizontal mergers that permanently combine the assets of two competitors are unlikely to be justified under the antitrust laws by small transitory blips in the economic landscape. A huge crisis, however, might be so large and create such long-lasting effects that certain mergers suddenly might make sense, both on business and antitrust grounds. That rationale was used during the most recent economic crisis to justify several large bank mergers, although other large industrial mergers considered at the same time were abandoned for various reasons. It is not yet clear if that reasoning is present in any industry now.

Remote communication among competitors

On a much smaller but more immediate scale, the new forms of communication being used while so many of us are physically separated have raised questions about the usual antitrust advice regarding communication with competitors. Antitrust practitioners have long advised clients about how to prepare and conduct an in-person meeting of competitors, say at a trade association convention. That same advice would seem to apply if, with the in-person convention cancelled, the meeting is instead held via Teams or Zoom. And don’t forget: the reminders that the same rules apply to the cocktail party at the bar after the meeting should also be given for the virtual version conducted via Remo.co.

Pricing and Brand Management

Since at least the time when the Dr. Miles Medical Co. was selling its “restorative nervine,” manufacturers have been concerned about how their products were resold by retailers. Antitrust law has provided manufacturers considerable freedom for some time to impose non-price restraints on retailers to protect brand reputations; however, manufacturers must consider and impose those restraints before a crisis hits. For instance, a “no sale for resale” provision in place before the crisis would give a manufacturer of hand sanitizer another tool to use now to try to prevent bulk sales of the product that will be immediately resold on the street. 

Federal antitrust law has provided manufacturers considerable freedom to impose maximum price restraints. Even the states whose laws prevent minimum price restraints do not seem as concerned about maximum ones. But again, if a manufacturer is concerned that some consumer will blame it, not just the retailer, for a sudden skyrocketing price for a product in short supply, some sort of restraints must be in place before the crisis. Certain platforms are invoking their standard policies to prevent such actions by resellers on their platforms. 

Regulatory hurdles

While antitrust law is focused on actions by private parties that might prevent markets from properly working to serve consumers, the same rationales apply to unnecessary government interference in the market. The current health crisis has turned the spotlight back on certificate of need laws, a form of “brother may I?” government regulation that can allow current competitors to stifle entry by new competitors. Similarly, regulations that have slowed the use of telemedicine have been at least temporarily waived.

Conclusion

Solving the current health crisis and rebuilding the economy will take the best efforts of both our public institutions and private companies. Antitrust law as currently written and enforced can and should continue to play a role in aligning incentives so we need not rely on “the benevolence of the butcher” for our dinner and other necessities. Instead, proper application of antitrust law can allow companies to do their part to (reviving a slogan helpful in a prior national crisis) keep America rolling.

Last Thursday and Friday, Truth on the Market hosted a symposium analyzing the Draft Vertical Merger Guidelines from the FTC and DOJ. The relatively short draft guidelines provided ample opportunity for discussion, as evidenced by the stellar roster of authors thoughtfully weighing in on the topic. 

We want to thank all of the participants for their excellent contributions. All of the posts are collected here, and below I briefly summarize each in turn. 

Symposium Day 1

Herbert Hovenkamp on the important advance of economic analysis in the draft guidelines

Hovenkamp views the draft guidelines as a largely positive development for the state of antitrust enforcement. Beginning with an observation — as was common among participants in the symposium — that the existing guidelines are outdated, Hovenkamp believes that the inclusion of 20% thresholds for market share and related-product use represents a reasonable middle position between the extremes of zealous antitrust enforcement and non-enforcement.

Hovenkamp also observes that, despite their relative brevity, the draft guidelines contain much by way of reference to the 2010 Horizontal Merger Guidelines. Ultimately Hovenkamp believes that, despite the relative lack of detail in some respects, the draft guidelines are an important step in elaborating the “economic approaches that the agencies take toward merger analysis, one in which direct estimates play a larger role, with a comparatively reduced role for more traditional approaches depending on market definition and market share.”

Finally, he notes that, while the draft guidelines leave the current burden of proof in the hands of challengers, the presumption that vertical mergers are “invariably benign, particularly in highly concentrated markets or where the products in question are differentiated” has been weakened.

Full post.

Jonathan E. Nuechterlein on the lack of guidance in the draft vertical merger guidelines

Nuechterlein finds it hard to square elements of the draft vertical merger guidelines with both the past forty years of US enforcement policy and the empirical work confirming the largely beneficial nature of vertical mergers. Relatedly, the draft guidelines lack genuine limiting principles when describing speculative theories of harm. Without better specificity, the draft guidelines will do little as a source of practical guidance.

One criticism from Nuechterlein is that the draft guidelines blur the distinction between “harm to competition” and “harm to competitors” by, for example, focusing on changes to rivals’ access to inputs and lost sales.

Nuechterlein also takes issue with what he characterizes as the “arbitrarily low” 20 percent thresholds. In particular, he finds that linking the two separate 20 percent thresholds (relevant market and related product) leaves too small a set of situations in which firms might qualify for the safe harbor. By linking the two thresholds, he believes, the provision does more to facilitate the agencies’ discretion than to provide clarity to firms and consumers.

Full post.

William J. Kolasky and Philip A. Giordano discuss the need to look to the EU for a better model for the draft guidelines

While Kolasky and Giordano believe that the 1984 guidelines are badly outdated, they also believe that the draft guidelines fail to recognize important efficiencies, and fail to give sufficiently clear standards for challenging vertical mergers.

By contrast, Kolasky and Giordano believe that the 2008 EU vertical merger guidelines provide much greater specificity and that, in some respects, the 1984 US guidelines were better aligned with the 2008 EU guidelines than the new draft is. Losing that specificity in the draft guidelines is a step backward. As such, they recommend that the DOJ and FTC adopt the EU vertical merger guidelines as a model for the US.

To take one example, the draft guidelines lose some of the important economic distinctions between vertical and horizontal mergers and need to be clarified, in particular with respect to burdens of proof related to efficiencies. The EU guidelines also provide superior guidance on how to distinguish between a firm’s ability and its incentive to raise rivals’ costs.

Full post.

Margaret Slade believes that the draft guidelines are a step in the right direction, but uneven on critical issues

Slade welcomes the new draft guidelines and finds them to be a good effort, if in need of some refinement. She believes the agencies were correct to defer to the 2010 Horizontal Merger Guidelines for the conceptual foundations of market definition and concentration, but believes that the 20 percent thresholds don’t reveal enough information. She believes that it would be helpful “to have a list of factors that could be used to determine which mergers that fall below those thresholds are more likely to be investigated, and vice versa.”

Slade also takes issue with the way the draft guidelines deal with the elimination of double marginalization (EDM). Although she does not believe that EDM should always be automatically assumed, the guidelines do not offer enough detail to determine the cases in which it should not be.

For Slade, the guidelines also fail to include a wide range of efficiencies that can arise from vertical integration. For instance “organizational efficiencies, such as mitigating contracting, holdup, and renegotiation costs, facilitating specific investments in physical and human capital, and providing appropriate incentives within firms” are important considerations that the draft guidelines should acknowledge.

Slade also advises caution when simulating vertical mergers. They are much more complex than horizontal simulations, which means that “vertical merger simulations have to be carefully crafted to fit the markets that are susceptible to foreclosure and that a one-size-fits-all model can be very misleading.”

Full post.

Joshua D. Wright, Douglas H. Ginsburg, Tad Lipsky, and John M. Yun on how to extend the economic principles present in the draft vertical merger guidelines

Wright et al. commend the agencies for highlighting important analytical factors while avoiding “untested merger assessment tools or theories of harm.”

They do, however, offer some points for improvement. First, EDM should be clearly incorporated into the unilateral effects analysis. The way the draft guidelines are currently structured improperly leaves the role of EDM in a sort of “limbo” between effects analysis and efficiencies analysis that could confuse courts and lead to an incomplete and unbalanced assessment of unilateral effects.

Second, Wright et al. also argue that the 20 percent thresholds in the draft guidelines do not have any basis in evidence or theory, nor are they of “any particular importance to predicting competitive effects.”

Third, by abandoning the 1984 guidelines’ acknowledgement of the generally beneficial effects of vertical mergers, the draft guidelines reject the weight of modern antitrust literature and fail to recognize “the empirical reality that vertical relationships are generally procompetitive or neutral.”

Finally, the draft guidelines should be more specific in recognizing that there are transaction costs associated with integration via contract. Properly conceived, the guidelines should more readily recognize that efficiencies arising from integration via merger are cognizable and merger specific.

Full post.

Gregory J. Werden and Luke M. Froeb on the conspicuous silences of the proposed vertical merger guidelines

A key criticism offered by Werden and Froeb in their post is that “the proposed Guidelines do not set out conditions necessary or sufficient for the agencies to conclude that a merger likely would substantially lessen competition.” The draft guidelines refer to factors the agencies may consider as part of their deliberation, but ultimately do not give an indication as to how those different factors will be weighed. 

Further, Werden and Froeb believe that the draft guidelines fail even to communicate how the agencies generally view the competitive process — in particular, how they view the critical differences between horizontal and vertical mergers.

Full post.

Jonathan M. Jacobson and Kenneth Edelson on the missed opportunity to clarify merger analysis in the draft guidelines

Jacobson and Edelson begin with an acknowledgement that the guidelines are outdated and that there is a dearth of useful case law, thus leading to a need for clarified rules. Unfortunately, they do not feel that the current draft guidelines do nearly enough to satisfy this need for clarification. 

Generally positive about the 20% thresholds in the draft guidelines, Jacobson and Edelson nonetheless feel that this “loose safe harbor” leaves some problematic ambiguity. For example, the draft guidelines endorse a unilateral foreclosure theory of harm, but leave unspecified what actually qualifies as a harm. Also, while the Baker Hughes burden shifting framework is widely accepted, the guidelines fail to specify how burdens should be allocated in vertical merger cases. 

The draft guidelines also miss an important opportunity to specify whether or not EDM should be presumed to exist in vertical mergers, and whether it should be presumptively credited as merger-specific.

Full post.

Symposium Day 2

Timothy Brennan on the complexities of enforcement for “pure” vertical mergers

Brennan’s post focused on what he referred to as “pure” vertical mergers that do not include concerns about expansion into upstream or downstream markets. Brennan notes the highly complex nature of speculative theories of vertical harms that can arise from vertical mergers. Consequently, he concludes that, with respect to blocking pure vertical mergers, 

“[I]t is not clear that we are better off expending the resources to see whether something is bad, rather than accepting the cost of error from adopting imperfect rules — even rules that imply strict enforcement. Pure vertical merger may be an example of something that we might just want to leave be.”

Full post.

Steven J. Cernak on the burden of proof for EDM

Cernak’s post examines the absences and ambiguities in the draft guidelines as compared to the 1984 guidelines. He notes the absence of some theories of harm — for instance, the threat of regulatory evasion. He then moves on to point out the ambiguity in how the draft guidelines deal with pleading and proving EDM.

Specifically, the draft guidelines are unclear as to how EDM should be treated. Is EDM an affirmative defense, or is it a factor that agencies are required to include as part of their own analysis? In Cernak’s opinion, the agencies should be clearer on the point. 

Full post.

Eric Fruits on messy mergers and muddled guidelines

Fruits observes that the draft guidelines’ attempt to clarify how the Agencies think about mergers and competition actually demonstrates just how complex markets, related products, and dynamic competition are.

Fruits goes on to describe how the assumptions necessary to support the speculative theories of harm on which the draft guidelines may rely are vulnerable to change. Ultimately, relying on such theories and strong assumptions may make market definition of even “obvious” markets and products a fraught exercise that devolves into a battle of experts.

Full post.

Pozen, Cornell, Concklin, and Van Arsdall on the missed opportunity to harmonize with international law

Pozen et al. believe that the draft guidelines inadvisably move the US away from accepted international standards. The 20 percent threshold in the draft guidelines is “arbitrarily low” given the generally procompetitive nature of vertical combinations.

Instead, the DOJ and the FTC should consider following the approaches taken by the EU, Japan, and Chile by favoring a 30 percent market-share threshold for challenges, along with a post-merger HHI measure below 2,000.
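For context on that number, the HHI (Herfindahl-Hirschman Index) is conventionally computed as the sum of the squared market shares of every firm in the market, with shares expressed in percentage points; the 2,000 mark therefore corresponds, for example, to a market of five equal firms with 20 percent each:

$$\mathrm{HHI} = \sum_{i=1}^{N} s_i^{2}, \qquad \text{e.g. } 5 \times 20^{2} = 2{,}000.$$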

Full post.

Scott Sher and Matthew McDonald write about the implications of the Draft Vertical Merger Guidelines for vertical mergers involving technology start-ups

Sher and McDonald describe how the draft vertical merger guidelines miss a valuable opportunity to clarify speculative theories of harm based on “potential competition.”

In particular, the draft guidelines should address the literature demonstrating that vertical acquisition of small tech firms by large tech firms is largely complementary and procompetitive. Large tech firms are good at process innovation and the smaller firms are good at product innovation, leading to specialization and the realization of efficiencies through acquisition.

Further, innovation in tech markets is driven by commercialization and exit strategy. Acquisition has become an important way for investors and startups to profit from their innovation. Vertical merger policy that is biased against vertical acquisition threatens this ecosystem and the draft guidelines should be updated to reflect this reality.

Full post.

Rybnicek on how the draft vertical merger guidelines might do more harm than good

Rybnicek notes the common calls to withdraw the 1984 Non-Horizontal Merger Guidelines, but is skeptical that replacing them will be beneficial. In particular, he believes there are major flaws in the draft guidelines that would lead to suboptimal merger policy at the Agencies.

One concern is that the draft guidelines could easily lead to the impression that vertical mergers are as likely to lead to harm as horizontal mergers. But that is false and easily refuted by economic evidence and logic. By focusing on vertical transactions more than the evidence suggests is necessary, the Agencies will waste resources and spend less time pursuing enforcement of actually anticompetitive transactions.

Rybnicek also notes that, in addition to being economically unsound, the 20 percent threshold “safe harbor” will likely create a problematic “sufficient condition” for enforcement.

Rybnicek believes that the draft guidelines minimize the significant role of EDM and efficiencies by pointing to the 2010 Horizontal Merger Guidelines for analytical guidance. In the horizontal context, efficiencies are exceedingly difficult to prove, and it is unwarranted to apply the same skeptical treatment of efficiencies in the vertical merger context.

Ultimately, Rybnicek concludes that the draft guidelines do little to advance an understanding of how the agencies will look at a vertical transaction, while also undermining the economics and theory that have guided antitrust law. 

Full post.

Lawrence J. White on the missing market definition standard in the draft vertical guidelines

White believes that there is a gaping absence in the draft guidelines insofar as they lack an adequate market-definition paradigm. White notes that markets need to be defined in a way that permits a determination of market power (or not) post-merger, but the guidelines refrain from recommending a vertical-specific method for market definition.

Instead, the draft guidelines point to the 2010 Horizontal Merger Guidelines for a market definition paradigm. Unfortunately, that paradigm is inapplicable in the vertical merger context. The way that markets are defined in the horizontal and vertical contexts is very different. There is a significant chance that an improperly drawn market definition based on the Horizontal Guidelines could understate the risk of harm from a given vertical merger.

Full post.

Manne & Stout 1 on the important differences between integration via contract and integration via merger

Manne & Stout believe that there is a great deal of ambiguity in the proposed guidelines that could lead either to uncertainty as to how the agencies will exercise their discretion, or, more troublingly, could lead courts to take seriously speculative theories of harm. 

Among these, Manne & Stout believe that the Agencies should specifically address the alleged equivalence of integration via contract and integration via merger. They need either to repudiate this theory or to explain more fully the extremely complex considerations that factor into different integration decisions for different firms.

In particular, there is no reason to presume in any given situation that the outcome from contracting would be the same as from merging, even where both are notionally feasible. It would be a categorical mistake for the draft guidelines to permit an inference that simply because an integration could be achieved by contract, it follows that integration by merger deserves greater scrutiny per se.

A whole host of efficiency and non-efficiency related goals are involved in a choice of integration methods. But adopting a presumption against integration via merger necessarily leads to (1) an erroneous assumption that efficiencies are functionally achievable in both situations and (2) a more concerning creation of discretion in the hands of enforcers to discount the non-efficiency reasons for integration.

Therefore, the agencies should clarify in the draft guidelines that the mere possibility of integration via contract or the inability of merging parties to rigorously describe and quantify efficiencies does not condemn a proposed merger.

Full post.

Manne & Stout 2 on the problematic implication of incorporating a contract/merger equivalency assumption into the draft guidelines

Manne & Stout begin by observing that, while the Agencies have the opportunity to enforce against either a merger or a contract, defendants can frequently realize efficiencies only through merger. Calling for a contract/merger equivalency therefore amounts to a preference for more enforcement per se, and is less solicitous of concerns about the loss of procompetitive arrangements. Moreover, Manne & Stout point out that there is currently no empirical basis for weighting enforcement so heavily against vertical mergers.

Manne & Stout further observe that vertical merger enforcement is more likely to thwart procompetitive than anticompetitive arrangements relative to the status quo ante because we lack fundamental knowledge about the effects of market structure and firm organization on innovation and dynamic competition. 

Instead, the draft guidelines should adopt Williamson’s view of economic organizations: eschew the formal orthodox neoclassical economic lens in favor of organizational theory that focuses on complex contracts (including vertical mergers). Without this view, “We are more likely to miss it when mergers solve market inefficiencies, and more likely to see it when they impose static costs — even if the apparent costs actually represent a move from less efficient contractual arrangements to more efficient integration.”

Critically, Manne & Stout argue that the guidelines’ focus on market-share thresholds leads to an overly narrow view of competition. Instead of looking at static market analyses, the Agencies should include a richer set of observations, including those that involve “organizational decisions made to facilitate the coordination of production and commercialization when they are dependent upon intangible assets.”

Ultimately Manne & Stout suggest that the draft guidelines should be clarified to guide the Agencies and courts away from applying inflexible, formalistic logic that will lead to suboptimal enforcement.

Full post.

In our first post, we discussed the weaknesses of an important theoretical underpinning of efforts to expand vertical merger enforcement (including, possibly, the proposed guidelines): the contract/merger equivalency assumption.

In this post we discuss the implications of that assumption and some of the errors it leads to — including some incorporated into the proposed guidelines.

There is no theoretical or empirical justification for more vertical enforcement

Tim Brennan makes a fantastic and regularly overlooked point in his post: If it’s true, as many claim (see, e.g., Steve Salop), that firms can generally realize vertical efficiencies by contracting instead of merging, then it’s also true that they can realize anticompetitive outcomes the same way. While efficiencies have to be merger-specific in order to be relevant to the analysis, so too do harms. But where the assumption is that the outcomes of integration can generally be achieved by the “less-restrictive” means of contracting, that would apply as well to any potential harms, thus negating the transaction-specificity required for enforcement. As Dennis Carlton notes:

There is a symmetry between an evaluation of the harms and benefits of vertical integration. Each must be merger-specific to matter in an evaluation of the merger’s effects…. If transaction costs are low, then vertical integration creates neither benefits nor harms, since everything can be achieved by contract. If transaction costs exist to prevent the achievement of a benefit but not a harm (or vice-versa), then that must be accounted for in a calculation of the overall effect of a vertical merger. (Dennis Carlton, Transaction Costs and Competition Policy)

Of course, this also means that those (like us) who believe that it is not so easy to accomplish by contract what may be accomplished by merger must also consider the possibility that a proposed merger may be anticompetitive because it overcomes an impediment to achieving anticompetitive goals via contract.

There’s one important caveat, though: The potential harms that could arise from a vertical merger are the same as those that would be cognizable under Section 2 of the Sherman Act. Indeed, for a vertical merger to cause harm, it must be expected to result in conduct that would otherwise be illegal under Section 2. This means there is always the possibility of a second bite at the apple when it comes to thwarting anticompetitive conduct. 

The same cannot be said of procompetitive conduct that can arise only through merger: if a merger is erroneously prohibited before it even happens, that conduct never materializes at all.

Interestingly, Salop himself — the foremost advocate today for enhanced vertical merger enforcement — recognizes the issue raised by Brennan: 

Exclusionary harms and certain efficiency benefits also might be achieved with vertical contracts and agreements without the need for a vertical merger…. It [] might be argued that the absence of premerger exclusionary contracts implies that the merging firms lack the incentive to engage in conduct that would lead to harmful exclusionary effects. But anticompetitive vertical contracts may face the same types of impediments as procompetitive ones, and may also be deterred by potential Section 1 enforcement. Neither of these arguments thus justify a more or less intrusive vertical merger policy generally. Rather, they are factors that should be considered in analyzing individual mergers. (Salop & Culley, Potential Competitive Effects of Vertical Mergers)

In the same article, however, Salop also points to the reasons why it should be considered insufficient to leave enforcement to Sections 1 and 2, instead of addressing them at their incipiency under Clayton Section 7:

While relying solely on post-merger enforcement might have appealing simplicity, it obscures several key facts that favor immediate enforcement under Section 7.

  • The benefit of HSR review is to prevent the delays and remedial issues inherent in after-the-fact enforcement….
  • There may be severe problems in remedying the concern….
  • Section 1 and Section 2 legal standards are more permissive than Section 7 standards….
  • The agencies might well argue that anticompetitive post-merger conduct was caused by the merger agreement, so that it would be covered by Section 7….

All in all, failure to address these kinds of issues in the context of merger review could lead to significant consumer harm and underdeterrence.

The points are (mostly) well-taken. But they also essentially amount to a preference for more and tougher enforcement against vertical restraints than the judicial interpretations of Sections 1 & 2 currently countenance — a preference, in other words, for the use of Section 7 to bolster enforcement against vertical restraints of any sort (whether contractual or structural).

The problem with that, as others have pointed out in this symposium (see, e.g., Nuechterlein; Werden & Froeb; Wright, et al.), is that there’s simply no empirical basis for adopting a tougher stance against vertical restraints in the first place. Over and over again the empirical research shows that vertical restraints and vertical mergers are unlikely to cause anticompetitive harm: 

In reviewing this literature, two features immediately stand out: First, there is a paucity of support for the proposition that vertical restraints/vertical integration are likely to harm consumers. . . . Second, a far greater number of studies found that the use of vertical restraints in the particular context studied improved welfare unambiguously. (Cooper, et al, Vertical Restrictions and Antitrust Policy: What About the Evidence?)

[W]e did not have a particular conclusion in mind when we began to collect the evidence, and we… are therefore somewhat surprised at what the weight of the evidence is telling us. It says that, under most circumstances, profit-maximizing, vertical-integration decisions are efficient, not just from the firms’ but also from the consumers’ points of view…. We therefore conclude that, faced with a vertical arrangement, the burden of evidence should be placed on competition authorities to demonstrate that that arrangement is harmful before the practice is attacked. (Francine Lafontaine & Margaret Slade, Vertical Integration and Firm Boundaries: The Evidence)

[Table 1 in this paper] indicates that voluntarily adopted restraints are associated with lower costs, greater consumption, higher stock returns, and better chances of survival. (Daniel O’Brien, The Antitrust Treatment of Vertical Restraints: Beyond the Possibility Theorems)

In sum, these papers from 2009-2018 continue to support the conclusions from Lafontaine & Slade (2007) and Cooper et al. (2005) that consumers mostly benefit from vertical integration. While vertical integration can certainly foreclose rivals in theory, there is only limited empirical evidence supporting that finding in real markets. (GAI Comment on Vertical Mergers)

To the extent that the proposed guidelines countenance heightened enforcement relative to the status quo, they fall prey to the same defect. And while it is unclear from the fairly terse guidelines whether this is animating them, the removal of language present in the 1984 Non-Horizontal Merger Guidelines acknowledging the relative lack of harm from vertical mergers (“[a]lthough non-horizontal mergers are less likely than horizontal mergers to create competitive problems…”) is concerning.  

The shortcomings of orthodox economics and static formal analysis

There is also a further reason to think that vertical merger enforcement may be more likely to thwart procompetitive than anticompetitive arrangements relative to the status quo ante (i.e., where arrangements among vertical firms are by contract): Our lack of knowledge about the effects of market structure and firm organization on innovation and dynamic competition, and the relative hostility to nonstandard contracting, including vertical integration:

[T]he literature addressing how market structure affects innovation (and vice versa) in the end reveals an ambiguous relationship in which factors unrelated to competition play an important role. (Katz & Shelanski, Mergers and Innovation)

The fixation on the equivalency of the form of vertical integration (i.e., merger versus contract) is likely to lead enforcers to focus on static price and cost effects, and miss the dynamic organizational and informational effects that lead to unexpected, increased innovation across and within firms. 

In the hands of Oliver Williamson, this means that understanding firms in the real world entails taking an organization theory approach, in contrast to the “orthodox” economic perspective:

The lens of contract approach to the study of economic organization is partly complementary but also partly rival to the orthodox [neoclassical economic] lens of choice. Specifically, whereas the latter focuses on simple market exchange, the lens of contract is predominantly concerned with the complex contracts. Among the major differences is that non‐standard and unfamiliar contractual practices and organizational structures that orthodoxy interprets as manifestations of monopoly are often perceived to serve economizing purposes under the lens of contract. A major reason for these and other differences is that orthodoxy is dismissive of organization theory whereas organization theory provides conceptual foundations for the lens of contract. (emphasis added)

We are more likely to miss it when mergers solve market inefficiencies, and more likely to see it when they impose static costs — even if the apparent costs actually represent a move from less efficient contractual arrangements to more efficient integration.

The competition that takes place in the real world and between various groups ultimately depends upon the institution of private contracts, many of which, including the firm itself, are nonstandard. Innovation includes the discovery of new organizational forms and the application of old forms to new contexts. Such contracts prevent or attenuate market failure, moving the market toward what economists would deem a more competitive result. Indeed, as Professor Coase pointed out, many markets deemed “perfectly competitive” are in fact the end result of complex contracts limiting rivalry between competitors. This contractual competition cannot produce perfect results — no human institution ever can. Nonetheless, the result is superior to that which would obtain in a (real) world without nonstandard contracting. These contracts do not depend upon the creation or enhancement of market power and thus do not produce the evils against which antitrust law is directed. (Alan Meese, Price Theory Competition & the Rule of Reason)

Or, as Oliver Williamson more succinctly puts it:

[There is a] rebuttable presumption that nonstandard forms of contracting have efficiency purposes. (Oliver Williamson, The Economic Institutions of Capitalism)

The pinched focus of the guidelines on narrow market definition misses the bigger picture of dynamic competition over time

The proposed guidelines (and the theories of harm undergirding them) focus upon indicia of market power that may not be accurate if assessed in more realistic markets or over more relevant timeframes, and, if applied too literally, may bias enforcement against mergers with dynamic-innovation benefits but static-competition costs.  

Similarly, the proposed guidelines’ enumeration of potential efficiencies doesn’t really begin to cover the categories implicated by the organization of enterprise around dynamic considerations.

The proposed guidelines’ efficiencies section notes that:

Vertical mergers bring together assets used at different levels in the supply chain to make a final product. A single firm able to coordinate how these assets are used may be able to streamline production, inventory management, or distribution, or create innovative products in ways that would have been hard to achieve through arm’s length contracts. (emphasis added)

But it is not clear that any of these categories encompasses organizational decisions made to facilitate the coordination of production and commercialization when they are dependent upon intangible assets.

As Thomas Jorde and David Teece write:

For innovations to be commercialized, the economic system must somehow assemble all the relevant complementary assets and create a dynamically-efficient interactive system of learning and information exchange. The necessary complementary assets can conceivably be assembled by either administrative or market processes, as when the innovator simply licenses the technology to firms that already own or are willing to create the relevant assets. These organizational choices have received scant attention in the context of innovation. Indeed, the serial model relies on an implicit belief that arm’s-length contracts between unaffiliated firms in the vertical chain from research to customer will suffice to commercialize technology. In particular, there has been little consideration of how complex contractual arrangements among firms can assist commercialization — that is, translating R&D capability into profitable new products and processes….

* * *

But in reality, the market for know-how is riddled with imperfections. Simple unilateral contracts where technology is sold for cash are unlikely to be efficient. Complex bilateral and multilateral contracts, internal organization, or various hybrid structures are often required to shore up obvious market failures and create procompetitive efficiencies. (Jorde & Teece, Rule of Reason Analysis of Horizontal Arrangements: Agreements Designed to Advance Innovation and Commercialize Technology) (emphasis added)

When IP protection for a given set of valuable pieces of “know-how” is strong — easily defendable, unique patents, for example — firms can rely on property rights to efficiently contract with vertical buyers and sellers. But in cases where the valuable “know how” is less easily defended as IP — e.g. business process innovation, managerial experience, distributed knowledge, corporate culture, and the like — the ability to partially vertically integrate through contract becomes more difficult, if not impossible. 

Perhaps employing these assets is part of what is meant in the draft guidelines by “streamline.” But the very mention of innovation only in the technological context of product innovation is at least some indication that organizational innovation is not clearly contemplated.  

This is a significant lacuna. The impact of each organizational form on knowledge transfers creates a particularly strong division between integration and contract. As Enghin Atalay, Ali Hortaçsu & Chad Syverson point out:

That vertical integration is often about transfers of intangible inputs rather than physical ones may seem unusual at first glance. However, as observed by Arrow (1975) and Teece (1982), it is precisely in the transfer of nonphysical knowledge inputs that the market, with its associated contractual framework, is most likely to fail to be a viable substitute for the firm. Moreover, many theories of the firm, including the four “elemental” theories as identified by Gibbons (2005), do not explicitly invoke physical input transfers in their explanations for vertical integration. (Enghin Atalay, et al., Vertical Integration and Input Flows) (emphasis added)

There is a large economics and organization theory literature discussing how organizations are structured with respect to these sorts of intangible assets. And the upshot is that, while we start — not end, as some would have it — with the Coasian insight that firm boundaries are necessarily a function of production processes and not a hard limit, we quickly come to realize that it is emphatically not the case that integration-via-contract and integration-via-merger are always, or perhaps even often, viable substitutes.

Conclusion

The contract/merger equivalency assumption, coupled with a “least-restrictive alternative” logic that favors contract over merger, puts a thumb on the scale against vertical mergers. While the proposed guidelines as currently drafted do not necessarily portend the inflexible, formalistic application of this logic, they offer little to guide enforcers or courts away from the assumption in the important (and perhaps numerous) cases where it is unwarranted.