Policymakers’ recent focus on how Big Tech should be treated under antitrust law has been accompanied by claims that companies like Facebook and Google hold dominant positions in various “markets.” Notwithstanding the tendency to conflate whether a firm is large with whether it holds a dominant position, we must first answer the question most of these claims tend to ignore: “dominant over what?”
For example, as set out in this earlier Truth on the Market post, a recent lawsuit filed by various states and the U.S. Justice Department outlined five areas related to online display advertising over which Google is alleged by the plaintiffs to hold a dominant position. But crucially, none appear to have been arrived at via the application of economic reasoning.
As that post explained, other forms of advertising (such as online search and offline advertising) might form part of a “relevant market” (i.e., the market in which a product actually competes) over which Google’s alleged dominance should be assessed. The post makes a strong case for the actual relevant market being much broader than that claimed in the lawsuit. Of course, some might disagree with that assessment, so it is useful to step back and examine the principles that underlie and motivate how a relevant market is defined.
In any antitrust case, defining the relevant market should be regarded as a means to an end, not an end in itself. While such definitions provide the basis to calculate market shares, the process of thinking about relevant markets also should provide a framework to consider and highlight important aspects of the case. The process enables one to think about how a particular firm and market operates, the constraints that it and rival firms face, and whether entry by other firms is feasible or likely.
Many naïve attempts to define the relevant market will limit their analysis to a particular industry. But an industry could include too few competitors, or it might even include too many—for example, if some firms in the industry generate products that do not constitute strong competitive constraints. If one were to define all cars as the “relevant” market, that would imply that a Dacia Sandero (a supermini model produced by Renault’s Romanian subsidiary Dacia) constrains the price of Maserati’s Quattroporte luxury sports sedan as much as the Ferrari Portofino grand touring sports car does. This is very unlikely to hold in reality.
The relevant market should be the smallest possible group of products and services that contains all such products and services that could provide a reasonable competitive constraint. But that, of course, merely raises the question of what is meant by a “reasonable competitive constraint.” Thankfully, by applying economic reasoning, we can answer that question.
More specifically, we have the “hypothetical monopolist” (HM) test. This test operates by considering whether a hypothetical monopolist (i.e., a single firm that controlled all the products considered part of the relevant market) could profitably undertake “a small but significant, non-transitory, increase in price”—hence the common shorthand, the SSNIP test.
If the hypothetical monopolist could profitably implement this increase in price, then the group of products under consideration is said to constitute a relevant market. On the other hand, if the hypothetical monopolist could not profitably increase the price of that group of products (due to demand-side or supply-side constraints on their ability to increase prices), then that group of products is not a relevant market, and more products need to be included in the candidate relevant market. The process of widening the group of products continues until the hypothetical monopolist could profitably increase prices over that group.
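The profitability calculation behind the SSNIP test can be sketched in a few lines. The figures below are purely hypothetical, as is the `share_lost` parameter (the fraction of sales that would divert to products outside the candidate market), but they show how the trade-off between the higher margin on retained sales and the lost sales determines whether the candidate market passes the test:

```python
def ssnip_is_profitable(price, marginal_cost, quantity, price_rise, share_lost):
    """Return True if a hypothetical monopolist gains from the price rise.

    price_rise: fractional price increase (e.g., 0.05 for a 5% SSNIP)
    share_lost: fraction of unit sales lost to substitutes outside
                the candidate market after the price rise
    """
    profit_before = (price - marginal_cost) * quantity
    new_price = price * (1 + price_rise)
    new_quantity = quantity * (1 - share_lost)
    profit_after = (new_price - marginal_cost) * new_quantity
    return profit_after > profit_before

# Hypothetical numbers: price 100, marginal cost 60, 1,000 units, 5% SSNIP.
# If only 8% of sales divert to outside products, the rise is profitable
# and the candidate group constitutes a relevant market:
print(ssnip_is_profitable(100, 60, 1000, 0.05, 0.08))  # True
# If 20% divert, the rise is unprofitable: the candidate market is too
# narrow, and the diversion destinations must be added to it:
print(ssnip_is_profitable(100, 60, 1000, 0.05, 0.20))  # False
```

Note how sensitive the answer is to the margin: the higher the monopolist’s margin, the more each lost sale costs it, so fewer switchers are needed to defeat the price rise.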
So how does this test work in practice? Let’s use an example to make things concrete. In particular, let’s focus on Google’s display advertising, as that has been a significant focus of attention. Starting from the narrowest possible market, Google’s own display advertising, the HM test would ask whether a hypothetical monopolist controlling these services (and just these services) could profitably increase prices of these services permanently by 5% to 10%.
At this initial stage, it is important to avoid the “cellophane fallacy,” in which a monopolist firm could not profitably increase its prices by 5% to 10% because it is already charging the monopoly price. This fallacy usually arises in situations where the product under consideration has very few (if any) substitutes. But as has been shown here, there are already plenty of alternatives to Google’s display-advertising services, so we can be reasonably confident that the fallacy does not apply here.
We would then consider what is likely to happen if Google were to increase the prices of its online display advertising services by 5% to 10%. Given the plethora of other options (such as Microsoft, Facebook, and Simpli.fi) customers have for obtaining online display ads, a sufficiently high number of Google’s customers are likely to switch away, such that the price increase would not be profitable. It is therefore necessary to expand the candidate relevant market to include those closest alternatives to which Google’s customers would switch.
We repeat the exercise, but now with the hypothetical monopolist also increasing the prices of those newly included products. It might be the case that alternatives such as online search ads (as opposed to display ads), print advertising, TV advertising and/or other forms of advertising would sufficiently constrain the hypothetical monopolist in this case that those other alternatives form part of the relevant market.
In determining whether an alternative sufficiently constrains our hypothetical monopolist, it is important to consider actual consumer/firm behavior, rather than relying on products having “similar” characteristics. Although constraints can come from either the demand side (i.e., customers switching to another provider) or the supply side (entry/switching by other providers to start producing the products offered by the HM), for market-definition purposes, it is almost always demand-side switching that matters most. Switching by consumers tends to happen much more quickly than does switching by providers, such that it can be a more effective constraint. (Note that supply-side switching is still important when assessing overall competitive constraints, but because such switching can take one or more years, it is usually considered in the overall competitive assessment, rather than at the market-definition stage.)
Identifying which alternatives consumers do and would switch to therefore highlights the rival products and services that constrain the candidate hypothetical monopolist. It is only once the hypothetical monopolist test has been completed and the relevant market has been found that market shares can be calculated.
It is at that point that an assessment of a firm’s alleged market power (or of a proposed merger) can proceed. This is why claims that “Facebook is a monopolist” or that “Google has market power” often fail at the first hurdle (indeed, in the case of Facebook, they recently have).
Indeed, I would go so far as to argue that any antitrust claim that does not first undertake a market-definition exercise with sound economic reasoning akin to that described above should be discounted and ignored.
 Some might argue that there is a “chain of substitution” from the Maserati to, for example, an Audi A4, to a Ford Focus, to a Mini, to a Dacia Sandero, such that the latter does, indeed, provide some constraint on the former. However, the size of that constraint is likely to be de minimis, given how many “links” there are in that chain.
 The “small but significant” price increase is usually taken to be between 5% and 10%.
 Even if a product or group of products ends up excluded from the definition of the relevant market, these products can still form a competitive constraint in the overall assessment and are still considered at that point.
[TOTM: The following is part of a digital symposium by TOTM guests and authors on the law, economics, and policy of the antitrust lawsuits against Google. The entire series of posts is available here.]
The U.S. Department of Justice’s (DOJ) antitrust case against Google, which was filed in October 2020, will be a tough slog. It is an alleged monopolization (Sherman Act, Sec. 2) case; and monopolization cases are always a tough slog.
In this brief essay I will lay out some of the issues in the case and raise an intriguing possibility.
What is the case about?
The case is about exclusivity and exclusion in the distribution of search engine services; that Google paid substantial sums to Apple and to the manufacturers of Android-based mobile phones and tablets and also to wireless carriers and web-browser proprietors—in essence, to distributors—to install the Google search engine as the exclusive pre-set (installed), default search program. The suit alleges that Google thereby made it more difficult for other search-engine providers (e.g., Bing; DuckDuckGo) to obtain distribution for their search-engine services and thus to attract search-engine users and to sell the online advertising that is associated with search-engine use and that provides the revenue to support the search “platform” in this “two-sided market” context.
Exclusion can be seen as a form of “raising rivals’ costs.” Equivalently, exclusion can be seen as a form of non-price predation. Under either interpretation, the exclusionary action impedes competition.
It’s important to note that these allegations are different from those that motivated an investigation by the Federal Trade Commission (which the FTC dropped in 2013) and the cases by the European Union against Google. Those cases focused on alleged self-preferencing; that Google was unduly favoring its own products and services (e.g., travel services) in its delivery of search results to users of its search engine. In those cases, the impairment of competition (arguably) happens with respect to those competing products and services, not with respect to search itself.
What is the relevant market?
For a monopolization allegation to have any meaning, there needs to be the exercise of market power (which would have adverse consequences for the buyers of the product). And in turn, that exercise of market power needs to occur in a relevant market: one in which market power can be exercised.
Here is one of the important places where the DOJ’s case is likely to turn into a slog: the delineation of a relevant market for alleged monopolization cases remains a largely unsolved problem for antitrust economics. This is in sharp contrast to the issue of delineating relevant markets for the antitrust analysis of proposed mergers. For this latter category, the paradigm of the “hypothetical monopolist” and the possibility that this hypothetical monopolist could prospectively impose a “small but significant non-transitory increase in price” (SSNIP) has carried the day for the purposes of market delineation.
But no such paradigm exists for monopolization cases, in which the usual allegation is that the defendant already possesses market power and has used the exclusionary actions to buttress that market power. To see the difficulties, it is useful to recall the basic monopoly diagram from Microeconomics 101. A monopolist faces a negatively sloped demand curve for its product (at higher prices, less is bought; at lower prices, more is bought) and sets a profit-maximizing price at the level of output where its marginal revenue (MR) equals its marginal costs (MC). Its price is thereby higher than an otherwise similar competitive industry’s price for that product (to the detriment of buyers) and the monopolist earns higher profits than would the competitive industry.
But unless there are reliable benchmarks as to what the competitive price and profits would otherwise be, any information as to the defendant’s price and profits has little value with respect to whether the defendant already has market power. Also, a claim that a firm does not have market power because it faces rivals and thus isn’t able profitably to raise its price from its current level (because it would lose too many sales to those rivals) similarly has no value. Recall the monopolist from Micro 101. It doesn’t set a higher price than the one where MR=MC, because it would thereby lose too many sales to other sellers of other things.
Thus, any firm—regardless of whether it truly has market power (like the Micro 101 monopolist) or is just another competitor in a sea of competitors—should have already set its price at its profit-maximizing level and should find it unprofitable to raise its price from that level. And thus the claim, “Look at all of the firms that I compete with! I don’t have market power!” similarly has no informational value.
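The Micro 101 logic can be made concrete with hypothetical numbers. Assume linear demand P = a − bQ and a constant marginal cost c (all figures below are illustrative, not estimates for any real firm). Setting MR = MC gives the profit-maximizing price, and the point of the argument follows immediately: even a true monopolist finds a further price increase from that level unprofitable, so the inability to profitably raise price tells us nothing about market power:

```python
def monopoly_optimum(a, b, c):
    """Linear demand P = a - b*Q with constant marginal cost c.
    MR = a - 2*b*Q; setting MR = MC yields Q* and P*."""
    q_star = (a - c) / (2 * b)
    p_star = a - b * q_star
    return p_star, q_star

def profit(p, a, b, c):
    """Profit at price p, given the same linear demand curve."""
    q = (a - p) / b  # quantity demanded at price p
    return (p - c) * q

# Hypothetical demand: a = 100, b = 1, marginal cost c = 20.
p_star, q_star = monopoly_optimum(100, 1, 20)
print(p_star)  # 60.0 -- the profit-maximizing monopoly price

# A further 5% rise from P* is unprofitable even for this monopolist,
# because the sales lost along the demand curve outweigh the higher margin:
print(profit(p_star * 1.05, 100, 1, 20) < profit(p_star, 100, 1, 20))  # True
```

Here the competitive benchmark would be P = MC = 20, so the monopolist is charging triple the competitive price; yet, observed in isolation, its refusal to raise price further looks exactly like competitive discipline.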
Let us now bring this problem back to the Google monopolization allegation: What is the relevant market? In the first instance, it has to be “the provision of answers to user search queries.” After all, this is the “space” in which the exclusion occurred. But there are categories of search: e.g., search for products/services, versus more general information searches (“What is the current time in Delaware?” “Who was the 21st President of the United States?”). Do those separate categories themselves constitute relevant markets?
Further, what would the exercise of market power in a (delineated relevant) market look like? Higher-than-competitive prices for advertising that targets search-results recipients is one obvious answer (but see below). In addition, because this is a two-sided market, the competitive “price” (or prices) might involve payments by the search engine to the search users (in return for their exposure to the lucrative attached advertising). And product quality might exhibit less variety than a competitive market would provide; and/or the monopolistic average level of quality would be lower than in a competitive market: e.g., more abuse of user data, and/or deterioration of the delivered information itself, via more self-preferencing by the search engine and more advertising-driven preferencing of results.
In addition, a natural focus for a relevant market is the advertising that accompanies the search results. But now we are at the heart of the difficulty of delineating a relevant market in a monopolization context. If the relevant market is “advertising on search engine results pages,” it seems highly likely that Google has market power. If the relevant market instead is all online U.S. advertising (of which Google’s revenue share accounted for 32% in 2019), then the case is weaker; and if the relevant market is all advertising in the United States (which is about twice the size of online advertising), the case is weaker still. Unless there is some competitive benchmark, there is no easy way to delineate the relevant market.
What exactly has Google been paying for, and why?
As many critics of the DOJ’s case have pointed out, it is extremely easy for users to switch their default search engine. If internet search were a normal good or service, this ease of switching would leave little room for the exercise of market power. But in that case, why is Google willing to pay $8-$12 billion annually for the exclusive default setting on Apple devices and large sums to the manufacturers of Android-based devices (and to wireless carriers and browser proprietors)? Why doesn’t Google instead run ads in prominent places that remind users how superior Google’s search results are and how easy it is for users (if they haven’t already done so) to switch to the Google search engine and make Google the user’s default choice?
Suppose that user inertia is important. Further suppose that users generally have difficulty in making comparisons with respect to the quality of delivered search results. If this is true, then being the default search engine on Apple and Android-based devices and on other distribution vehicles would be valuable. In this context, the inertia of their customers is a valuable “asset” of the distributors that the distributors may not be able to take advantage of, but that Google can (by providing search services and selling advertising). The question of whether Google’s taking advantage of this user inertia means that Google exercises market power takes us back to the issue of delineating the relevant market.
There is a further wrinkle to all of this. It is a well-understood concept in antitrust economics that an incumbent monopolist will be willing to pay more for the exclusive use of an essential input than a challenger would pay for access to the input. The basic idea is straightforward. By maintaining exclusive use of the input, the incumbent monopolist preserves its (large) monopoly profits. If the challenger enters, the incumbent will then earn only its share of the (much lower, more competitive) duopoly profits. Similarly, the challenger can expect only the lower duopoly profits. Accordingly, the incumbent should be willing to outbid (and thereby exclude) the challenger and preserve the incumbent’s exclusive use of the input, so as to protect those monopoly profits.
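The arithmetic of this incumbent-outbids-challenger result is simple enough to lay out with stylized, hypothetical profit figures (the numbers below are illustrative only, chosen to reflect the standard assumption that total duopoly profit falls short of monopoly profit):

```python
# Stylized, hypothetical annual profits:
monopoly_profit = 100        # incumbent's profit if it keeps exclusivity
duopoly_profit_each = 30     # each firm's profit if the challenger enters

# The incumbent's maximum bid for exclusive access to the input is what
# it stands to lose if the challenger gets access instead:
incumbent_max_bid = monopoly_profit - duopoly_profit_each  # 70

# The challenger's maximum bid is at most what it would earn by entering:
challenger_max_bid = duopoly_profit_each  # 30

# Because monopoly profit (100) exceeds total duopoly profit (60),
# the incumbent can always profitably outbid the challenger:
print(incumbent_max_bid > challenger_max_bid)  # True
```

The gap between the two bids is exactly the amount by which monopoly profit exceeds combined duopoly profit, which is why the result holds whenever competition dissipates some of the monopoly rents.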
To bring this to the Google monopolization context, if Google does possess market power in some aspect of search—say, because online search-linked advertising is a relevant market—then Google will be willing to outbid Microsoft (which owns Bing) for the “asset” of default access to Apple’s (inertial) device owners. That Microsoft is a large and profitable company and could afford to match (or exceed) Google’s payments to Apple is irrelevant. If the duopoly profits for online search-linked advertising would be substantially lower than Google’s current profits, then Microsoft would not find it worthwhile to try to outbid Google for that default access asset.
Alternatively, this scenario could be wholly consistent with an absence of market power. If search users (who can easily switch) consider Bing to be a lower-quality search service, then large payments by Microsoft to outbid Google for those exclusive default rights would be largely wasted, since the “acquired” default search users would quickly switch to Google (unless Microsoft provided additional incentives for the users not to switch).
But this alternative scenario returns us to the original puzzle: Why is Google making such large payments to the distributors for those exclusive default rights?
An intriguing possibility
Consider the following possibility. Suppose that Google was paying that $8-$12 billion annually to Apple in return for the understanding that Apple would not develop its own search engine for Apple’s device users. This possibility was not raised in the DOJ’s complaint, nor is it raised in the subsequent suits by the state attorneys general.
But let’s explore the implications by going to an extreme. Suppose that Google and Apple had a formal agreement that—in return for the $8-$12 billion per year—Apple would not develop its own search engine. In this event, this agreement not to compete would likely be seen as a violation of Section 1 of the Sherman Act (which does not require a market delineation exercise) and Apple would join Google as a co-conspirator. The case would take on the flavor of the FTC’s prosecution of “pay-for-delay” agreements between the manufacturers of patented pharmaceuticals and the generic drug manufacturers that challenge those patents and then receive payments from the former in return for dropping the patent challenge and delaying the entry of the generic substitute.
As of this writing, there is no evidence of such an agreement and it seems quite unlikely that there would have been a formal agreement. But the DOJ will be able to engage in discovery and take depositions. It will be interesting to find out what the relevant executives at Google—and at Apple—thought was being achieved by those payments.
What would be a suitable remedy/relief?
The DOJ’s complaint is vague with respect to the remedy that it seeks. This is unsurprising. The DOJ may well want to wait to see how the case develops and then amend its complaint.
However, even if Google’s actions have constituted monopolization, it is difficult to conceive of a suitable and effective remedy. One apparently straightforward remedy would be to require simply that Google not be able to purchase exclusivity with respect to the pre-set default settings. In essence, the device manufacturers and others would always be able to sell parallel default rights to other search engines: on the basis, say, that the default rights for some categories of customers—or even a percentage of general customers (randomly selected)—could be sold to other search-engine providers.
But now the Gilbert-Newbery insight comes back into play. Suppose that a device manufacturer knows (or believes) that Google will pay much more if—even in the absence of any exclusivity agreement—Google ends up being the pre-set search engine for all (or nearly all) of the manufacturer’s device sales, as compared with what the manufacturer would receive if those default rights were sold to multiple search-engine providers (including, but not solely, Google). Can that manufacturer (recall that the distributors are not defendants in the case) be prevented from making this sale to Google and thus (de facto) continuing Google’s exclusivity?
Even a requirement that Google not be allowed to make any payment to the distributors for a default position may not improve the competitive environment. Google may be able to find other ways of making indirect payments to distributors in return for attaining default rights, e.g., by offering them lower rates on their online advertising.
Further, if the ultimate goal is an efficient outcome in search, it is unclear how far restrictions on Google’s bidding behavior should go. If Google were forbidden from purchasing any default installation rights for its search engine, would (inert) consumers be better off? Similarly, if a distributor were to decide independently that its customers were better served by installing the Google search engine as the default, would that not be allowed? But if it is allowed, how could one be sure that Google wasn’t indirectly paying for this “independent” decision (e.g., through favorable advertising rates)?
It’s important to remember that this (alleged) monopolization is different from the Standard Oil case of 1911 or even the (landline) AT&T case of 1984. In those cases, there were physical assets that could be separated and spun off to separate companies. For Google, physical assets aren’t important. Although it is conceivable that some of Google’s intellectual property—such as Gmail, YouTube, or Android—could be spun off to separate companies, doing so would do little to cure the (arguably) fundamental problem of the inert device users.
In addition, if there were an agreement between Google and Apple for the latter not to develop a search engine, then large fines for both parties would surely be warranted. But what next? Apple can’t be forced to develop a search engine. This differentiates such an arrangement from the “pay-for-delay” arrangements for pharmaceuticals, where the generic manufacturers can readily produce a near-identical substitute for the patented drug and are otherwise eager to do so.
At the end of the day, forbidding Google from paying for exclusivity may well be worth trying as a remedy. But as the discussion above indicates, it is unlikely to be a panacea and is likely to require considerable monitoring for effective enforcement.
The DOJ’s case against Google will be a slog. There are unresolved issues—such as how to delineate a relevant market in a monopolization case—that will be central to the case. Even if the DOJ is successful in showing that Google violated Section 2 of the Sherman Act in monopolizing search and/or search-linked advertising, an effective remedy seems problematic. But there also remains the intriguing question of why Google was willing to pay such large sums for those exclusive default installation rights.
The developments in the case will surely be interesting.
 The DOJ’s suit was joined by 11 states. More states subsequently filed two separate antitrust lawsuits against Google in December.
 There is also a related argument: That Google thereby gained greater volume, which allowed it to learn more about its search users and their behavior, and which thereby allowed it to provide better answers to users (and thus a higher-quality offering to its users) and better-targeted (higher-value) advertising to its advertisers. Conversely, Google’s search-engine rivals were deprived of that volume, with the mirror-image negative consequences for the rivals. This is just another version of the standard “learning-by-doing” and the related “learning curve” (or “experience curve”) concepts that have been well understood in economics for decades.
 See, for example, Steven C. Salop and David T. Scheffman, “Raising Rivals’ Costs: Recent Advances in the Theory of Industrial Structure,” American Economic Review, Vol. 73, No. 2 (May 1983), pp. 267-271; and Thomas G. Krattenmaker and Steven C. Salop, “Anticompetitive Exclusion: Raising Rivals’ Costs To Achieve Power Over Price,” Yale Law Journal, Vol. 96, No. 2 (December 1986), pp. 209-293.
 For a discussion, see Richard J. Gilbert, “The U.S. Federal Trade Commission Investigation of Google Search,” in John E. Kwoka, Jr., and Lawrence J. White, eds. The Antitrust Revolution: Economics, Competition, and Policy, 7th edn. Oxford University Press, 2019, pp. 489-513.
 For a more complete version of the argument that follows, see Lawrence J. White, “Market Power and Market Definition in Monopolization Cases: A Paradigm Is Missing,” in Wayne D. Collins, ed., Issues in Competition Law and Policy. American Bar Association, 2008, pp. 913-924.
 The forgetting of this important point is often termed “the cellophane fallacy”, since this is what the U.S. Supreme Court did in a 1956 antitrust case in which the DOJ alleged that du Pont had monopolized the cellophane market (and du Pont, in its defense claimed that the relevant market was much wider: all flexible wrapping materials); see U.S. v. du Pont, 351 U.S. 377 (1956). For an argument that profit data and other indicia argued for cellophane as the relevant market, see George W. Stocking and Willard F. Mueller, “The Cellophane Case and the New Competition,” American Economic Review, Vol. 45, No. 1 (March 1955), pp. 29-63.
 In the context of differentiated services, one would expect prices (positive or negative) to vary according to the quality of the service that is offered. It is worth noting that Bing offers “rewards” to frequent searchers; see https://www.microsoft.com/en-us/bing/defaults-rewards. It is unclear whether this pricing structure of payment to Bing’s customers represents what a more competitive framework in search might yield, or whether the payment just indicates that search users consider Bing to be a lower-quality service.
 As an additional consequence of the impairment of competition in this type of search market, there might be less technological improvement in the search process itself – to the detriment of users.
 And, again, if we return to the du Pont cellophane case: Was the relevant market cellophane? Or all flexible wrapping materials?
 This insight is formalized in Richard J. Gilbert and David M.G. Newbery, “Preemptive Patenting and the Persistence of Monopoly,” American Economic Review, Vol. 72, No. 3 (June 1982), pp. 514-526.
 To my knowledge, Randal C. Picker was the first to suggest this possibility; see https://www.competitionpolicyinternational.com/a-first-look-at-u-s-v-google/. Whether Apple would be interested in trying to develop its own search engine – given the fiasco a decade ago when Apple tried to develop its own maps app to replace the Google maps app – is an open question. In addition, the Gilbert-Newbery insight applies here as well: Apple would be less inclined to invest the substantial resources that would be needed to develop a search engine when it is thereby in a duopoly market. But Google might be willing to pay “insurance” to reinforce any doubts that Apple might have.
 The U.S. Supreme Court, in FTC v. Actavis, 570 U.S. 136 (2013), decided that such agreements could be anti-competitive and should be judged under the “rule of reason”. For a discussion of the case and its implications, see, for example, Joseph Farrell and Mark Chicu, “Pharmaceutical Patents and Pay-for-Delay: Actavis (2013),” in John E. Kwoka, Jr., and Lawrence J. White, eds. The Antitrust Revolution: Economics, Competition, and Policy, 7th edn. Oxford University Press, 2019, pp. 331-353.
 This is an example of the insight that vertical arrangements – in this case combined with the Gilbert-Newbery effect – can be a way for dominant firms to raise rivals’ costs. See, for example, John Asker and Heski Bar-Isaac. 2014. “Raising Retailers’ Profits: On Vertical Practices and the Exclusion of Rivals.” American Economic Review, Vol. 104, No. 2 (February 2014), pp. 672-686.
 And, again, for the reasons discussed above, Apple might not be eager to make the effort.
On April 17, the Federal Trade Commission (FTC) voted three-to-two to enter into a consent agreement In the Matter of Cardinal Health, Inc., requiring Cardinal Health to disgorge funds as part of the settlement in this monopolization case. As ably explained by dissenting Commissioners Josh Wright and Maureen Ohlhausen, the FTC wrongly required the disgorgement of funds in this case. The settlement reflects an overzealous application of antitrust enforcement to unilateral conduct that may well be efficient. It also manifests a highly inappropriate application of antitrust monetary relief that stands to increase private uncertainty, to the detriment of economic welfare.
The basic facts and allegations in this matter, drawn from the FTC’s statement accompanying the settlement, are as follows. Through separate acquisitions in 2003 and 2004, Cardinal Health became the largest operator of radiopharmacies in the United States and the sole radiopharmacy operator in 25 relevant markets addressed by this settlement. Radiopharmacies distribute and sell radiopharmaceuticals, which are drugs containing radioactive isotopes, used by hospitals and clinics to diagnose and treat diseases. Notably, they typically derive at least 60% of their revenues from the sale of heart perfusion agents (“HPAs”), a type of radiopharmaceutical that healthcare providers use to conduct heart stress tests. A practical consequence is that radiopharmacies cannot operate a financially viable and competitive business without access to an HPA. Between 2003 and 2008, Cardinal allegedly employed various tactics to induce the only two manufacturers of HPAs in the United States, BMS and GE-Amersham, to withhold HPA distribution rights from would-be radiopharmacy market entrants, in violation of Section 2 of the Sherman Act. Through these tactics Cardinal allegedly maintained exclusive dealing rights, denied its customers the benefits of competition, and profited from the monopoly prices it charged for all radiopharmaceuticals, including HPAs, in the relevant markets. Importantly, according to the FTC, there was no efficiency benefit or legitimate business justification for Cardinal simultaneously maintaining exclusive distribution rights to the only two HPAs then available in the relevant markets.
This settlement raises two types of problems.
First, this was a single firm conduct exclusive dealing case involving (at best) questionable anticompetitive effects. As Josh Wright (citing the economics literature) pointed out in his dissent, “there are numerous plausible efficiency justifications for such [exclusive dealing] restraints.” (Moreover, as Josh Wright and I stressed in an article on tying and exclusive dealing, “[e]xisting empirical evidence of the impact of exclusive dealing is scarce but generally favors the view that exclusive dealing is output‐enhancing”, suggesting that a (rebuttable) presumption of legality would be appropriate in this area.) Indeed, in this case, Commissioner Wright explained that “[t]he tactics the Commission challenges could have been output-enhancing” in various markets. Furthermore, Commissioner Wright emphasized that the data analysis showing that Cardinal charged higher prices in monopoly markets was “very fragile. The data show that the impact of a second competitor on Cardinal’s prices is small, borderline statistically significant, and not robust to minor changes in specification.” Commissioner Ohlhausen’s dissent reinforced Commissioner Wright’s critique of the majority’s exclusive dealing theory. As she put it:
“[E]ven if the Commission could establish that Cardinal achieved some type of de facto exclusivity with both Bristol-Myers Squibb and General Electric Co. during the relevant time period (and that is less than clear), it is entirely unclear that such exclusivity – rather than, for example, insufficient demand for more than one radiopharmacy – caused the lack of entry within each of the relevant markets. That alternative explanation seems especially likely in the six relevant markets in which ‘Cardinal remains the sole or dominant radiopharmacy,’ notwithstanding the fact that whatever exclusivity Cardinal may have achieved admittedly expired in early 2008. The complaint provides no basis for the assertion that Cardinal’s conduct during the 2003-2008 period has caused the lack of entry in those six markets during the past seven years.”
Furthermore, Commissioner Ohlhausen underscored Commissioner Wright’s critique of the empirical evidence in this case: “[T]he evidence of anticompetitive effects in the relevant markets at issue is significantly lacking. It is largely based on non-market-specific documentary evidence. The market-specific empirical evidence we do have implies very small (i.e. low single-digit) and often statistically insignificant price increases or no price increases at all.”
Second, the FTC’s requirement that Cardinal Health disgorge $26.8 million into a fund for allegedly injured consumers is unmeritorious and inappropriately chills potentially procompetitive behavior. Commissioner Ohlhausen focused on how this case ran afoul of the FTC’s 2003 Policy Statement on Monetary Equitable Remedies in Competition Cases (Policy Statement) (withdrawn by the FTC in 2012, over Commissioner Ohlhausen’s dissent), which reserves disgorgement for cases in which the underlying violation is clear and there is a reasonable basis for calculating the amount of a remedial payment. As Ohlhausen explained, this case violates those principles because (1) it does not involve a clear violation of the antitrust laws (see above) and, given the lack of anticompetitive effects evidence (see above), (2) there is no reasonable basis for calculating the disgorgement amount (indeed, there is “the real possibility of no ill-gotten gains for Cardinal”). Furthermore:
“The lack of guidance from the Commission on the use of its disgorgement authority [following withdrawal of the Policy Statement] makes any such use inherently unpredictable and thus unfair. . . . The Commission therefore ought to reinstate the Policy Statement – either in its original form or in some modified form that the current Commissioners can agree on – or provide some additional guidance on when it plans to seek the extraordinary remedy of disgorgement in antitrust cases.”
In his critique of disgorgement, Commissioner Wright deployed law and economics analysis (and, in particular, optimal deterrence theory). He explained that regulators should be primarily concerned with over-deterrence in single-firm conduct cases such as this one, which raise the possibility of private treble damage actions. Wright stressed:
“I would . . . pursue disgorgement only against naked price fixing agreements among competitors or, in the case of single-firm conduct, only if the monopolist’s conduct violates the Sherman Act and has no plausible efficiency justification. . . . This case does not belong in that category. Declining to pursue disgorgement in most cases involving vertical restraints has the virtue of taking the remedy off the table – and thus reducing the risk of over-deterrence – in the cases that present the most difficulty in distinguishing between anticompetitive conduct that harms consumers and procompetitive conduct that benefits them, such as the present case.”
In sum, one may hope that in the future the FTC: (1) will be more attentive to the potential efficiencies of exclusive dealing; (2) will proceed far more cautiously before proposing an enforcement action in the exclusive dealing area; (3) will avoid applying disgorgement in exclusive dealing cases; and (4) will promulgate a new disgorgement policy statement that reserves disgorgement for unequivocally illegal antitrust offenses in which economic harm can readily be calculated with a high degree of certainty.
Today comes news that Senator Kohl has sent a letter to the DOJ urging “careful review” of the proposed Google/ITA merger. Underlying his concerns (or rather the “concerns raised by a number of industry participants and consumer advocates that I believe warrant careful review”) is this:
Many of ITA’s customers believe that access to ITA’s technology is critical to competition in online air travel search because it cannot be matched by other players in the travel search industry. They claim that ITA’s superior access to information and superior technology enables it to provide faster and better results to consumers. As a result, some of these industry participants and independent experts fear that the current high level of competition among online travel agents and metasearch providers could be undermined if Google were to acquire ITA and start its own OTA or metasearch service. If this were to happen, they argue, consumers would lose the benefits of a robustly competitive online air travel market.
For several reasons, these complaints are without merit and a challenge to the Google/ITA merger would be premature at best—and a costly mistake at worst.
The high-tech market is innovative and dynamic. Goods and services that were once inconceivable are now indispensable, and competition has improved the quality of technology while driving down its costs. But as the market continues to change, antitrust interventions are stuck using a static regulatory framework. As the government develops a strategy for regulating competition in the digital marketplace, it must tread carefully—excessive intervention will stifle innovation, harm consumers, and prevent growth. And given the link between innovation and economic growth, the stakes of “getting it right” are high. The individual nature of every decision, however, makes errors in antitrust enforcement inevitable. Some conduct that is bad for competition will be allowed to go on while some conduct that is good for competition will be blocked by intervention.
But prosecuting pro-competitive conduct is almost certainly more costly than mistakenly allowing anticompetitive conduct because mechanisms are in place to mitigate the latter but not the former. The cost of erroneous intervention is the loss to consumers directly and a deterrent effect on innovation—for fear of intervention, companies may not take large risks. Meanwhile, allowing conduct to persist amidst uncertainty allows the potential benefits of conduct to materialize while maintaining checks against practices that are bad for consumers: both the competitive marketplace and future enforcers have the power to mitigate specific anticompetitive outcomes that may arise. Unfortunately, current antitrust enforcement—abetted by influential congressmen like Senator Kohl—is more, rather than less, aggressive against innovative companies in high-tech industries. This aggression threatens to stifle growth and deter future innovation in a market with incredible potential.
Google has become a primary target of this scrutiny, and the company’s proposed acquisition of ITA, a software company that compiles and processes travel data, is a good example of aggressive scrutiny threatening to stifle growth.
Google’s acquisition of ITA is a straightforward merger where one company has decided to purchase another outright (instead of merely purchasing its services through contract). There are good reasons for integration. Most notably, Google gets to exercise direct control over ITA’s talented engineers if it owns ITA—influence that it would not have if the company simply signed a contract with ITA. If Google is correct that it can manage ITA’s resources better than ITA’s current management, then integration makes sense and is valuable for consumers.
The primary concern raised over Google’s proposed acquisition of ITA is that acquisition would “leverage” Google’s alleged dominance into another market—the online travel search market—and permit Google to prevent its competitors from accessing ITA’s high-quality analysis of flights and fares.
There are a few problems with this.
First, ITA does not provide or own the underlying data (this comes from the airlines themselves); rather it works only to analyze and process it—processing that other companies can and do undertake. It may have developed superior technology to engage in this processing, but that is precisely why it (and consumers) should not be penalized by its competitors’ efforts to hamstring it. Remember—although most of the hand-wringing surrounding this deal concerns Google, it is first and foremost the innovative entrepreneurs at ITA who would be prevented from capitalizing on their success if the deal is stopped.
Second, it is hard to see why, under the facts as alleged by the deal’s naysayers, consumers would be worse off if Google owns ITA than if ITA stands on its own. The claims seem to turn on ITA’s indispensability to the online travel industry. But if ITA is so indispensable—if it possesses such market power, in other words—it’s hard to see how its incentives to capitalize on that market power would change simply by virtue of a change in its management. Either ITA possesses market power and is already taking advantage of it (or else its managers are leaving money on the table and it most certainly should be taken over by another set of managers) or else it does not actually possess this market power and its combination with Google, even if Google were to keep all of ITA’s technology for itself, will do little to harm the rest of the industry as its competitors step up and step in to take its place.
Third, and related to these, is the simple repugnance of hamstringing successful entrepreneurs because of the exhortations of their competitors, and of the implication that a successful company’s work product (like ITA’s “superior technology”) must be rendered widely available, by government force if necessary.
Meanwhile, Google does not seem to have any interest in selling airline tickets or making airline reservations (just as it doesn’t sell the retail goods one can search for using its site). Instead, its interest is in providing its users easy access to airline flight and pricing data and giving online travel agencies the ability to bid on the sale of tickets to Google users looking to buy. The availability of this information via Google search will lower search costs for consumers and the expected bidding should increase competition and drive down travel costs for consumers. It is easy to see why companies like Kayak and Bing Travel and Expedia and Travelocity might be unhappy about this, but far more difficult to see how their woes should be a problem for the antitrust enforcers (or Congress, for that matter).
The point is not that we know that Google—or any other high-tech company’s—conduct is pro-competitive, but rather that the very uncertainty surrounding it counsels caution, not aggression. As the technology, usage and market structure change, so do the strategies of the various businesses that build up around them. These new strategies present unknown and unprecedented challenges to regulators, and these new challenges call for a deferential approach. New conduct is not necessarily anticompetitive conduct, and if our antitrust regulation does not accept this, we all lose.
Baker’s central thesis in Preserving a Political Bargain builds on earlier work concerning competition policy as an implicit political bargain that was reached during the 1940s between the more extreme positions of laissez-faire on the one hand and regulation on the other. The new piece tries to explain what Baker describes as the “non-interventionist” critique of monopolization enforcement within this framework. The piece is motivated, at least in part, by the Section 2 Report debates. Baker’s basic story is fairly straightforward. Under Baker’s account, competition policy is the outcome of the political bargaining process described above. The “competition policy bargain” was then successfully modified in the 1980s in response to the Chicago School critique. According to Baker, during the 1970s and 80s, “the Supreme Court revised many if not most aspects of antitrust law along the lines suggested by legal and economic commentators loosely associated with the University of Chicago,” though this revolution changed the antitrust laws “dramatically but not fundamentally” and reflected a “bipartisan consensus in favor of reforming antitrust rules to enhance the efficiency gains arising from competition policy.”
Baker applies his “political bargain” framework to argue that the “modern non-interventionist critique,” unlike the successful attempt to modify the “terms” of the bargain in the 1980s, is highly likely to fail. Baker defines the non-interventionist critique as relying on a particular series of legal and economic arguments. For example, Baker describes the economic arguments deployed by the non-interventionists as that “markets are self-correcting,” “monopoly fosters economic growth,” “there is a single monopoly profit,” “excluded fringe rivals may not matter competitively,” “courts cannot reliably identify monopolization,” and so on. Animated by the Section 2 Hearings, Report, its withdrawal, and the subsequent controversy, Baker begins from the assumption that the non-interventionists are trying to modify an existing bargain, since non-interventionists are “the primary source of recent criticism of monopolization standards.” From there, Baker argues that this concerted effort to modify the competition bargain in favor of less intervention is unlikely to succeed because such an attempted modification is unlikely to mobilize broader political support in the current social environment.
Let me start by saying that I agree entirely with the ultimate conclusion, insofar as I don’t think there is any doubt that, in the current environment, it is unlikely that the implicit “policy bargain” will be modified in a way that makes it more difficult for monopolization plaintiffs. I have much more trouble with the premise of the exercise, and with how one knows a deviation from the current policy bargain when he sees one, and so will focus my critique on those issues.
Baker paints the picture of a dramatic and fundamental attack by non-interventionists on monopolization enforcement. My response to the premise of the paper was: “What non-interventionist effort to further relax monopolization standards?” To be sure, there are plenty of folks who have cautioned against expansive use of Section 2. It strikes me that the fundamental weakness in Baker’s analysis is that his starting point – the “terms” of the current political bargain – derives from assumptions that don’t seem to square with reality. In other words, rather than envisioning the current debates around Section 2 as an assault by non-interventionists, there is a much more compelling case that it is the interventionists attempting to “deviate” from whatever implicit political bargain exists with respect to competition policy. Christine Varney’s declaration that there is “no such thing as a false positive” – notwithstanding that the existence of false positives has been a seminal observation since The Limits of Antitrust (in 1984, no less) – immediately leaps to mind. I will turn below to making the case that it is the interventionists who are offering the modification.
But first note that Baker leaves out of his list of “economic arguments” against Section 2 both error costs and that there is little empirical evidence that aggressive monopolization enforcement generates consumer benefits. This is, in my view, an important omission since Baker makes the point that all of the other economic arguments have attracted rebuttals. If there has been a rebuttal of the argument that the empirical evidence suggests that instances of anticompetitive exclusive dealing, RPM, tying and vertical integration are quite rare, or an empirical demonstration that monopolization enforcement has generated consumer welfare gains net of error and administrative costs, I’d like to see it. Further, note that the original Chicago School argument, a la Director & Levi, against monopolization enforcement was not that anticompetitive exclusion was impossible, but rather that it was sufficiently rare in the world as an empirical matter as to be irrelevant to policy formation. Baker ignores this empirical, evidence-based non-interventionist critique, which, for example, has been the core of the position taken by modern academic skeptics of monopolization enforcement like myself, Dan Crane, Tim Muris, Bruce Kobayashi, Luke Froeb, and David Evans.
What is the evidence that there is a non-interventionist attack on the current competition policy bargain as it exists with respect to monopolization? Not much. The first piece of evidence offered is that the non-interventionists are the “source of criticism of recent monopolization standards.” In parts of the paper, Baker equates the non-interventionists with business interests. But under that formulation, there is not much evidence to support the proposition. If anything, and as Baker readily acknowledges in a footnote, the headlines seem to tell a story of AMD, Google, Microsoft, Adobe, and others expending resources to instigate antitrust enforcement against rivals, not to restrict the scope of Section 2.
Baker cites more generally the recent monopolization controversy as driven by a non-interventionist attempt to deviate from the status quo. But this part of the analysis reads to me as driven entirely by the assertion that the competition policy preferences Baker appears to favor are part of the “political bargain,” with opposition to those (interventionist) policies deemed attempted “deviations.” Perhaps this is a problem of hammers and nails. Baker is more interventionist than I am, and so he sees the obstacles between his ideal vision of antitrust law and reality as caused by non-interventionists. But I’ve got a different hammer and see different nails. For example, I read the Section 2 Report as largely (but not entirely) limited to a description of Section 2 law as it exists, with the vigorously dissenting voices coming from the interventionist crowd. As George Priest has put it:
It’s fair enough for a succeeding administration to reject policies of its predecessor. But the Justice Department report was not authored by John Yoo or Alberto Gonzales. It was the work of a year-long study that considered recommendations from 29 panels and 119 witnesses, most of them critical of the minimalist Chicago School approach to antitrust law. The report’s conclusions basically track Supreme Court law with modest extensions in areas where the Supreme Court has not ruled. Ms. Varney denounced the report in its entirety.
Finding the evidence of a strong non-interventionist attempt to impose dramatic change on Section 2 – change that would deviate from the current political bargain – lacking, I offer an alternative hypothesis: it is the interventionists who are attempting to deviate from the current political bargain and propose change.
For starters, I think that Baker and I would agree that there actually is a “stable” competition policy bargain with respect to monopolization that has drawn bipartisan support over the last twenty years – at least in the courts. Note that even restricting attention to decisions during the George W. Bush administration from 2004-08, the total vote count of these decisions was 86-9, with 7 of 11 decisions decided unanimously, and only Leegin attracted more than two votes of dissent (and more likely, as others have pointed out, for its implications with respect to abortion jurisprudence than anything to do with the antitrust analysis of vertical restraints!). The monopolization-related decisions of the modern era, including Trinko, Linkline, Credit Suisse, and Brooke Group, have all made life more difficult for plaintiffs in one way or another. And as I’ve written on this blog over and over again, the error-cost analysis embedded in these decisions is a key feature of modern Section 2 jurisprudence; so, as I understand it, these decisions must be part of the current bargain. It would be difficult, in fact, to find another area of law in which the Court has articulated principles with such overriding unanimity despite persistent attempts by some scholars to advocate for an alternate overarching legal framework. I think there is a much more compelling story – and one backed by greater evidence than Baker’s narrative – to tell about the modern attempt of the interventionists to renegotiate terms. Let’s discuss some of the evidence.
Consider first the strongly toned dissents from the Section 2 Report at both Agencies – dissents lodged after Hearings with witnesses and testimony from all possible sides of the debate, and directed even at the parts of the Report that merely describe the law. They suggest dissatisfaction with the terms of the modern bargain Baker describes, terms represented by the monopolization case law created over the past several decades by supermajority Supreme Court decisions. It is AAG Varney who recently, as Baker acknowledges in the paper, minimized the importance of Trinko under Section 2 in favor of “tried and true” cases like Aspen Skiing. This is, of course, to say nothing of AAG Varney’s endorsement of an antitrust policy free of error-cost considerations.
Further, it is the interventionists at the Federal Trade Commission that have turned to an expanded vision of Section 5 to evade the constraints imposed by Section 2. In fact, the Commission has explicitly announced that it does not think that the constraints imposed on plaintiffs under Section 2 should apply to the antitrust agencies! If this is not an attempt to deviate from the existing political bargain in an interventionist direction, I’m not sure what is. Put another way, interventionists are currently attempting to re-write existing Section 2 law – the “political bargain” – through Section 5. Given the Complaint in Intel and promised use of Section 5 in broad circumstances previously covered under the Section 2 law envisioned under the “stable” bargain that Baker describes as generating bipartisan support from Democrats and Republicans, surely this is an attempt to deviate from the prior bargain.
It is the interventionists who have provided new economic arguments in favor of greater antitrust enforcement. For example, the recent trend toward reliance on behavioral economics, endorsed by the agencies, emerges out of dissatisfaction with the Chicago and Post-Chicago School theories that adopt rational-actor models and, presumably, out of an inability to get substantial traction in the federal courts with the existing interventionist models provided by the Post-Chicago School.
The interventionist assault on the current implicit competition policy bargain goes further than the agencies, though. Congress currently has before it legislation that would take the development of a rule of reason standard for minimum RPM out of the courts, a Twombly repealer, legislation to make reverse payments in pharmaceutical patent settlements illegal, and legislation to regulate interchange fees. Every one of these proposals represents an interventionist reaction attempting to overturn a judicial application of current competition law, and together they suggest that perhaps the interventionists do not trust the courts to oversee the political bargain.
The premise of Baker’s analysis (that the non-interventionists are strongly challenging the current status quo) is either false to begin with or practically irrelevant in light of the much more important interventionist challenge. Note again that Baker’s claim is that the non-interventionists would fail in any attempt to reduce the scope of monopolization enforcement because they will not be able to generate broader political support in the current environment. No doubt that is true. But what about the interventionists’ chances for success? Baker’s analysis provides a very interesting lens through which to evaluate questions like whether the interventionists will be successful in renegotiating the terms of the competition policy bargain. At the moment, though things may be changing, they seem to have greater political support. I think the most interesting conflict arising out of Baker’s conception of competition between stakeholders in antitrust policy is that it illuminates what might be a battle for supremacy between agencies and courts in governing the bargain. As Baker notes, the courts have been a critical part of establishing the terms of the bargain and adjudicating attempts by private plaintiffs and agencies to “re-negotiate” over time. Recently, interventionists have attempted to shift antitrust (and consumer protection) enforcement away from courts and toward administrative agencies, as with Section 5 and the proposed CFPA. To me, these present more important and interesting policy questions than whether non-interventionists will be successful in further shrinking Section 2 law. I believe that the prediction emerging from Baker’s model depends on what happens with the political environment in the next few years.
My prediction, for what it’s worth, is that the current policy bargain will certainly hold together in the courts. The remarkable strength of the current Section 2 status quo is held together by a combination of the intuitive appeal of price theory to generalist judges and the explanatory power of the so-called Chicago School theories relative to the more interventionist Post-Chicago and behavioral economics alternatives. Nothing there has changed. I have less of a sense of what impact Congressional changes, judicial nominations, and the rise of the EU as a monopolization enforcer will have on monopolization enforcement in the United States.