
[TOTM: The following is part of a digital symposium by TOTM guests and authors on the law, economics, and policy of the antitrust lawsuits against Google. The entire series of posts is available here.]

The U.S. Department of Justice’s (DOJ) antitrust case against Google, which was filed in October 2020, will be a tough slog.[1] It is an alleged monopolization (Sherman Act, Sec. 2) case; and monopolization cases are always a tough slog.

In this brief essay I will lay out some of the issues in the case and raise an intriguing possibility.

What is the case about?

The case is about exclusivity and exclusion in the distribution of search engine services: Google paid substantial sums to Apple and to the manufacturers of Android-based mobile phones and tablets, and also to wireless carriers and web-browser proprietors—in essence, to distributors—to install the Google search engine as the exclusive, pre-set (installed) default search program. The suit alleges that Google thereby made it more difficult for other search-engine providers (e.g., Bing; DuckDuckGo) to obtain distribution for their search-engine services, and thus to attract search-engine users and to sell the online advertising that is associated with search-engine use and that provides the revenue to support the search “platform” in this “two-sided market” context.[2]

Exclusion can be seen as a form of “raising rivals’ costs.”[3]  Equivalently, exclusion can be seen as a form of non-price predation. Under either interpretation, the exclusionary action impedes competition.

It’s important to note that these allegations are different from those that motivated an investigation by the Federal Trade Commission (which the FTC dropped in 2013) and the cases by the European Union against Google.[4]  Those cases focused on alleged self-preferencing: that Google was unduly favoring its own products and services (e.g., travel services) in its delivery of search results to users of its search engine. In those cases, the impairment of competition (arguably) happens with respect to those competing products and services, not with respect to search itself.

What is the relevant market?

For a monopolization allegation to have any meaning, there needs to be the exercise of market power (which would have adverse consequences for the buyers of the product). And in turn, that exercise of market power needs to occur in a relevant market: one in which market power can be exercised.

Here is one of the important places where the DOJ’s case is likely to turn into a slog: the delineation of a relevant market for alleged monopolization cases remains a largely unsolved problem for antitrust economics.[5]  This is in sharp contrast to the issue of delineating relevant markets for the antitrust analysis of proposed mergers.  For this latter category, the paradigm of the “hypothetical monopolist” and the possibility that this hypothetical monopolist could prospectively impose a “small but significant non-transitory increase in price” (SSNIP) has carried the day for the purposes of market delineation.

But no such paradigm exists for monopolization cases, in which the usual allegation is that the defendant already possesses market power and has used the exclusionary actions to buttress that market power. To see the difficulties, it is useful to recall the basic monopoly diagram from Microeconomics 101. A monopolist faces a negatively sloped demand curve for its product (at higher prices, less is bought; at lower prices, more is bought) and sets a profit-maximizing price at the level of output where its marginal revenue (MR) equals its marginal costs (MC). Its price is thereby higher than an otherwise similar competitive industry’s price for that product (to the detriment of buyers) and the monopolist earns higher profits than would the competitive industry.

But unless there are reliable benchmarks as to what the competitive price and profits would otherwise be, any information as to the defendant’s price and profits has little value with respect to whether the defendant already has market power. Also, a claim that a firm does not have market power because it faces rivals and thus isn’t able profitably to raise its price from its current level (because it would lose too many sales to those rivals) similarly has no value. Recall the monopolist from Micro 101. It doesn’t set a higher price than the one where MR=MC, because it would thereby lose too many sales to other sellers of other things.

Thus, any firm—regardless of whether it truly has market power (like the Micro 101 monopolist) or is just another competitor in a sea of competitors—should have already set its price at its profit-maximizing level and should find it unprofitable to raise its price from that level.[6]  And thus the claim, “Look at all of the firms that I compete with!  I don’t have market power!” similarly has no informational value.
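The Micro 101 logic can be made concrete with a toy calculation. The following sketch uses a hypothetical linear demand curve and made-up numbers (nothing from the case itself) to show why "I can't profitably raise my price" is uninformative:

```python
# Toy illustration of the cellophane-fallacy logic: a profit-maximizing
# firm -- monopolist or not -- never gains by raising price above the
# point where marginal revenue equals marginal cost.

def profit(price, a=100.0, b=1.0, mc=20.0):
    """Profit under hypothetical linear demand Q = a - b*P with constant marginal cost."""
    quantity = max(a - b * price, 0.0)
    return (price - mc) * quantity

# With demand Q = a - b*P, marginal revenue equals marginal cost at
# Q* = (a - b*mc)/2, giving the profit-maximizing price P* = (a + b*mc)/(2b).
a, b, mc = 100.0, 1.0, 20.0
p_star = (a + b * mc) / (2 * b)   # = 60.0

# Any deviation from P* -- up or down -- lowers profit, so the inability to
# profitably raise price tells us nothing about whether the firm has market power.
assert profit(p_star) > profit(p_star + 5)
assert profit(p_star) > profit(p_star - 5)
print(p_star, profit(p_star))  # 60.0 1600.0
```

The same arithmetic holds whether the firm is a true monopolist or one competitor among many; only the shape of the demand curve it faces differs, and that is precisely what is hard to observe.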

Let us now bring this problem back to the Google monopolization allegation:  What is the relevant market?  In the first instance, it has to be “the provision of answers to user search queries.” After all, this is the “space” in which the exclusion occurred. But there are categories of search: e.g., search for products/services, versus more general information searches (“What is the current time in Delaware?” “Who was the 21st President of the United States?”). Do those separate categories themselves constitute relevant markets?

Further, what would the exercise of market power in a (delineated relevant) market look like?  Higher-than-competitive prices for advertising that targets search-results recipients is one obvious answer (but see below). In addition, because this is a two-sided market, the competitive “price” (or prices) might involve payments by the search engine to the search users (in return for their exposure to the lucrative attached advertising).[7]  And product quality might exhibit less variety than a competitive market would provide; and/or the monopolistic average level of quality would be lower than in a competitive market: e.g., more abuse of user data, and/or deterioration of the delivered information itself, via more self-preferencing by the search engine and more advertising-driven preferencing of results.[8]

In addition, a natural focus for a relevant market is the advertising that accompanies the search results. But now we are at the heart of the difficulty of delineating a relevant market in a monopolization context. If the relevant market is “advertising on search engine results pages,” it seems highly likely that Google has market power. If the relevant market instead is all online U.S. advertising (of which Google’s revenue share accounted for 32% in 2019[9]), then the case is weaker; and if the relevant market is all advertising in the United States (which is about twice the size of online advertising[10]), the case is weaker still. Unless there is some competitive benchmark, there is no easy way to delineate the relevant market.[11]

What exactly has Google been paying for, and why?

As many critics of the DOJ’s case have pointed out, it is extremely easy for users to switch their default search engine. If internet search were a normal good or service, this ease of switching would leave little room for the exercise of market power. But in that case, why is Google willing to pay $8-$12 billion annually for the exclusive default setting on Apple devices and large sums to the manufacturers of Android-based devices (and to wireless carriers and browser proprietors)? Why doesn’t Google instead run ads in prominent places that remind users how superior Google’s search results are and how easy it is for users (if they haven’t already done so) to switch to the Google search engine and make Google the user’s default choice?

Suppose that user inertia is important. Further suppose that users generally have difficulty in making comparisons with respect to the quality of delivered search results. If this is true, then being the default search engine on Apple and Android-based devices and on other distribution vehicles would be valuable. In this context, the inertia of their customers is a valuable “asset” of the distributors that the distributors may not be able to take advantage of, but that Google can (by providing search services and selling advertising). The question of whether Google’s taking advantage of this user inertia means that Google exercises market power takes us back to the issue of delineating the relevant market.

There is a further wrinkle to all of this. It is a well-understood concept in antitrust economics that an incumbent monopolist will be willing to pay more for the exclusive use of an essential input than a challenger would pay for access to the input.[12] The basic idea is straightforward. By maintaining exclusive use of the input, the incumbent monopolist preserves its (large) monopoly profits. If the challenger enters, the incumbent will then earn only its share of the (much lower, more competitive) duopoly profits. Similarly, the challenger can expect only the lower duopoly profits. Accordingly, the incumbent should be willing to outbid (and thereby exclude) the challenger and preserve the incumbent’s exclusive use of the input, so as to protect those monopoly profits.
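A back-of-the-envelope version of this bidding logic, with purely hypothetical profit figures chosen for illustration, runs as follows:

```python
# Gilbert-Newbery bidding logic with made-up numbers: the incumbent's
# willingness to pay for exclusive use of an input is the monopoly profit
# it preserves minus the duopoly profit it would otherwise earn; the
# challenger's willingness to pay is at most its duopoly profit.

monopoly_profit = 100.0      # incumbent's profit with exclusivity (assumed)
incumbent_duopoly = 30.0     # incumbent's profit if the challenger enters (assumed)
challenger_duopoly = 30.0    # challenger's profit if it enters (assumed)

incumbent_max_bid = monopoly_profit - incumbent_duopoly   # 70.0
challenger_max_bid = challenger_duopoly                   # 30.0

# Because entry dissipates total profits (100 > 30 + 30), the incumbent
# can always outbid the challenger and preserve its monopoly.
assert incumbent_max_bid > challenger_max_bid
print(incumbent_max_bid, challenger_max_bid)  # 70.0 30.0
```

The key assumption is only that combined duopoly profits fall short of the monopoly profit, which is the standard result when entry intensifies competition.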

To bring this to the Google monopolization context, if Google does possess market power in some aspect of search—say, because online search-linked advertising is a relevant market—then Google will be willing to outbid Microsoft (which owns Bing) for the “asset” of default access to Apple’s (inertial) device owners. That Microsoft is a large and profitable company and could afford to match (or exceed) Google’s payments to Apple is irrelevant. If the duopoly profits for online search-linked advertising would be substantially lower than Google’s current profits, then Microsoft would not find it worthwhile to try to outbid Google for that default access asset.

Alternatively, this scenario could be wholly consistent with an absence of market power. If search users (who can easily switch) consider Bing to be a lower-quality search service, then large payments by Microsoft to outbid Google for those exclusive default rights would be largely wasted, since the “acquired” default search users would quickly switch to Google (unless Microsoft provided additional incentives for the users not to switch).

But this alternative scenario returns us to the original puzzle:  Why is Google making such large payments to the distributors for those exclusive default rights?

An intriguing possibility

Consider the following possibility. Suppose that Google was paying that $8-$12 billion annually to Apple in return for the understanding that Apple would not develop its own search engine for Apple’s device users.[13] This possibility was not raised in the DOJ’s complaint, nor is it raised in the subsequent suits by the state attorneys general.

But let’s explore the implications by going to an extreme. Suppose that Google and Apple had a formal agreement that—in return for the $8-$12 billion per year—Apple would not develop its own search engine. In this event, this agreement not to compete would likely be seen as a violation of Section 1 of the Sherman Act (which does not require a market delineation exercise) and Apple would join Google as a co-conspirator. The case would take on the flavor of the FTC’s prosecution of “pay-for-delay” agreements between the manufacturers of patented pharmaceuticals and the generic drug manufacturers that challenge those patents and then receive payments from the former in return for dropping the patent challenge and delaying the entry of the generic substitute.[14]

As of this writing, there is no evidence of such an agreement and it seems quite unlikely that there would have been a formal agreement. But the DOJ will be able to engage in discovery and take depositions. It will be interesting to find out what the relevant executives at Google—and at Apple—thought was being achieved by those payments.

What would be a suitable remedy/relief?

The DOJ’s complaint is vague with respect to the remedy that it seeks. This is unsurprising. The DOJ may well want to wait to see how the case develops and then amend its complaint.

However, even if Google’s actions have constituted monopolization, it is difficult to conceive of a suitable and effective remedy. One apparently straightforward remedy would be to require simply that Google not be able to purchase exclusivity with respect to the pre-set default settings. In essence, the device manufacturers and others would always be able to sell parallel default rights to other search engines: on the basis, say, that the default rights for some categories of customers—or even a percentage of general customers (randomly selected)—could be sold to other search-engine providers.

But now the Gilbert-Newbery insight comes back into play. Suppose that a device manufacturer knows (or believes) that Google will pay much more if—even in the absence of any exclusivity agreement—Google ends up being the pre-set search engine for all (or nearly all) of the manufacturer’s device sales, as compared with what the manufacturer would receive if those default rights were sold to multiple search-engine providers (including, but not solely, Google). Can that manufacturer (recall that the distributors are not defendants in the case) be prevented from making this sale to Google and thus (de facto) continuing Google’s exclusivity?[15]

Even a requirement that Google not be allowed to make any payment to the distributors for a default position may not improve the competitive environment. Google may be able to find other ways of making indirect payments to distributors in return for attaining default rights, e.g., by offering them lower rates on their online advertising.

Further, if the ultimate goal is an efficient outcome in search, it is unclear how far restrictions on Google’s bidding behavior should go. If Google were forbidden from purchasing any default installation rights for its search engine, would (inert) consumers be better off? Similarly, if a distributor were to decide independently that its customers were better served by installing the Google search engine as the default, would that not be allowed? But if it is allowed, how could one be sure that Google wasn’t indirectly paying for this “independent” decision (e.g., through favorable advertising rates)?

It’s important to remember that this (alleged) monopolization is different from the Standard Oil case of 1911 or even the (landline) AT&T case of 1984. In those cases, there were physical assets that could be separated and spun off to separate companies. For Google, physical assets aren’t important. Although it is conceivable that some of Google’s intellectual property—such as Gmail, YouTube, or Android—could be spun off to separate companies, doing so would do little to cure the (arguably) fundamental problem of the inert device users.

In addition, if there were an agreement between Google and Apple for the latter not to develop a search engine, then large fines for both parties would surely be warranted. But what next? Apple can’t be forced to develop a search engine.[16] This differentiates such an arrangement from the “pay-for-delay” arrangements for pharmaceuticals, where the generic manufacturers can readily produce a near-identical substitute for the patented drug and are otherwise eager to do so.

At the end of the day, forbidding Google from paying for exclusivity may well be worth trying as a remedy. But as the discussion above indicates, it is unlikely to be a panacea and is likely to require considerable monitoring for effective enforcement.


The DOJ’s case against Google will be a slog. There are unresolved issues—such as how to delineate a relevant market in a monopolization case—that will be central to the case. Even if the DOJ is successful in showing that Google violated Section 2 of the Sherman Act in monopolizing search and/or search-linked advertising, an effective remedy seems problematic. But there also remains the intriguing question of why Google was willing to pay such large sums for those exclusive default installation rights.

The developments in the case will surely be interesting.

[1] The DOJ’s suit was joined by 11 states.  More states subsequently filed two separate antitrust lawsuits against Google in December.

[2] There is also a related argument:  That Google thereby gained greater volume, which allowed it to learn more about its search users and their behavior, and which thereby allowed it to provide better answers to users (and thus a higher-quality offering to its users) and better-targeted (higher-value) advertising to its advertisers.  Conversely, Google’s search-engine rivals were deprived of that volume, with the mirror-image negative consequences for the rivals.  This is just another version of the standard “learning-by-doing” and the related “learning curve” (or “experience curve”) concepts that have been well understood in economics for decades.

[3] See, for example, Steven C. Salop and David T. Scheffman, “Raising Rivals’ Costs: Recent Advances in the Theory of Industrial Structure,” American Economic Review, Vol. 73, No. 2 (May 1983), pp.  267-271; and Thomas G. Krattenmaker and Steven C. Salop, “Anticompetitive Exclusion: Raising Rivals’ Costs To Achieve Power Over Price,” Yale Law Journal, Vol. 96, No. 2 (December 1986), pp. 209-293.

[4] For a discussion, see Richard J. Gilbert, “The U.S. Federal Trade Commission Investigation of Google Search,” in John E. Kwoka, Jr., and Lawrence J. White, eds. The Antitrust Revolution: Economics, Competition, and Policy, 7th edn.  Oxford University Press, 2019, pp. 489-513.

[5] For a more complete version of the argument that follows, see Lawrence J. White, “Market Power and Market Definition in Monopolization Cases: A Paradigm Is Missing,” in Wayne D. Collins, ed., Issues in Competition Law and Policy. American Bar Association, 2008, pp. 913-924.

[6] Overlooking this important point is often termed “the cellophane fallacy,” since this is what the U.S. Supreme Court did in a 1956 antitrust case in which the DOJ alleged that du Pont had monopolized the cellophane market (and du Pont, in its defense, claimed that the relevant market was much wider: all flexible wrapping materials); see U.S. v. du Pont, 351 U.S. 377 (1956).  For an argument that profit data and other indicia argued for cellophane as the relevant market, see George W. Stocking and Willard F. Mueller, “The Cellophane Case and the New Competition,” American Economic Review, Vol. 45, No. 1 (March 1955), pp. 29-63.

[7] In the context of differentiated services, one would expect prices (positive or negative) to vary according to the quality of the service that is offered.  It is worth noting that Bing offers “rewards” to frequent searchers.  It is unclear whether this pricing structure of payment to Bing’s customers represents what a more competitive framework in search might yield, or whether the payment just indicates that search users consider Bing to be a lower-quality service.

[8] As an additional consequence of the impairment of competition in this type of search market, there might be less technological improvement in the search process itself – to the detriment of users.

[9] As estimated by eMarketer.

[10] See

[11] And, again, if we return to the du Pont cellophane case:  Was the relevant market cellophane?  Or all flexible wrapping materials?

[12] This insight is formalized in Richard J. Gilbert and David M.G. Newbery, “Preemptive Patenting and the Persistence of Monopoly,” American Economic Review, Vol. 72, No. 3 (June 1982), pp. 514-526.

[13] To my knowledge, Randal C. Picker was the first to suggest this possibility.  Whether Apple would be interested in trying to develop its own search engine – given the fiasco a decade ago when Apple tried to develop its own maps app to replace the Google maps app – is an open question.  In addition, the Gilbert-Newbery insight applies here as well:  Apple would be less inclined to invest the substantial resources that would be needed to develop a search engine when it would thereby be in a duopoly market.  But Google might be willing to pay “insurance” to reinforce any doubts that Apple might have.

[14] The U.S. Supreme Court, in FTC v. Actavis, 570 U.S. 136 (2013), decided that such agreements could be anti-competitive and should be judged under the “rule of reason”.  For a discussion of the case and its implications, see, for example, Joseph Farrell and Mark Chicu, “Pharmaceutical Patents and Pay-for-Delay: Actavis (2013),” in John E. Kwoka, Jr., and Lawrence J. White, eds. The Antitrust Revolution: Economics, Competition, and Policy, 7th edn.  Oxford University Press, 2019, pp. 331-353.

[15] This is an example of the insight that vertical arrangements – in this case combined with the Gilbert-Newbery effect – can be a way for dominant firms to raise rivals’ costs.  See, for example, John Asker and Heski Bar-Isaac. 2014. “Raising Retailers’ Profits: On Vertical Practices and the Exclusion of Rivals.” American Economic Review, Vol. 104, No. 2 (February 2014), pp. 672-686.

[16] And, again, for the reasons discussed above, Apple might not be eager to make the effort.


Judges sometimes claim that they do not pick winners when they decide antitrust cases. Nothing could be further from the truth.

Competitive conduct by its nature harms competitors, and so if antitrust were merely to prohibit harm to competitors, antitrust would then destroy what it is meant to promote.

What antitrust prohibits, therefore, is not harm to competitors but rather harm to competitors that fails to improve products. Only in this way is antitrust able to distinguish between the good firm that harms competitors by making superior products that consumers love and that competitors cannot match and the bad firm that harms competitors by degrading their products without offering consumers anything better than what came before.

That means, however, that antitrust must pick winners: antitrust must decide what is an improvement and what is not. And a more popular search engine is a clear winner.

But one should not take its winningness for granted. For once upon a time there was another winner that the courts always picked, blocking antitrust case after antitrust case. Until one day the courts stopped picking it.

That was the economy of scale.

The Structure of the Google Case

Like all antitrust cases that challenge the exercise of power, the government’s case against Google alleges denial of an input to competitors in some market. Here the input is default search status in smartphones, the competitors are rival search providers, and the market is search advertising. The basic structure of the case is depicted in the figure below.

Although brought as a monopolization case under Section 2 of the Sherman Act, this is at heart an exclusive dealing case of the sort normally brought under Section 1 of the Sherman Act: the government’s core argument is that Google uses contracts with smartphone makers, pursuant to which the smartphone makers promise to make Google, and not competitors, the search default, to harm competing search advertising providers and by extension competition in the search advertising market.

The government must show anticompetitive conduct, monopoly power, and consumer harm in order to prevail.

Let us assume that there is monopoly power. The company has more than 70% of the search advertising market, which is in the zone normally required to prove that element of a monopolization claim.

The problem of anticompetitive conduct is only slightly more difficult.

Anticompetitive conduct is only ever one thing in antitrust: denial of an essential input to a competitor. There is no other way to harm rivals.

(To be sure, antitrust prohibits harm to competition, not competitors, but that means only that harm to competitors is necessary but insufficient for liability. The consumer harm requirement decides whether the requisite harm to competitors is also harm to competition.)

It is not entirely clear just how important default search status really is to running a successful search engine, but let us assume that it is essential, as the government suggests.

Then the question whether Google’s contracts are anticompetitive turns on how much of the default search input Google’s contracts foreclose to rival search engines. If a lot, then the rivals are badly harmed. If a little, then there may be no harm at all.

The answer here is that there is a lot of foreclosure, at least if the government’s complaint is to be believed. Through its contracts with Apple and makers of Android phones, Google has foreclosed default search status to rivals on virtually every single smartphone.

That leaves consumer harm. And here is where things get iffy.

Usage as a Product Improvement: A Very Convenient Argument

The inquiry into consumer harm evokes measurements of the difference between demand curves and price lines, or extrapolations of compensating and equivalent variation using indifference curves painstakingly pieced together based on the assumptions of revealed preference.

But while the parties may pay experts plenty to spin such yarns, and judges may pretend to listen to them, in the end, for the judges, it always comes down to one question only: did exclusive dealing improve the product?

If it did, then the judge assumes that the contracts made consumers better off and the defendant wins. And if it did not, then off with their heads.

So, does foreclosing all this default search space to competitors make Google search advertising more valuable to advertisers?

Those who leap to Google’s defense say yes, for default search status increases the number of people who use Google’s search engine. And the more people use Google’s search engine, the more Google learns about how best to answer search queries and which advertisements will most interest which searchers. And that ensures that even more people will use Google’s search engine, and that Google will do an even better job of targeting ads on its search engine.

And that in turn makes Google’s search advertising even better: able to reach more people and to target ads more effectively to them.

None of that would happen if defaults were set to other engines and users spurned Google, and so foreclosing default search space to rivals undoubtedly improves Google’s product.

This is a nice argument. Indeed, it is almost too nice, for it seems to suggest that almost anything Google might do to steer users away from competitors and to itself deserves antitrust immunity. Suppose Google were to brandish arms to induce you to run your next search on Google. That would be a crime, but, on this account, not an antitrust crime. For getting you to use Google does make Google better.

The argument that locking up users improves the product is of potential use not just to Google but to any of the many tech companies that run on advertising—Facebook being a notable example—so it potentially immunizes an entire business model from antitrust scrutiny.

It turns out that has happened before.

Economies of Scale as a Product Improvement: Once a Convenient Argument

Once upon a time, antitrust exempted another kind of business whose products improved the more people used them. The business was industrial production, and it differs from online advertising only in the irrelevant characteristic that the improvement that comes with expanding use is not in the quality of the product but in the cost per unit of producing it.

The hallmark of the industrial enterprise is high fixed costs and low marginal costs. The textile mill differs from pre-industrial piecework weaving in that once a $10 million investment in machinery has been made, the mill can churn out yard after yard of cloth for pennies. The pieceworker, by contrast, makes a relatively small up-front investment—the cost of raising up the hovel in which she labors and making her few tools—but spends the same large amount of time to produce each new yard of cloth.

Large fixed costs and low marginal costs lie at the heart of the bounty of the modern age: the more you produce, the lower the unit cost, and so the lower the price at which you can sell your product. This is a recipe for plenty.

But it also means that, so long as consumer demand in a given market is lower than the capacity of any particular plant, driving buyers to a particular seller and away from competitors always improves the product, in the sense that it enables the firm to increase volume and reduce unit cost, and therefore to sell the product at a lower price.
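The arithmetic behind this is simple: with a fixed cost F and marginal cost m, average cost is F/Q + m, which falls toward m as output Q grows. A sketch with hypothetical figures for the textile mill:

```python
# Declining average cost under high fixed costs and low marginal costs
# (illustrative numbers only, not drawn from any real firm).

FIXED_COST = 10_000_000.0   # assumed up-front investment in machinery
MARGINAL_COST = 0.05        # assumed cost of each additional yard of cloth

def average_cost(quantity):
    """Cost per yard: the fixed cost spread over output, plus the marginal cost."""
    return FIXED_COST / quantity + MARGINAL_COST

# Every buyer steered toward this seller lowers its unit cost further,
# so under this cost structure exclusion always "improves" the product.
assert average_cost(1_000_000) > average_cost(10_000_000) > average_cost(100_000_000)
print(round(average_cost(100_000_000), 2))  # 0.15
```

At 100 million yards, the fixed cost contributes only a dime per yard; the pieceworker, whose costs are almost entirely marginal, gets no such benefit from scale.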

If the promise of the modern age is goods at low prices, then the implication is that antitrust should never punish firms for driving rivals from the market and taking over their customers. Indeed, efficiency requires that only one firm should ever produce in any given market, at least in any market for which a single plant is capable of serving all customers.

For antitrust in the late 19th and early 20th centuries, beguiled by this advantage of size, exclusive dealing, refusals to deal, and even the knife in a competitor’s back were all for the better, whether or not they ran afoul of other areas of law, because they allowed industrial enterprises to achieve economies of scale.

It is no accident that, a few notable triumphs aside, antitrust did not come into its own until the mid-1930s, 40 years after its inception, on the heels of an intellectual revolution that explained, for the first time, why it might actually be better for consumers to have more than one seller in a market.

The Monopolistic Competition Revolution

The revolution came in the form of the theory of monopolistic competition and its cousin, the theory of creative destruction, developed between the 1920s and 1940s by Edward Chamberlin, Joan Robinson and Joseph Schumpeter.

These theories suggested that consumers might care as much about product quality as they do about product cost, and indeed would be willing to abandon a low-cost product for a higher-quality, albeit more expensive, one.

From this perspective, the world of economies of scale and monopoly production was the drab world of Soviet state-owned enterprises churning out one type of shoe, one brand of cleaning detergent, and so on.

The world of capitalism and technological advance, by contrast, was one in which numerous firms produced batches of differentiated products in amounts sometimes too small fully to realize all scale economies, but for which consumers were nevertheless willing to pay because the products better fit their preferences.

What is more, the striving of monopolistically competitive firms to lure away each other’s customers with products that better fit their tastes led to disruptive innovation— “creative destruction” was Schumpeter’s famous term for it—that brought about not just different flavors of the same basic concept but entirely new concepts. The competition to create a better flip phone, for example, would lead inevitably to a whole new paradigm, the smartphone.

This reasoning combined with work in the 1940s and 1950s on economic growth that quantified for the first time the key role played by technological change in the vigor of capitalist economies—the famous Solow residual—to suggest that product improvements, and not the cost reductions that come from capital accumulation and their associated economies of scale, create the lion’s share of consumer welfare. Innovation, not scale, was king.

Antitrust responded by, for the first time in its history, deciding between kinds of product improvements, rather than just in favor of improvements, casting economies of scale out of the category of improvements subject to antitrust immunity, while keeping quality improvements immune.

Casting economies of scale out of the protected product improvement category gave antitrust something to do for the first time. It meant that big firms had to plead more than just the cost advantages of being big in order to obtain license to push their rivals around. And government could now start reliably to win cases, rather than just the odd cause célèbre.

It is this intellectual watershed, and not Thurman Arnold’s tenacity, that was responsible for antitrust’s emergence as a force after World War Two.

Usage-Based Improvements Are Not Like Economies of Scale

The improvements in advertising that come from user growth fall squarely on the quality side of the ledger—the value they create is not due to the ability to average production costs over more ad buyers—and so they count as the kind of product improvements that antitrust continues to immunize today.

But given the pervasiveness of this mode of product improvement in the tech economy—the fact that virtually any tech firm that sells advertising can claim to be improving a product by driving users to itself and away from competitors—it is worth asking whether we have not reached a new stage in economic development in which this form of product improvement ought, like economies of scale, to be denied protection.

Shouldn’t the courts demand more and better innovation of big tech firms than just the same old big-data-driven improvements they serve up year after year?

Galling as it may be to those who, like myself, would like to see more vigorous antitrust enforcement in general, the answer would seem to be “no.” For what induced the courts to abandon antitrust immunity for economies of scale in the mid-20th century was not the mere fact that immunizing economies of scale paralyzed antitrust. Smashing big firms is not, after all, an end in itself.

Instead, monopolistic competition, creative destruction and the Solow residual induced the change, because they suggested both that other kinds of product improvement are more important than economies of scale and, crucially, that protecting economies of scale impedes development of those other kinds of improvements.

A big firm that excludes competitors in order to reach scale economies not only excludes competitors who might have produced an identical or near-identical product, but also excludes competitors who might have produced a better-quality product, one that consumers would have preferred to purchase even at a higher price.

To cast usage-based improvements out of the product improvement fold, a case must be made that excluding competitors in order to pursue such improvements will block a different kind of product improvement that contributes even more to consumer welfare.

If we could say, for example, that suppressing search competitors suppresses more-innovative search engines that ad buyers would prefer, even if those innovative search engines were to lack the advantages that come from having a large user base, then a case might be made that user growth should no longer count as a product improvement immune from antitrust scrutiny.

And even then, the case against usage-based improvements would need to be general enough to justify an epochal change in policy, rather than be limited to a particular technology in a particular lawsuit. For the courts hate to balance in individual cases, statements to the contrary in their published opinions notwithstanding.

But there is nothing in the Google complaint, much less the literature, to suggest that usage-based improvements are problematic in this way. Indeed, much of the value created by the information revolution seems to inhere precisely in its ability to centralize usage.

Americans Keep Voting to Centralize the Internet

In the early days of the internet, theorists mistook its decentralized architecture for a feature, rather than a bug. But internet users have since shown, time and again, that they believe the opposite.

For example, the basic protocols governing email were engineered to allow every American to run his own personal email server.

But Americans hated the freedom that created—not least the spam—and opted instead to get their email from a single server: the one run by Google as Gmail.

The basic protocols governing web traffic were also designed to allow every American to run whatever other communications services he wished—chat, video chat, RSS, webpages—on his own private server in distributed fashion.

But Americans hated the freedom that created—not least having to build and rebuild friend networks across platforms—and they voted instead overwhelmingly to get their social media from a single server: Facebook.

Indeed, the basic protocols governing internet traffic were designed to allow every business to store and share its own data from its own computers, in whatever form.

But American businesses hated that freedom—not least the cost of having to buy and service their own data storage machines—and instead 40% of the internet is now stored and served from Amazon Web Services.

Similarly, advertisers have the option of placing advertisements on the myriad independently-run websites that make up the internet—known in the business as the “open web”—by placing orders through competitive ad exchanges. But advertisers have instead voted mostly to place ads on the handful of highly centralized platforms known as “walled gardens,” including Facebook, Google’s YouTube and, of course, Google Search.

The communications revolution, they say, is all about “bringing people together.” It turns out that’s true.

And that Google should win on consumer harm.

Remember the Telephone

Indeed, the same mid-20th century antitrust that thought so little of economies of scale as a defense immunized usage-based improvements when it encountered them in that most important of internet precursors: the telephone.

The telephone, like most internet services, gets better as usage increases. The more people are on a particular telephone network, the more valuable the network becomes to subscribers.

Just as with today’s internet services, the advantage of a large user base drove centralization of telephone services a century ago into the hands of a single firm: AT&T. Aside from a few business executives who liked the look of a desk full of handsets, consumers wanted one phone line that they could use to call everyone.

Although the government came close to breaking AT&T up in the early 20th century, the government eventually backed off, because a phone system in which you must subscribe to the right carrier to reach a friend just doesn’t make sense.

Instead, Congress and state legislatures stepped in to take the edge off monopoly by regulating phone pricing. And when antitrust finally did break AT&T up in 1982, it did so in a distinctly regulatory fashion, requiring that AT&T’s parts connect each other’s phone calls, something that Congress reinforced in the Telecommunications Act of 1996.

The message was clear: the sort of usage-based improvements one finds in communications are real product improvements. And antitrust can only intervene if it has a way to preserve them.

The equivalent of interconnection in search, that the benefits of usage, in the form of data and attention, be shared among competing search providers, might be feasible. But it is hard to imagine the court in the Google case ordering interconnection without the benefit of decades of regulatory experience with the defendant’s operations that the district court in 1982 could draw upon in the AT&T case.

The solution for the tech giants today is the same as the solution for AT&T a century ago: to regulate rather than to antitrust.

Microsoft Not to the Contrary, Because Users Were in Common

Parallels to the government’s 1990s-era antitrust case against Microsoft are not to the contrary.

As Sam Weinstein has pointed out to me, Microsoft, like Google, was at heart an exclusive dealing case: Microsoft contracted with computer manufacturers to prevent Netscape Navigator, an early web browser, from serving as the default web browser on Windows PCs.

That prevented Netscape, the argument went, from growing to compete with Windows in the operating system market, much the way Google’s Chrome browser has become a substitute for Windows on low-end notebook computers today.

The D.C. Circuit agreed that default status was an essential input for Netscape as it sought eventually to compete with Windows in the operating system market.

The court also accepted the argument that the exclusive dealing did not improve Microsoft’s operating system product.

This at first seems to contradict the notion that usage improves products, for, like search advertising, operating systems get better as their user bases increase. The more people use an operating system, the more application developers are willing to write for the system, and the better the system therefore becomes.

It seems to follow that keeping competitors off competing operating systems and on Windows made Windows better. If the court nevertheless held Microsoft liable, it must be because the court refused to extend antitrust immunity to usage-based improvements.

The trouble with this line of argument is that it ignores the peculiar thing about the Microsoft case: that while the government alleged that Netscape was a potential competitor of Windows, Netscape was also an application that ran on Windows.

That means that, unlike Google and rival search engines, Windows and Netscape shared users.

So, Microsoft’s exclusive dealing did not increase its user base and therefore could not have improved Windows, at least not by making Windows more appealing for applications developers. Driving Netscape from Windows did not enable developers to reach even one more user. Conversely, allowing Netscape to be the default browser on Windows would not have reduced the number of Windows users, because Netscape ran on Windows.

By contrast, a user who runs a search in Bing does not run the same search simultaneously in Google, and so Bing users are not Google users. Google’s exclusive dealing therefore increases its user base and improves Google’s product, whereas Microsoft’s exclusive dealing served only to reduce Netscape’s user base and degrade Netscape’s product.

Indeed, if letting Netscape be the default browser on Windows was a threat to Windows, it was not because it prevented Microsoft from improving its product, but because Netscape might eventually have become an operating system, and indeed a better operating system, than Windows, and consumers and developers, who could be on both at the same time if they wished, might have nevertheless chosen eventually to go with Netscape alone.

Though it does not help the government in the Google case, Microsoft still does offer a beacon of hope for those concerned about size, for Microsoft’s subsequent history reminds us that yesterday’s behemoth is often today’s also-ran.

And the favorable settlement terms Microsoft ultimately used to escape real consequences for its conduct 20 years ago imply that, at least in high-tech markets, we don’t always need antitrust for that to be true.

What is a search engine?

Dirk Auer —  21 October 2020

What is a search engine? This might seem like an innocuous question, but it lies at the heart of the US Department of Justice and state attorneys general antitrust complaint against Google, as well as the European Commission’s Google Search and Android decisions. It is also central to a report published by the UK’s Competition & Markets Authority (“CMA”). To varying degrees, all of these proceedings are premised on the assumption that Google enjoys a monopoly/dominant position over online search. But things are not quite this simple.

Despite years of competition decisions and policy discussions, there are still many unanswered questions concerning the operation of search markets. For example, it is still unclear exactly which services compete against Google Search, and how this might evolve in the near future. Likewise, there has only been limited scholarly discussion as to how a search engine monopoly would exert its market power. In other words, what does a restriction of output look like on a search platform — particularly on the user side?

Answering these questions will be essential if authorities wish to successfully bring an antitrust suit against Google for conduct involving search. Indeed, as things stand, these uncertainties greatly complicate efforts (i) to rigorously define the relevant market(s) in which Google Search operates, (ii) to identify potential anticompetitive effects, and (iii) to apply the quantitative tools that usually underpin antitrust proceedings.

In short, as explained below, antitrust authorities and other plaintiffs have their work cut out if they are to prevail in court.

Consumers demand information 

For a start, identifying the competitive constraints faced by Google presents authorities and plaintiffs with an important challenge.

Even proponents of antitrust intervention recognize that the market for search is complex. For instance, the DOJ and state AGs argue that Google dominates a narrow market for “general search services” — as opposed to specialized search services, content sites, social networks, and online marketplaces, etc. The EU Commission reached the same conclusion in its Google Search decision. Finally, commenting on the CMA’s online advertising report, Fiona Scott Morton and David Dinielli argue that: 

General search is a relevant market […]

In this way, an individual specialized search engine competes with a small fraction of what the Google search engine does, because a user could employ either for one specific type of search. The CMA concludes that, from the consumer standpoint, a specialized search engine exerts only a limited competitive constraint on Google.

(Note that the CMA stressed that it did not perform a market definition exercise: “We have not carried out a formal market definition assessment, but have instead looked at competitive constraints across the sector…”).

In other words, the above critics recognize that search engines are merely tools that can serve multiple functions, and that competitive constraints may be different for some of these. But this has wider ramifications that policymakers have so far overlooked. 

When quizzed about his involvement with Neuralink (a company working on implantable brain–machine interfaces), Elon Musk famously argued that human beings already share a near-symbiotic relationship with machines (a point already made by others):

The purpose of Neuralink [is] to create a high-bandwidth interface to the brain such that we can be symbiotic with AI. […] Because we have a bandwidth problem. You just can’t communicate through your fingers. It’s just too slow.

Commentators were quick to spot the implications of this technology for the search industry:

Imagine a world when humans would no longer require a device to search for answers on the internet, you just have to think of something and you get the answer straight in your head from the internet.

As things stand, this example still belongs to the realm of sci-fi. But it neatly illustrates a critical feature of the search industry. 

Search engines are just the latest iteration (but certainly not the last) of technology that enables human beings to access specific pieces of information more rapidly. Before the advent of online search, consumers used phone directories, paper maps, encyclopedias, and other tools to find the information they were looking for. They would read newspapers and watch television to know the weather forecast. They went to public libraries to undertake research projects (some still do), etc.

And, in some respects, the search engine is already obsolete for many of these uses. For instance, virtual assistants like Alexa, Siri, Cortana and Google’s own Google Assistant offering can perform many functions that were previously the preserve of search engines: checking the weather, finding addresses and asking for directions, looking up recipes, answering general knowledge questions, finding goods online, etc. Granted, these virtual assistants partly rely on existing search engines to complete tasks. However, Google is much less dominant in this space, and search engines are not the sole source on which virtual assistants rely to generate results. Amazon’s Alexa provides a fitting example (here and here).

Along similar lines, it has been widely reported that 60% of online shoppers start their search on Amazon, while only 26% opt for Google Search. In other words, Amazon’s ability to rapidly show users the product they are looking for somewhat alleviates the need for a general search engine. In turn, this certainly constrains Google’s behavior to some extent. And much of the same applies to other websites that provide a specific type of content (think of Twitter, LinkedIn, Tripadvisor, etc.).

Finally, it is also revealing that the most common searches on Google are, in all likelihood, made to reach other websites — a function for which competition is literally endless.

The upshot is that Google Search and other search engines perform a bundle of functions. Most of these can be done via alternative means, and this will increasingly be the case as technology continues to advance. 

This is all the more important given that the vast majority of search engine revenue derives from roughly 30 percent of search terms (notably those that are linked to product searches). The remaining search terms are effectively a loss leader. And these profitable searches also happen to be those where competition from alternative means is, in all likelihood, the strongest (this includes competition from online retail platforms and online travel agents like Kayak, but also from referral sites, direct marketing, and offline sources). In turn, this undermines US plaintiffs’ claims that Google faces little competition from rivals like Amazon, because they don’t compete for the entirety of Google’s search results (in other words, Google might face strong competition for the most valuable ads):

108. […] This market share understates Google’s market power in search advertising because many search-advertising competitors offer only specialized search ads and thus compete with Google only in a limited portion of the market. 

Critics might mistakenly take the above for an argument that Google has no market power because competition is “just a click away”. But the point is more subtle, and has important implications as far as market definition is concerned.

Authorities should not define the search market by arguing that no other rival is quite like Google (or one of its rivals) — as the DOJ and state AGs did in their complaint:

90. Other search tools, platforms, and sources of information are not reasonable substitutes for general search services. Offline and online resources, such as books, publisher websites, social media platforms, and specialized search providers such as Amazon, Expedia, or Yelp, do not offer consumers the same breadth of information or convenience. These resources are not “one-stop shops” and cannot respond to all types of consumer queries, particularly navigational queries. Few consumers would find alternative sources a suitable substitute for general search services. Thus, there are no reasonable substitutes for general search services, and a general search service monopolist would be able to maintain quality below the level that would prevail in a competitive market. 

And as the EU Commission did in the Google Search decision:

(162) For the reasons set out below, there is, however, limited demand side substitutability between general search services and other online services. […]

(163) There is limited substitutability between general search services and content sites. […]

(166) There is also limited substitutability between general search services and specialised search services. […]

(178) There is also limited substitutability between general search services and social networking sites.

Ad absurdum, if consumers suddenly decided to access information via other means, Google could be the only firm to provide general search results and yet have absolutely no market power. 

Take the example of Yahoo: Despite arguably remaining the most successful “web directory”, it likely lost any market power that it had when Google launched a superior — and significantly more successful — type of search engine. Google Search may not have provided a complete, literal directory of the web (as did Yahoo), but it offered users faster access to the information they wanted. In short, the Yahoo example shows that being unique is not equivalent to having market power. Accordingly, any market definition exercise that merely focuses on the idiosyncrasies of firms is likely to overstate their actual market power. 

Given the above, the question that authorities should ask is thus whether Google Search (or another search engine) performs so many unique functions that it may be in a position to restrict output. So far, no one appears to have convincingly answered this question.

Similar uncertainties surround the question of how a search engine might restrict output, especially on the user side of the search market. Accordingly, authorities will struggle to produce evidence that (i) Google has market power, especially on the user side of the market, and (ii) its behavior has anticompetitive effects.

Consider the following:

The SSNIP test (which is the standard method of defining markets in antitrust proceedings) is inapplicable to the consumer side of search platforms. Indeed, it is simply impossible to apply a hypothetical 10% price increase to goods that are given away for free.

This raises a deeper question: how would a search engine exercise its market power? 

For a start, it seems unlikely that it would start charging fees to its users. For instance, empirical research pertaining to the magazine industry (also an ad-based two-sided market) suggests that increased concentration does not lead to higher magazine prices. Minjae Song notably finds that:

Taking the advantage of having structural models for both sides, I calculate equilibrium outcomes for hypothetical ownership structures. Results show that when the market becomes more concentrated, copy prices do not necessarily increase as magazines try to attract more readers.

It is also far from certain that a dominant search engine would necessarily increase the number of adverts it displays. To the contrary, market power on the advertising side of the platform might lead search engines to decrease the number of advertising slots that are available (i.e., reducing advertising output), thus showing fewer adverts to users.

Finally, it is not obvious that market power would lead search engines to significantly degrade their product (as this could ultimately hurt ad revenue). For example, empirical research by Avi Goldfarb and Catherine Tucker suggests that there is some limit to the type of adverts that search engines could profitably impose upon consumers. They notably find that ads that are both obtrusive and targeted decrease subsequent purchases:

Ads that match both website content and are obtrusive do worse at increasing purchase intent than ads that do only one or the other. This failure appears to be related to privacy concerns: the negative effect of combining targeting with obtrusiveness is strongest for people who refuse to give their income and for categories where privacy matters most.

The preceding paragraphs find some support in the theoretical literature on two-sided markets, which suggests that competition on the user side of search engines is likely to be particularly intense and beneficial to consumers (because users are more likely to single-home than advertisers, and because each additional user creates a positive externality on the advertising side of the market). For instance, Jean-Charles Rochet and Jean Tirole find that:

The single-homing side receives a large share of the joint surplus, while the multi-homing one receives a small share.

This is just a restatement of Mark Armstrong’s “competitive bottlenecks” theory:

Here, if it wishes to interact with an agent on the single-homing side, the multi-homing side has no choice but to deal with that agent’s chosen platform. Thus, platforms have monopoly power over providing access to their single-homing customers for the multi-homing side. This monopoly power naturally leads to high prices being charged to the multi-homing side, and there will be too few agents on this side being served from a social point of view (Proposition 4). By contrast, platforms do have to compete for the single-homing agents, and high profits generated from the multi-homing side are to a large extent passed on to the single-homing side in the form of low prices (or even zero prices).
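
Armstrong’s pass-through logic can be illustrated with a deliberately stylized toy calculation. All of the numbers and the zero-profit assumption below are invented for illustration; this is a sketch of the intuition, not Armstrong’s formal model:

```python
# Stylized "competitive bottleneck" toy: advertisers multi-home, users
# single-home. All figures are invented for illustration only.
ad_value_per_user = 5.0   # assumed value to advertisers of reaching one user
cost_per_user = 2.0       # assumed platform cost of serving one user

# Each platform is the only gateway to its own single-homing users, so it can
# charge advertisers up to the full value of access to each user:
ad_revenue_per_user = ad_value_per_user

# Platforms must compete head-to-head for single-homing users, so the
# user-side price is bid down until per-user profit
# (user_price + ad_revenue_per_user - cost_per_user) reaches zero:
user_price = cost_per_user - ad_revenue_per_user

# A negative "price" here stands in for the zero prices and free services
# that users actually receive in practice.
print(user_price)
```

The point of the toy is simply that the monopoly rents earned from the multi-homing (advertiser) side are competed away on the single-homing (user) side, which is why user-side prices in search sit at or effectively below zero.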

All of this is not to suggest that Google Search has no market power, or that monopoly is necessarily less problematic in the search engine industry than in other markets. 

Instead, the argument is that analyzing competition on the user side of search platforms is unlikely to yield dispositive evidence of market power or anticompetitive effects. This is because market power is hard to measure on this side of the market, and because even a monopoly platform might not significantly restrict output on the user side.

That might explain why the DOJ and state AGs’ analysis of anticompetitive effects is so limited. Take the following paragraph (provided without further supporting evidence):

167. By restricting competition in general search services, Google’s conduct has harmed consumers by reducing the quality of general search services (including dimensions such as privacy, data protection, and use of consumer data), lessening choice in general search services, and impeding innovation. 

Given these inherent difficulties, antitrust investigators would do better to focus on the side of those platforms where mainstream IO tools are much easier to apply and where a dominant search engine would likely restrict output: the advertising market. Not only is it the market where search engines are most likely to exert their market power (thus creating a deadweight loss), but — because it involves monetary transactions — this side of the market lends itself to the application of traditional antitrust tools.  

Looking at the right side of the market

Finally, and unfortunately for Google’s critics, available evidence suggests that its position on the (online) advertising market might not meet the requirements necessary to bring a monopolization case (at least in the US).

For a start, online advertising appears to exhibit the prima facie signs of a competitive market. As Geoffrey Manne, Sam Bowman and Eric Fruits have argued:

Over the past decade, the price of advertising has fallen steadily while output has risen. Spending on digital advertising in the US grew from $26 billion in 2010 to nearly $130 billion in 2019, an average increase of 20% a year. Over the same period the Producer Price Index for Internet advertising sales declined by nearly 40%. The rising spending in the face of falling prices indicates the number of ads bought and sold increased by approximately 27% a year. Since 2000, advertising spending has been falling as a share of GDP, with online advertising growing as a share of that. The combination of increasing quantity, decreasing cost, and increasing total revenues are consistent with a growing and increasingly competitive market.
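
The arithmetic in the quoted passage can be verified with a quick back-of-the-envelope calculation. The 2010–2019 nine-year window and constant annual compounding are assumptions; the dollar and index figures come from the quote, not independent data:

```python
# Back-of-the-envelope check of the quoted advertising-market figures.
spend_2010 = 26e9    # US digital ad spend, 2010 (from the quote)
spend_2019 = 130e9   # US digital ad spend, 2019 (from the quote)
years = 9            # assumed 2010-2019 compounding window

# Average annual growth in spending: about 20% a year, as the quote says.
spend_cagr = (spend_2019 / spend_2010) ** (1 / years) - 1

# A ~40% decline in the Internet-advertising PPI over the same window
# implies prices fell roughly 5.5% a year.
price_cagr = 0.60 ** (1 / years) - 1

# Since spend = price x quantity, quantity growth is the ratio of the two:
# roughly 27% a year, matching the quote's figure.
quantity_cagr = (1 + spend_cagr) / (1 + price_cagr) - 1

print(f"spend: {spend_cagr:.1%}/yr, price: {price_cagr:.1%}/yr, "
      f"quantity: {quantity_cagr:.1%}/yr")
```

Rising quantity alongside falling prices is the combination the authors read as evidence of a competitive, expanding market rather than a monopolized one.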

Second, empirical research suggests that the market might need to be widened to include offline advertising. For instance, Avi Goldfarb and Catherine Tucker show that there can be important substitution effects between online and offline advertising channels:

Using data on the advertising prices paid by lawyers for 139 Google search terms in 195 locations, we exploit a natural experiment in “ambulance-chaser” regulations across states. When lawyers cannot contact clients by mail, advertising prices per click for search engine advertisements are 5%–7% higher. Therefore, online advertising substitutes for offline advertising.

Of course, a careful examination of the advertising industry could also lead authorities to define a narrower relevant market. For example, the DOJ and state AGs’ complaint argued that Google dominated the “search advertising” market:

97. Search advertising in the United States is a relevant antitrust market. The search advertising market consists of all types of ads generated in response to online search queries, including general search text ads (offered by general search engines such as Google and Bing) […] and other, specialized search ads (offered by general search engines and specialized search providers such as Amazon, Expedia, or Yelp). 

Likewise, the European Commission concluded that Google dominated the market for “online search advertising” in the AdSense case (though the full decision has not yet been made public). Finally, the CMA’s online platforms report found that display and search advertising belonged to separate markets. 

But these are empirical questions that could dispositively be answered by applying traditional antitrust tools, such as the SSNIP test. And yet, there is no indication that the authorities behind the US complaint undertook this type of empirical analysis (and until its AdSense decision is made public, it is not clear that the EU Commission did so either). Accordingly, there is no guarantee that US courts will go along with the DOJ and state AGs’ findings.
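
In practice, a SSNIP exercise is typically paired with a critical-loss calculation. A minimal sketch of that mechanic follows; the margin and diversion figures are assumed for illustration, not drawn from the Google case:

```python
def critical_loss(price_rise: float, margin: float) -> float:
    """Standard critical-loss formula used alongside a SSNIP: a hypothetical
    monopolist's price rise is unprofitable if the fraction of sales lost
    exceeds price_rise / (price_rise + margin)."""
    return price_rise / (price_rise + margin)

# Illustrative numbers only: with a 10% SSNIP and an assumed 40% margin, a
# candidate "search advertising" market would be too narrow if more than 20%
# of ad sales diverted to channels outside it (e.g., Amazon, display ads, or
# offline media).
print(critical_loss(0.10, 0.40))  # 0.2
```

Comparing that critical loss against an empirical estimate of actual diversion is precisely the kind of analysis the complaint does not appear to contain.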

In short, it is far from certain that Google currently enjoys an advertising monopoly, especially if the market is defined more broadly than that for “search advertising” (or the even narrower market for “General Search Text Advertising”). 

Concluding remarks

The preceding paragraphs have argued that a successful antitrust case against Google is anything but a foregone conclusion. In order to successfully bring a suit, authorities would notably need to figure out just what market it is that Google is monopolizing. In turn, that would require a finer understanding of what competition, and monopoly, look like in the search and advertising industries.

The Antitrust Division of the U.S. Department of Justice (DOJ) ignored sound law and economics principles in its August 4 decision announcing a new interpretation of seventy-five-year-old music licensing consent decrees it had entered into separately with the two major American “performing rights organizations” (PROs) — the American Society of Composers, Authors, and Publishers (ASCAP) and Broadcast Music, Inc. (BMI).  It also acted in a manner at odds with international practice.  DOJ should promptly rescind its new interpretation and restore the welfare-enhancing licensing flexibility that ASCAP and BMI previously enjoyed.  If DOJ fails to do this, the court overseeing the decrees or Congress should be prepared to act.


ASCAP and BMI contract with music copyright holders to act as intermediaries that provide “blanket” licenses to music users (e.g., television and radio stations, bars, and internet music distributors) for use of their full copyrighted musical repertoires, without the need for song-specific licensing negotiations.  This greatly reduces the transactions costs of arranging for the playing of musical works, benefiting music users, the listening public, and copyright owners (all of whom are assured of at least some compensation for their endeavors).  ASCAP and BMI are big businesses, with each PRO holding licenses to over ten million works and accounting for roughly 45 percent of the domestic music licensing market (ninety percent combined).

Because both ASCAP and BMI pool copyrighted songs that could otherwise compete with each other, and both grant users a single-price “blanket license” conveying the rights to play their full set of copyrighted works, the two organizations could be seen as restricting competition among copyrighted works and fixing the prices of copyrighted substitutes – raising serious questions under section 1 of the Sherman Antitrust Act, which condemns contracts that unreasonably restrain trade.  This led the DOJ to bring antitrust suits against ASCAP and BMI over eighty years ago, which were settled by separate judicially-filed consent decrees in 1941.  The decrees imposed a variety of limitations on the two PROs’ licensing practices, aimed at preventing ASCAP and BMI from exercising anticompetitive market power (such as the setting of excessive licensing rates).  The decrees were amended twice over the years, most recently in 2001, to take account of changing market conditions.

The U.S. Supreme Court noted the constraining effect of the decrees in BMI v. CBS (1979), in ruling that the BMI and ASCAP blanket licenses did not constitute per se illegal price fixing.  The Court held, rather, that the licenses should be evaluated on a case-by-case basis under the antitrust “rule of reason,” since the licenses inherently generated great efficiency benefits (“the immediate use of covered compositions, without the delay of prior individual negotiations”) that had to be weighed against potential anticompetitive harms.

The August 4, 2016 DOJ Consent Decree Interpretation

Fast forward to 2014, when DOJ undertook a new review of the ASCAP and BMI decrees and requested the submission of public comments to aid it in its deliberations. This review came to an official conclusion two years later, on August 4, 2016, when DOJ decided not to amend the decrees but announced a decree interpretation that limits ASCAP’s and BMI’s flexibility. Specifically, DOJ stated that the decrees needed to be “more consistently applied.” By this, DOJ meant that BMI and ASCAP should only grant blanket licenses covering all of the rights to 100 percent of the works in the PROs’ respective catalogs, not licenses that cover only partial interests in those works. DOJ stated:

Only full-work licensing can yield the substantial procompetitive benefits associated with blanket licenses that distinguish ASCAP’s and BMI’s activities from other agreements among competitors that present serious issues under the antitrust laws.

The New DOJ Interpretation Is Bad as a Matter of Policy

DOJ’s August 4 interpretation rejects industry practice. Under it, ASCAP and BMI will only be able to offer licenses covering all of the copyright interests in a musical composition, even when the composition is a joint work with multiple fractional owners. For example, consider a band of five composer-musicians, each of whom holds a fractional interest in the copyright covering the band’s new album, a joint work. Previously, each musician could offer a partial interest in the joint work to a performing rights organization, reflecting his or her share of the total copyright interest in the work. The organization could offer a partial license, and a user could aggregate different partial licenses in order to cover the whole joint work.
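The aggregation mechanics can be sketched concretely. The shares and PRO names below are hypothetical, purely for illustration of how a user could stack partial licenses from different organizations to cover a joint work:

```python
# Hypothetical fractional copyright interests in one joint work, as
# registered with different (illustrative, made-up) PROs. In this
# example, band members split the interest 40/40/20.
fractional_licenses = {
    "PRO_A": 0.40,  # two band members affiliated with PRO_A
    "PRO_B": 0.40,  # two members affiliated with PRO_B
    "PRO_C": 0.20,  # one member affiliated with PRO_C
}

def covers_full_work(licenses):
    """A user's aggregated partial licenses cover the joint work once
    the licensed shares sum to 100 percent of the copyright interest."""
    return abs(sum(licenses.values()) - 1.0) < 1e-9

# Under the pre-2016 practice, the user aggregates partial licenses
# from each PRO and thereby clears the whole work:
assert covers_full_work(fractional_licenses)

# Under DOJ's 100-percent-licensing interpretation, a PRO holding only
# a 40 percent interest could not license the work at all, so this
# aggregation path would be foreclosed.
assert not covers_full_work({"PRO_A": 0.40})
```

The point of the sketch is simply that fractional licensing lets value flow through multiple intermediaries, whereas the full-work rule makes each PRO's license an all-or-nothing proposition.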

Now, however, under DOJ’s new interpretation, BMI and ASCAP will be prevented from offering partial licenses to that work to users. This may deny the band’s individual members the opportunity to deal profitably with BMI and ASCAP, thereby undermining their ability to receive fair compensation.  As the two PROs have noted, this approach “will cause unnecessary chaos in the marketplace and place unfair financial burdens and creative constraints on songwriters and composers.”  According to ASCAP President Paul Williams, “It is as if the DOJ saw songwriters struggling to stay afloat in a sea of outdated regulations and decided to hand us an anchor, in the form of 100 percent licensing, instead of a life preserver.”  Furthermore, the president and CEO of BMI, Mike O’Neill, stated:  “We believe the DOJ’s interpretation benefits no one – not BMI or ASCAP, not the music publishers, and not the music users – but we are most sensitive to the impact this could have on you, our songwriters and composers.”  These views are bolstered by a January 2016 U.S. Copyright Office report, which concluded that “an interpretation of the consent decrees that would require 100-percent licensing or removal of a work from the ASCAP or BMI repertoire would appear to be fraught with legal and logistical problems, and might well result in a sharp decrease in repertoire available through these [performance rights organizations’] blanket licenses.”  Regrettably, during the decree review period, DOJ ignored the expert opinion of the Copyright Office, as well as the public record comments of numerous publishers and artists (see here, for example) indicating that a 100 percent licensing requirement would depress returns to copyright owners and undermine the creative music industry.

Most fundamentally, DOJ’s new interpretation of the BMI and ASCAP consent decrees abridges economic freedom. It further limits the flexibility of music copyright holders and music users to contract with intermediaries to promote the efficient distribution of music performance rights, in a manner that benefits the listening public while allowing creative artists sufficient compensation for their efforts. DOJ made no compelling showing that a new consent decree constraint (the 100 percent licensing requirement) is needed to promote competition. Far from promoting competition, DOJ’s new interpretation undermines it. DOJ micromanagement of copyright licensing by consent decree reinterpretation is a costly new regulatory initiative that reflects a lack of appreciation for intellectual property rights, which incentivize innovation. In short, DOJ’s latest interpretation of the ASCAP and BMI decrees is terrible policy.

The New DOJ Interpretation Is Bad as a Matter of Law

DOJ’s new interpretation is not only bad policy; it is also inconsistent with sound textual construction of the decrees themselves. As counsel for BMI explained in an August 4 federal court filing (in the Southern District of New York, which oversees the decrees), the BMI decree (and therefore the analogous ASCAP decree as well) does not expressly require 100 percent licensing and does not unambiguously prohibit fractional licensing. Accordingly, since a consent decree is an injunction, and any activity not expressly required or prohibited thereunder is permitted, fractional shares licensing should be authorized. DOJ’s new interpretation ignores this principle. It also is at odds with a report of the U.S. Copyright Office that concluded the BMI consent decree “must be understood to include partial interests in musical works.” Furthermore, the new interpretation is belied by the fact that the PRO licensing market has developed and functioned efficiently for decades by pricing, collecting, and distributing fees for royalties on a fractional basis. Courts view such evidence of trade practice and custom as relevant in determining the meaning of a consent decree.


The New DOJ Interpretation Runs Counter to International Norms

Finally, according to Gadi Oron, Director General of the International Confederation of Societies of Authors and Composers (CISAC), a Paris-based organization that brings together 239 rights societies from 123 countries, including ASCAP, BMI, and SESAC, adoption of the new interpretation would depart from international norms in the music licensing industry and have disruptive international effects:

It is clear that the DoJ’s decisions have been made without taking the interests of creators, neither American nor international, into account. It is also clear that they were made with total disregard for the international framework, where fractional licensing is practiced, even if it’s less of a factor because many countries only have one performance rights organization representing songwriters in their territory. International copyright laws grant songwriters exclusive rights, giving them the power to decide who will license their rights in each territory and it is these rights that underpin the landscape in which authors’ societies operate. The international system of collective management of rights, which is based on reciprocal representation agreements and founded on the freedom of choice of the rights holder, would be negatively affected by such level of government intervention, at a time when it needs support more than ever.


In sum, DOJ should take account of these concerns and retract its new interpretation of the ASCAP and BMI consent decrees, restoring the status quo ante.  If it fails to do so, a federal court should be prepared to act, and, if necessary, Congress should seriously consider appropriate corrective legislation.

Last week concluded round 3 of Congressional hearings on mergers in the healthcare provider and health insurance markets. Much like the previous rounds, the hearing saw predictable representatives, of predictable constituencies, saying predictable things.

The pattern is pretty clear: The American Hospital Association (AHA) makes the case that mergers in the provider market are good for consumers, while mergers in the health insurance market are bad. A scholar or two decries all consolidation in both markets. Another interested group, like maybe the American Medical Association (AMA), also criticizes the mergers. And it’s usually left to a representative of the insurance industry, typically one or more of the merging parties themselves, or perhaps a scholar from a free market think tank, to defend the merger.

Lurking behind the public and politicized airings of these mergers, and especially the pending Anthem/Cigna and Aetna/Humana health insurance mergers, is the Affordable Care Act (ACA). Unfortunately, the partisan politics surrounding the ACA, particularly during this election season, may be trumping the sensible economic analysis of the competitive effects of these mergers.

In particular, the partisan assessments of the ACA’s effect on the marketplace have greatly colored the Congressional (mis-)understandings of the competitive consequences of the mergers.  

Witness testimony and questions from members of Congress at the hearings suggest that there is widespread agreement that the ACA is encouraging increased consolidation in healthcare provider markets, for example, but there is nothing approaching unanimity of opinion in Congress or among interested parties regarding what, if anything, to do about it. Congressional Democrats, for their part, have insisted that stepped up vigilance, particularly of health insurance mergers, is required to ensure that continued competition in health insurance markets isn’t undermined, and that the realization of the ACA’s objectives in the provider market aren’t undermined by insurance companies engaging in anticompetitive conduct. Meanwhile, Congressional Republicans have generally been inclined to imply (or outright state) that increased concentration is bad, so that they can blame increasing concentration and any lack of competition on the increased regulatory costs or other effects of the ACA. Both sides appear to be missing the greater complexities of the story, however.

While the ACA may be creating certain impediments in the health insurance market, it’s also creating some opportunities for increased health insurance competition, and implementing provisions that should serve to hold down prices. Furthermore, even if the ACA is encouraging more concentration, those increases in concentration can’t be assumed to be anticompetitive. Mergers may very well be the best way for insurers to provide benefits to consumers in a post-ACA world — that is, the world we live in. The ACA may have plenty of negative outcomes, and there may be reasons to attack the ACA itself, but there is no reason to assume that any increased concentration it may bring about is a bad thing.

Asking the right questions about the ACA

We don’t need more self-serving and/or politicized testimony. We need instead to apply an economic framework to the competition issues arising from these mergers in order to understand their actual, likely effects on the health insurance marketplace we have. This framework has to answer questions like:

  • How do we understand the effects of the ACA on the marketplace?
    • In what ways does the ACA require us to alter our understanding of the competitive environment in which health insurance and healthcare are offered?
    • Does the ACA promote concentration in health insurance markets?
    • If so, is that a bad thing?
  • Do efficiencies arise from increased integration in the healthcare provider market?
  • Do efficiencies arise from increased integration in the health insurance market?
  • How do state regulatory regimes affect the understanding of what markets are at issue, and what competitive effects are likely, for antitrust analysis?
  • What are the potential competitive effects of increased concentration in the health care markets?
  • Does increased health insurance market concentration exacerbate or counteract those effects?

Beginning with this post, at least a few of us here at TOTM will take on some of these issues, as part of a blog series aimed at better understanding the antitrust law and economics of the pending health insurance mergers.

Today, we will focus on the ambiguous competitive implications of the ACA. Although not a comprehensive analysis, in this post we will discuss some key insights into how the ACA’s regulations and subsidies should inform our assessment of the competitiveness of the healthcare industry as a whole, and the antitrust review of health insurance mergers in particular.

The ambiguous effects of the ACA

It’s an understatement to say that the ACA is an issue of great political controversy. While many Democrats argue that it has been nothing but a boon to consumers, Republicans usually have nothing good to say about the law’s effects. But both sides miss important but ambiguous effects of the law on the healthcare industry. And because they miss (or disregard) this ambiguity for political reasons, they risk seriously misunderstanding the legal and economic implications of the ACA for healthcare industry mergers.

To begin with, there are substantial negative effects, of course. Requiring insurance companies to accept patients with pre-existing conditions reduces the ability of insurance companies to manage risk. This has led to upward pricing pressure for premiums. While the mandate to buy insurance was supposed to help bring more young, healthy people into the risk pool, so far the projected signups haven’t been realized.

The ACA’s redefinition of what counts as an acceptable insurance policy has also caused many consumers to lose the policy of their choice. And the ACA’s many regulations, such as the Medical Loss Ratio requirement that insurance companies spend at least 80% of premiums on healthcare, have squeezed the profit margins of many insurance companies, leading, in some cases, to exit from the marketplace altogether and, in others, to a reduction of new marketplace entry or competition in other submarkets.
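The margin squeeze from the loss-ratio rule is simple arithmetic. A minimal sketch, using hypothetical figures (the $100M premium number is purely illustrative; the 80 percent floor applies to ACA individual and small-group plans, with 85 percent for large groups):

```python
# Hypothetical annual premium revenue for a small insurer (illustrative).
premiums = 100_000_000  # $100M

# ACA Medical Loss Ratio floor for individual/small-group plans.
mlr_floor = 0.80

# At least 80% of premiums must go to claims and quality improvement...
minimum_medical_spend = premiums * mlr_floor

# ...leaving at most 20% for administration, marketing, and profit.
max_admin_and_margin = premiums - minimum_medical_spend

print(minimum_medical_spend)  # 80000000.0
print(max_admin_and_margin)   # 20000000.0
```

Whatever one thinks of the policy, the cap means an insurer's only real levers for margin are scale economies in that residual 20 percent, which is one (hedged) reason consolidation may be a rational response rather than evidence of anticompetitive intent.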

On the other hand, there may be benefits from the ACA. While many insurers participated in private exchanges even before the ACA-mandated health insurance exchanges, the increased consumer education from the government’s efforts may have helped enrollment even in private exchanges, and may also have helped to keep premiums from increasing as much as they would have otherwise. At the same time, the increased subsidies for individuals have helped lower-income people afford those premiums. Some have even argued that increased participation in the on-demand economy can be linked to the ability of individuals to buy health insurance directly. On top of that, there has been some entry into certain health insurance submarkets due to lower barriers to entry (because there is less need for agents to sell in a new market with the online exchanges). And the changes in how Medicare pays, with a greater focus on outcomes rather than services provided, have led to the adoption of value-based pricing by both health care providers and health insurance companies.

Further, some of the ACA’s effects have  decidedly ambiguous consequences for healthcare and health insurance markets. On the one hand, for example, the ACA’s compensation rules have encouraged consolidation among healthcare providers, as noted. One reason for this is that the government gives higher payments for Medicare services delivered by a hospital versus an independent doctor. Similarly, increased regulatory burdens have led to higher compliance costs and more consolidation as providers attempt to economize on those costs. All of this has happened perhaps to the detriment of doctors (and/or patients) who wanted to remain independent from hospitals and larger health network systems, and, as a result, has generally raised costs for payors like insurers and governments.

But much of this consolidation has also arguably led to increased efficiency and greater benefits for consumers. For instance, the integration of healthcare networks leads to increased sharing of health information and better analytics, better care for patients, reduced overhead costs, and other efficiencies. Ultimately these should translate into higher quality care for patients. And to the extent that they do, they should also translate into lower costs for insurers and lower premiums — provided health insurers are not prevented from obtaining sufficient bargaining power to impose pricing discipline on healthcare providers.

In other words, both the AHA and AMA could be right as to different aspects of the ACA’s effects.

Understanding mergers within the regulatory environment

But what they can’t say is that increased consolidation per se is clearly problematic, nor that, even if it is correlated with sub-optimal outcomes, it is consolidation causing those outcomes, rather than something else (like the ACA) that is causing both the sub-optimal outcomes as well as consolidation.

In fact, it may well be the case that increased consolidation improves overall outcomes in healthcare provider and health insurance markets relative to what would happen under the ACA absent consolidation. For Congressional Democrats and others interested in bolstering the ACA and offering the best possible outcomes for consumers, reflexively challenging health insurance mergers because consolidation is “bad,” may be undermining both of these objectives.

Meanwhile, and for the same reasons, Congressional Republicans who decry Obamacare should be careful that they do not likewise condemn mergers under what amounts to a “big is bad” theory that is inconsistent with the rigorous law and economics approach that they otherwise generally support. To the extent that the true target is not health insurance industry consolidation, but rather underlying regulatory changes that have encouraged that consolidation, scoring political points by impugning mergers threatens both health insurance consumers in the short run, as well as consumers throughout the economy in the long run (by undermining the well-established economic critiques of a reflexive “big is bad” response).

It is simply not clear that ACA-induced health insurance mergers are likely to be anticompetitive. In fact, because the ACA builds on state regulation of insurance providers, requiring greater transparency and regulatory review of pricing and coverage terms, it seems unlikely that health insurers would be free to engage in anticompetitive price increases or reduced coverage that could harm consumers.

On the contrary, the managerial and transactional efficiencies from the proposed mergers, combined with greater bargaining power against now-larger providers, are likely to lead to both better quality care and cost savings passed on to consumers. Increased entry, at least in part due to the ACA in most of the markets in which the merging companies will compete, along with integrated health networks themselves entering and threatening entry into insurance markets, will almost certainly lead to more consumer cost savings. In the current regulatory environment created by the ACA, in other words, insurance mergers have considerable upside potential, with little downside risk.


In sum, regardless of what one thinks about the ACA and its likely effects on consumers, it is not clear that health insurance mergers, especially in a post-ACA world, will be harmful.

Rather, assessing the likely competitive effects of health insurance mergers entails consideration of many complicated (and, unfortunately, politicized) issues. In future blog posts we will discuss (among other things): the proper treatment of efficiencies arising from health insurance mergers, the appropriate geographic and product markets for health insurance merger reviews, the role of state regulations in assessing likely competitive effects, and the strengths and weaknesses of arguments for potential competitive harms arising from the mergers.

Trial begins today in the Southern District of New York in United States v. Apple (the Apple e-books case), which I discussed previously here. Along with co-author Will Rinehart, I also contributed an essay to a discussion of the case in Concurrences (alongside contributions from Jon Jacobson and Mark Powell, among others).

Much of my writing on the case has essentially addressed it as a rule of reason case, assessing the economic merits of Apple’s contract terms. And as I mention in this Reuters article from yesterday on the case, one of the key issues in this analysis (and one of the government’s key targets in the case) is the use of MFN clauses.

But as Josh pointed out in a blog post last year,

my hunch is that if the case is litigated its legacy will be as an “agreement” case rather than what it contributes to rule of reason analysis.  In other words, if Apple gets to the rule of reason, the DOJ (like most plaintiffs in rule of reason cases) are likely to lose — especially in light of at least preliminary evidence of dramatic increases in output.  The critical question — I suspect — will be about proof of an actual naked price fixing agreement among publishers and Apple, and as a legal matter, what evidence is sufficient to establish that agreement for the purposes of Section 1 of the Sherman Act.

He’s likely correct, of course, that a central question at trial will be whether or not this is a per se or rule of reason case, and that trial will focus in significant part on the sufficiency of the evidence of agreement. But because this determination will turn considerably on the purpose and function of the MFN and price cap terms in Apple’s agreements with the publishers, I don’t think there should (or will) be much difference. Nor do I think the government should (or will) win.

Before the court can apply the per se rule, it must satisfy itself that the conduct at issue “would always or almost always tend to restrict competition and decrease output.” But it is not true as a matter of economics — and certainly not true as a matter of law — that MFNs meet this standard.

After State Oil v. Kahn there can be no question about the rule of reason (if not per se legal) status of price caps. And as the Court noted in Leegin:

Resort to per se rules is confined to restraints, like those mentioned, “that would always or almost always tend to restrict competition and decrease output.” To justify a per se prohibition a restraint must have “manifestly anticompetitive” effects, and “lack any redeeming virtue.”

As a consequence, the per se rule is appropriate only after courts have had considerable experience with the type of restraint at issue, and only if courts can predict with confidence that it would be invalidated in all or almost all instances under the rule of reason. It should come as no surprise, then, that “we have expressed reluctance to adopt per se rules with regard to restraints imposed in the context of business relationships where the economic impact of certain practices is not immediately obvious.” And, as we have stated, a “departure from the rule-of-reason standard must be based upon demonstrable economic effect rather than . . . upon formalistic line drawing.”

After Leegin, all vertical non-price restraints, including MFNs, are assessed under the rule of reason.  Courts neither have “considerable experience” with MFNs, nor can they remotely “predict with confidence that they would be invalidated in all or almost all instances under the rule of reason.” As a recent article in Antitrust points out,

The DOJ and FTC have brought approximately ten cases over the last two decades challenging MFNs. Most of these cases involved the health care industry and all were resolved by consent judgments.

Even if the court does take a harder look at whether a per se rule should govern, however, as a practical matter there is not likely to be much difference between a “does this merit per se treatment” analysis and analysis of the facts under the rule of reason. As the Court pointed out in California Dental Association,

The truth is that our categories of analysis of anticompetitive effect are less fixed than terms like “per se,” “quick look,” and “rule of reason” tend to make them appear. We have recognized, for example, that “there is often no bright line separating per se from Rule of Reason analysis,” since “considerable inquiry into market conditions” may be required before the application of any so-called “per se” condemnation is justified. “[W]hether the ultimate finding is the product of a presumption or actual market analysis, the essential inquiry remains the same–whether or not the challenged restraint enhances competition.”

And as my former classmate Tom Nachbar points out in a recent article,

it’s hard to identify much relative simplicity in the per se rule. Indeed, the moniker “per se” has become somewhat misleading, as cases determining whether to apply the per se or rule of reason become as long as ones actually applying the rule of reason itself.

Of course that doesn’t end the analysis, and the government’s filings do all they can to sidestep the direct antitrust treatment of MFNs and instead assert that they (and other evidence alleged) permit the court to infer Apple’s participation as the coordinator of a horizontal price-fixing conspiracy among the publishers.

But as Apple argues in its filings,

The[ relevant] cases mandate an inquiry into the possibility that the challenged contract terms and negotiation approach were in Apple’s independent economic interests. The evidence is overwhelming—not just possible—that Apple acted for its own valid business reasons and not to “raise consumer prices market-wide.”…Plaintiffs ask this Court to infer Apple’s participation in a conspiracy from (1) its MFN and price cap terms and (2) negotiations with publishers.

* * *

What is obvious, however, is that Apple has not fixed prices with its competitors. What is remarkable is that the government seeks to impose grave legal consequences on an inherently pro-competitive act—entry—accomplished via agency, an MFN, and price caps, none of which is per se unlawful.

The government’s strenuous objection to Apple’s interpretation of the controlling Supreme Court authority, Monsanto v. Spray-Rite, notwithstanding, it’s difficult to see the MFN clauses as evidence of Apple’s participation in the publishers’ alleged conspiracy.

An important point supporting Apple’s argument here is that, unlike the “hubs” in the other “hub and spoke” conspiracies on which the DOJ bases its case, Apple has no significant leverage over the alleged co-conspirators, and thus no power to coordinate — let alone enforce — a price-fixing scheme. As Apple argues in its Opposition brief,

The only “power” Apple could wield over the publishers was the attractiveness of a business opportunity—hardly the “make or break” scenarios found in Interstate Circuit and [Toys-R-Us]. Far from capitulating to Apple’s requested core business terms, the publishers fought Apple tooth and nail and negotiated intensely to the very end, and the largest, Random House, declined.

And as Will and I note in our Concurrences article,

MFNs are essentially an important way of…offering some protection against publishers striking a deal with a competitor that leaves Apple forced to price its ebooks out of the market.

There is nothing, that we know of, in the MFNs or elsewhere in the agreements that requires the publishers to impose higher resale prices elsewhere, or prevents the publishers from selling through Apple at a lower price, if necessary. Most important, for Apple’s negotiated prices to dominate in the market it would have to enjoy market power – a condition, currently at least, that is exceedingly unlikely given its 10% share of the ebook market.

The point is that, even if everything the government alleges about the publishers’ price fixing scheme were true, it’s extremely difficult to see Apple as a co-conspirator in such a scheme. The Supreme Court’s holding in Monsanto stands for nothing if not the principle that courts may not infer a vertical party’s participation in a horizontal price-fixing scheme from the existence of otherwise-legal and -defensible interactions between the vertically related parties. Because MFNs have valid purposes outside the realm of price-fixing, they may not be converted into illegal conduct on Apple’s part simply because they might also “sharpen [a publisher’s] incentives” to try to raise prices elsewhere.

Remember, we are in a world where the requisite anticompetitive conduct can’t be simply the vertical restraint itself. Rather, we’re evaluating whether the vertical restraint was part of a broader anticompetitive scheme among the publishers. For the MFN clauses to be part of that alleged scheme they must have an identifiable place in the scheme.

First of all, it is unremarkable that Apple might offer terms to any individual publisher (or to all publishers independently) that might be more favorable to the publisher than terms it is getting elsewhere; that’s how a new entrant in Apple’s position attracts suppliers. It is likewise unremarkable that Apple would seek to impose terms (like the MFN) that would preserve its ability to offer a publisher’s books for the same price they are offered elsewhere (which is necessary because the agency agreements negotiated by Apple otherwise remove pricing authority from Apple and confer it on the publishers themselves). And finally it is unremarkable that each publisher would try to negotiate similarly favorable terms with other distributors (or, more accurately, continue to try: bargaining over distribution terms with other distributors hardly started only after the agreements were signed with Apple). What would be notable is if the publishers engaged in concerted action to negotiate these more-favorable terms with other publishers, and what would be problematic for Apple is if its agreement with each publisher facilitated that collusion.

But I don’t see any persuasive evidence that the terms of Apple’s deals with each publisher did any such thing. For MFNs to perform the function alleged by the DOJ it seems to me that the MFNs would have to contribute to the alleged agreement between the publishers, just as the actions of the vertical co-conspirators in Interstate Circuit and Toys-R-Us were alleged to facilitate coordination. But neither the agency agreement itself nor the MFN and price cap terms in the contracts in any way affected the publishers’ incentive to compete with each other. Nor, as noted above, did they require any individual publisher to cause its books to be sold at higher prices through other distributors.

On this latter point, the DOJ alleges that the MFNs “sharpen[ed publishers’] incentives” to raise prices:

If a retailer were allowed to remain on wholesale terms, and that retailer continued to price new release e-books at $9.99, the Publisher Defendant would be forced to lower the iBookstore price to match the $9.99 price

Not only does this say nothing about the incentives of the publishers to compete with each other on price (except that it may have increased that incentive by undermining the prevailing $9.99-for-all-books standard), it seems far-fetched to suggest that fear of having to lower prices for books sold in Apple’s relatively trivial corner of the market would have an appreciable effect on a publisher’s incentives to raise prices elsewhere. For what it’s worth, it also seems far-fetched to suggest that Apple’s motivation was to raise prices given that e-book sales generate only about .0005% of Apple’s total revenues.

Beyond this, the DOJ essentially argues that Apple coordinated agreement among the publishers to accept the terms being offered by Apple, with the intent and effect that this would lead to imposition by the publishers of similar terms (and higher prices) on other distributors. Perhaps, but it’s a stretch. And if it is true, it isn’t because of the MFN clauses. Moreover, it isn’t clear to me (maybe I’m missing some obvious controlling case law?) that agreement over the type of contract used amounts to an illegal horizontal agreement; arguably in this case, at least, it is closer to an ancillary restraint or  justified agreement (as in BMI, e.g.) than, say, a group boycott or bid rigging. In any case, if the DOJ has a case at all turning on this scenario, I think it will have to be based entirely on the alleged evidence of direct coordination (i.e., communications between Apple and publishers during dinners and phone calls) rather than the operation of the contract terms themselves.

In any case, it will be interesting to see how the trial unfolds.

Did Apple conspire with e-book publishers to raise e-book prices?  That’s what DOJ argues in a lawsuit filed yesterday. But does that violate the antitrust laws?  Not necessarily—and even if it does, perhaps it shouldn’t.

Antitrust’s sole goal is maximizing consumer welfare.  While that generally means antitrust regulators should focus on lower prices, the situation is more complicated when we’re talking about markets for new products, where technologies for distribution and consumption are evolving rapidly along with business models.  In short, the so-called Agency pricing model Apple and publishers adopted may mean (and may not mean) higher e-book prices in the short run, but it also means more variability in pricing, and it might well have facilitated Apple’s entry into the market, increasing e-book retail competition and promoting innovation among e-book readers, while increasing funding for e-book content creators.

The procompetitive story goes something like the following.  (As always with antitrust, the question isn’t so much which model is better, but that no one really knows what the right model is—least of all antitrust regulators—and that, the more unclear the consumer welfare effects of a practice are, as in rapidly evolving markets, the more we should err on the side of restraint).

Apple versus Amazon

Apple–decidedly a hardware company–entered the e-book market as a device maker eager to attract consumers to its expensive iPad tablets by offering appealing media content.  In this it is the very opposite of Amazon, a general retailer that naturally moved into retailing digital content, and began selling hardware (Kindle readers) only as a way of getting consumers to embrace e-books.

The Kindle is essentially a one-trick pony (the latest Kindle notwithstanding), and its focus is on e-books.  By contrast, Apple’s platform (the iPad and, to a lesser degree, the iPhone) is a multi-use platform, offering Internet browsing, word processing, music, apps, and other products, of which books probably accounted–and still account–for a relatively small percentage of revenue.  Importantly, unlike Amazon, Apple has many options for promoting adoption of its platform—not least, the “sex appeal” of its famously glam products.  Without denigrating Amazon’s offerings, Amazon, by contrast, competes largely on the basis of its content, and its devices sell only as long as the content is attractive and attractively priced.

In essence, Apple’s iPad is a platform; Amazon’s Kindle is a book merchant wrapped up in a cool device.

What this means is that Apple, unlike Amazon, is far less interested in controlling content prices for books and other content; it hardly needs to control that lever to effectively market its platform, and it can easily rely on content providers’ self interest to ensure that enough content flows through its devices.

In other words, Apple is content to act as a typical platform would, acting as a conduit for others’ content, which the content owner controls.  Amazon surely has “platform” status in its sights, but reliant as it is on e-books, and nascent as that market is, it is not quite ready to act like a “pure” platform.  (For more on this, see my blog post from 2010).

The Agency Model

As it happens, publishers seem to prefer the Agency Model, as well, preferring to keep control over their content in this medium rather than selling it (as in the brick-and-mortar model) to a retailer like Amazon to price, market, promote and re-sell at will.  For the publishers, the Agency Model is essentially a form of resale price maintenance — ensuring that retailers who sell their products do not inefficiently discount prices.  (For a clear exposition of the procompetitive merits of RPM, see this article by Benjamin Klein).

(As a side note, I suspect that they may well be wrong to feel this way.  The inclination seems to stem from a fear of e-books’ threat to their traditional business model — a fear of technological evolution that can have catastrophic consequences (cf. Kodak, about which I wrote a few weeks ago).  But then content providers moving into digital media have been consistently woeful at understanding digital markets).

So the publishers strike a deal with Apple that gives the publishers control over pricing and Apple a cut (30%) of the profits.  Contrary to the DOJ’s claim in its complaint, this model happens to look exactly like Apple’s arrangement for apps and music, as well, right down to the same percentage Apple takes from sales.  This makes things easier for Apple, gives publishers more control over pricing, and offers Apple content and a good return sufficient to induce it to market and sell its platform.

It is worth noting here that there is no reason to think that the wholesale model wouldn’t also have generated enough content and enough return for Apple, so I don’t think the ultimate motivation here for Apple was higher prices (which could well have actually led to lower total return given fewer sales), but rather that it wasn’t interested in paying for control.  So in exchange for a (possibly) larger slice of the pie, as well as consistency with its existing content-provider back-end and the avoidance of having to monitor and make pricing decisions, Apple happily relinquished decision-making over pricing and other aspects of sales.
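The revenue mechanics of the two models can be sketched with purely hypothetical numbers (the 30% agency commission is the figure discussed above; the wholesale terms and all prices below are assumptions for illustration):

```python
# Illustrative sketch of wholesale vs. agency e-book economics.
# All prices are hypothetical; only the 30% agency commission is
# taken from the discussion above.

def wholesale_split(wholesale_price, retail_price):
    """Retailer buys at a wholesale price and sets its own retail price."""
    publisher_revenue = wholesale_price
    retailer_margin = retail_price - wholesale_price  # can be negative (loss-leader)
    return publisher_revenue, retailer_margin

def agency_split(retail_price, commission=0.30):
    """Publisher sets the retail price; the platform keeps a fixed commission."""
    retailer_margin = retail_price * commission
    publisher_revenue = retail_price - retailer_margin
    return publisher_revenue, retailer_margin

# Wholesale: publisher sells at $12.50; Amazon retails at $9.99 (a loss-leader).
print(wholesale_split(12.50, 9.99))   # publisher gets $12.50; retailer loses ~$2.51
# Agency: publisher prices at $12.99; Apple keeps 30%.
print(agency_split(12.99))            # publisher gets ~$9.09; Apple keeps ~$3.90
```

Note what the sketch makes plain: under wholesale the retailer controls the shelf price (and can sell at a loss to promote its platform), while under agency the publisher controls price and the platform’s cut is guaranteed.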

The Most Favored Nation Clauses

Having given up this price control, Apple has one remaining problem: no guarantee of being able to offer attractive content at an attractive price if it is forced to try to sell e-books at a high price while its competitors can undercut it.  And so, as is common in this sort of distribution agreement, Apple obtains “Most Favored Nation” (MFN) clauses from publishers to ensure that if they are permitting other platforms to sell their books at a lower price, Apple will at least be able to do so, as well.  The contracts at issue in the case specify maximum resale prices for content and assure Apple that if a publisher permits, say, Amazon to sell the same content at a lower price, it will likewise offer the content via Apple’s iBooks store for the same price.
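Mechanically, the clause just described operates as a price ceiling rather than a floor. A minimal sketch, with hypothetical prices and a hypothetical helper name:

```python
# Sketch of the MFN + price-cap mechanics described above.
# All prices and the function name are hypothetical illustrations:
# the clause matches the *lowest* price offered elsewhere, and the
# contract separately caps the maximum resale price.

def ibookstore_price(publisher_list_price, price_cap, prices_elsewhere):
    """Effective iBookstore price under the agency contract's terms."""
    effective = min(publisher_list_price, price_cap)     # contractual maximum
    if prices_elsewhere:
        effective = min(effective, min(prices_elsewhere))  # MFN: match the lowest rival price
    return effective

# Publisher lists at $14.99 under a $12.99 cap; Amazon still sells at $9.99.
print(ibookstore_price(14.99, 12.99, [9.99]))  # → 9.99 (the MFN pulls the price down, not up)
```

Nothing in this mechanism forces prices elsewhere upward; any upward pressure has to come from the publisher’s own incentive to avoid triggering the match.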

The DOJ is fighting a war against MFNs, which is a story for another day, and it seems clear from the terms of the settlement with the three settling publishers that MFNs are indeed a big part of the target here.  But there is nothing inherently problematic about MFNs, and there is plenty of scholarship explaining why they are beneficial.  Important among these benefits, MFNs facilitate entry by offering some protection for an entrant’s up-front investment in challenging an incumbent, and they prevent subsequent entrants from undercutting that price.  In this sense MFNs are an important way of inducing retailers like Apple to sign on to an RPM (no-control) model by offering some protection against publishers striking a deal with a competitor that leaves Apple forced to price its e-books out of the market.

There is nothing, that I know of, in the MFNs or elsewhere in the agreements that requires the publishers to impose higher resale prices elsewhere, or prevents the publishers from selling through Apple at a lower price, if necessary.  That said, it may well have been everyone’s hope that, as the DOJ alleges, the MFNs would operate like price floors instead of price ceilings, ensuring higher prices for publishers.  But hoping for higher prices is not an antitrust offense, and, as I’ve discussed, it’s not even clear that, viewed more broadly in terms of the evolution of the e-book and e-reader markets, higher prices in the short run would be bad for consumers.

The Legal Standard

To the extent that book publishers don’t necessarily know what’s really in their best interest, the DOJ is even more constrained in judging the benefits (or costs) for consumers at large from this scheme.  As I’ve suggested, there is a pretty clear procompetitive story here, and a court may indeed agree that this should not be judged under a per se liability standard (as would apply in the case of naked price-fixing).

Most important, here there is no allegation that the publishers and Apple (or the publishers among themselves) agreed on price.  Rather, the allegation is that they agreed to adopt a particular business model (one that, I would point out, probably resulted in greater variation in price, rather than less, compared to Amazon’s traditional $9.99-for-all pricing scheme).  If the DOJ can convince a court that this nevertheless amounts to a naked price-fixing agreement among publishers, with Apple operating as the hub, then the defendants are probably sunk.  But while antitrust law is suspicious of collective action among rivals in coordinating on prices, this change in business model does not alone coordinate on prices.  Each individual publisher can set its own price, and it’s not clear that the DOJ’s evidence points to any agreement with respect to actual pricing level.

It does seem pretty clear that there is coordination here on the shift in business models.  But sometimes antitrust law condones such collective action to take account of various efficiencies (think standard setting or joint ventures or collective rights groups like BMI).  Here, there is a more than plausible case that coordinated action to move to a plausibly-more-efficient business model was necessary and pro-competitive.  If Apple can convince a court of that, then the DOJ has a rule of reason case on its hands and is facing a very uphill battle.

[Cross-posted at Tech Liberation Front]

Milton Mueller responded to my post Wednesday on the DOJ’s decision to halt the AT&T/T-Mobile merger by asserting that there was no evidence the merger would lead to “anything innovative and progressive” and claiming “[t]he spectrum argument fell apart months ago, as factual inquiries revealed that AT&T had more spectrum than Verizon and the mistakenly posted lawyer’s letter revealed that it would be much less expensive to expand its capacity than to acquire T-Mobile.”  With respect to Milton, I think he’s been suckered by the “big is bad” crowd at Public Knowledge and Free Press.  But he’s hardly alone and these claims — claims that may well have under-girded the DOJ’s decision to step in to some extent — merit thorough refutation.

To begin with, LTE is “progress” and “innovation” over 3G and other quasi-4G technologies.  AT&T is attempting to make an enormous (and risky) investment in deploying LTE technology reliably and to almost everyone in the US–something T-Mobile certainly couldn’t do on its own and something AT&T would have been able to do only partially and over a longer time horizon and, presumably, at greater expense.  Such investments are exactly the things that spur innovation across the ecosystem in the first place.  No doubt AT&T’s success here would help drive the next big thing–just as quashing it will make the next big thing merely the next medium-sized thing.

The “Spectrum Argument”

The spectrum argument that Milton claims “fell apart months ago” is the real story here, the real driver of this merger, and the reason why the DOJ’s action yesterday is, indeed, a blow to progress.  That argument, unfortunately, still stands firm.  Even more, the irony is that to a significant extent the spectrum shortfall is a product of the government’s own making–through mismanagement of spectrum by the FCC, political dithering by Congress, and local government intransigence on tower siting and co-location–and the notion of the government now intervening here to “fix” one of the most significant private efforts to make progress despite these government impediments is really troubling.

Anyway, here’s what we know about spectrum:  There isn’t enough of it in large enough blocks and in bands suitable for broadband deployment using available technology to fully satisfy current–let alone future–demand.

Two incredibly detailed government sources for this conclusion are the FCC’s 15th Annual Wireless Competition Report and the National Broadband Plan.  Here’s FCC Chairman Julius Genachowski summarizing the current state of affairs (pdf):

The point deserves emphasis:  the clock is ticking on our mobile future. The FCC is an expert agency staffed with first-rate employees who have been working on spectrum allocation for decades – and let me tell you what the career engineers are telling me. Demand for spectrum is rapidly outstripping supply. The networks we have today won’t be able to handle consumer and business needs.

* * *

To avoid this crisis, the National Broadband Plan recommended reallocating 500 megahertz of spectrum for broadband, nearly double the amount that is currently available.

* * *

First, there are some who say that the spectrum crunch is greatly exaggerated – indeed, that there is no crunch coming. They also suggest that there are large blocks of spectrum just lying around – and that some licensees, such as cable and wireless companies, are just sitting on top of, or “hoarding,” unused spectrum that could readily solve that problem. That’s just not true.

* * *

The looming spectrum shortage is real – and it is the alleged hoarding that is illusory.

It is not hoarding if a company paid millions or billions of dollars for spectrum at auction and is complying with the FCC’s build-out rules. There is no evidence of non-compliance. . . . [T]he spectrum crunch will not be solved by the build-out of already allocated spectrum.

All of the evidence suggests that spectrum suitable for mobile broadband is scarce and growing scarcer.  Full stop.

It is troubling that critics–particularly those with little if any business experience–are so certain that even with no obvious source of additional spectrum suitable for LTE coming from the government any time soon, and even with exponential growth in broadband (including mobile) data use, AT&T’s current spectrum holdings are sufficient to satisfy its business plans (and its investors and stockholders).  You’d think AT&T would be delighted to hear this news–what we really need is a shareholder resolution to put Gigi Sohn on the board!

But seriously, put yourself in AT&T’s shoes for a moment.  Its long-term plans require the company to deploy significantly more spectrum than it currently holds in a reasonable time horizon (even granting Milton’s dubious premise that the company is squatting on scads of unused spectrum–remember that even if AT&T had all the spectrum sitting in its proverbial bank vault it would still be just about a third of the total amount of spectrum we’re predicted to need in just a few years).  Considering the various impediments of net neutrality regulation, congressional politics, presidential politics (think this had anything to do with claims about job losses from the merger, by chance?), reluctant broadcasters, the FCC, state PUCs, environmental groups and probably 10-12 others . . . the chances of being able to obtain the necessary spectrum and cell tower sitings in any other reasonable fashion were perhaps appropriately deemed . . . slim.

With the T-Mobile deal, on the other hand, “AT&T will gain cell sites equivalent to what would have taken on average five years to build without the transaction, and double that in some markets. AT&T’s network density will increase by approximately 30 percent in some of its most populated areas.” (Source).  I just don’t see how this jibes with the claim that the spectrum argument has fallen apart.

But there is a larger, “meta” point to make here, and it’s one that policy scolds and government regulators too often forget.  Even if none of that were true, as long as we don’t know for sure what is optimal, and do know that the DOJ is a political organization made up of human beings, operating not only under that same ignorance but also with incentives that don’t necessarily translate into “maximize social welfare,” and devoid of any actual “skin in the game,” I think the basic, simple, time-tested, logical and self-evident error-cost principle counsels pretty firmly against intervention.  Humility, not hubris, should rule the roost.

And that’s especially true since you know what will happen if the DOJ (or the FCC) succeeds in preventing AT&T from buying T-Mobile?  T-Mobile will still disappear and we’ll still be left with (according to the DOJ’s analysis) the terrifying prospect of only 3 national wireless telecom providers.  Only, in that case, everyone’s going to think a lot harder about investing in future developments that might warrant integration or cooperation or . . . well, the DOJ will challenge anything, so add to the list patent pools, too much success, not enough sharing, etc., etc.  And you wonder why I think this might constitute an assault on innovation?

Now, as for Milton’s specific claims, reminiscent of Public Knowledge’s and Free Press’ talking points, let me quote AT&T’s Public Interest Statement discussing its own particular spectrum holdings:

Because of the high demand for broadband service, AT&T already has had to deploy four carriers (for a total of 40 MHz of spectrum) for UMTS [3G] in some areas—and it will need to deploy more in the near future, even if doing so squeezes its GSM spectrum allocation and compromises GSM service quality . . . .  AT&T expects that, given the relative infancy of the LTE ecosystem and the time needed to migrate subscribers, it will need to continue to allocate spectrum to UMTS services for a substantial number of years—indeed, even longer than AT&T needs to continue allocating spectrum for GSM services.

* * *

AT&T has begun deployment of LTE services using its AWS and 700 MHz spectrum and currently plans to cover more than 250 million people by the end of 2013

* * *

AT&T projects it will need to use its 850 MHz and 1900 MHz spectrum holdings to support GSM and UMTS services for a number of years and, in the meantime, will not be able to re-deploy them for more spectrally efficient LTE services.

* * *

AT&T’s existing WCS spectrum holdings cannot be used for this purpose either, because the technical rules for the WCS band, such as power spectral density limits, make it infeasible to use that band for broadband service.

In other words, I don’t think AT&T has been (nor could it be, given the FCC’s detailed knowledge on the subject) hiding its spectrum holdings.  Instead, the company has been making quite clear that the spectrum it has is simply insufficient to meet anticipated demand.  And, well, duh!  Anyone who uses AT&T knows its network is overloaded.  Some of that’s because of tower-siting issues, some because it simply didn’t anticipate the extent of demand it would face.  I heard somewhere that no matter how hard they try to account for their perpetual under-accounting, every estimate by every mobile provider of anticipated spectrum needs in the past two decades or so has fallen short.  I’m quite sure that AT&T didn’t anticipate in 2007 that spectrum usage would increase by 8000% (yes, that’s thousand) by 2010.

Moreover, there will always (in any sensible system) be excess capacity at times–as it happens, at (conveniently) the times when spectrum usage is often counted–in order to deal with peak loads.  It is no more sensible to deploy capacity sufficient to handle the maximum load 100% of the time than it is to deploy capacity to handle only the minimum load 100% of the time.  Does that mean the often-unused spectrum is “excess”?  Clearly not.

Moreover (again), not all spectrum is in contiguous blocks sufficient to deploy LTE.  AT&T (at least) claims that is the case with much of its existing spectrum.  Spectrum isn’t simply fungible, and un-nuanced claims that “AT&T has X megahertz of spectrum and it is plenty” are just meaningless.  Again, just because Free Press says otherwise does not make it so.  You can simply discount AT&T’s claims if you like–I’m sure it’s possible they’re just lying; but you should probably be careful whose “information” you believe instead.

But, no, Milton, the spectrum argument did not “fall apart months ago.”  Gigi Sohn, Harold Feld and Sprint just said it did.  There’s a difference.


As for the infamous letter alleged to show that AT&T could expand LTE service from its previously-planned 80% of the country to the 97% it promises if the merger goes through, for significantly less than it would cost to buy T-Mobile:  I don’t know exactly what its import is—but no one outside AT&T and, maybe, the FCC really does, either.  But I think a little sensible skepticism is in order.

First, for those who haven’t read it, the letter says, in relevant part:

The purpose of the meeting was to discuss AT&T’s current LTE deployment plans to reach 80 percent of the U.S. population by the end of 2013…; the estimated [Begin Confidential Information] $3.8 billion [End Confidential Information] in additional capital expenditures to expand LTE coverage from 80 to 97 percent of the U.S. population; and AT&T’s commitment to expand LTE to over 97 percent of the U.S. population as a result of this transaction.

That part, “$3.8 billion,” between the words “Begin Confidential Information” and “End Confidential Information” was supposed to be redacted, but apparently wasn’t when the letter was first posted to the FCC’s website.

While Public Knowledge and other critics of the deal would have you believe that this proves AT&T could roll out nationwide LTE service for 1/10 of the cost of the T-Mobile deal, it’s basically impossible to tell what this number really means–except that it certainly doesn’t mean that.

Claims about its meaning are actually largely content-less; nothing I’ve seen asks (or can possibly answer) whether the number in the letter was full cost, partial cost, annualized cost, based on what baseline, etc., etc.  Moreover, unless I’m mistaken, nothing in the letter said anything at all about $3.8 billion being used to relieve congestion, meet future demand, increase speeds, reduce latency, expand coverage in urban areas, etc.  It seems to me that it’s referring to “additional” (additional to what?) capital expense to build infrastructure to make it even possible to offer LTE coverage to 97% of the U.S. population following the merger.  AT&T has from the outset said (bragged, more like it, because it’s supposed to bring lots of jobs and that’s what the politicians care about) that it planned to spend an “additional” $8 billion–additional to the $39 billion required to buy T-Mobile, that is–to build out its infrastructure as part of the deal.  But neither this letter nor any of AT&T’s statements (nor anyone with any familiarity with the relevant facts) has ever said it could or would have full-speed LTE service available and up and running to 97% of the country for $3.8 billion or even $8 billion–or even merely $39 billion.  In fact, AT&T seemed to be saying that it was going to cost at least $47 billion to make that happen (and I can assure you that doesn’t begin to account for all the costs associated with integrating T-Mobile with AT&T once the $39 billion is out the door).

As I’ve alluded to above, deploying LTE service to rural areas is probably not as important for AT&T as increasing its network’s capacity in urban areas. The T-Mobile deal allows AT&T to alleviate the congestion problems experienced by its existing customers in urban areas more quickly than any other option–and because T-Mobile’s network is already up and running, that’s still true even if the federal government were somehow able to make tons of spectrum immediately available.  Moreover, with respect to the $3.8 billion, as I’ve discussed at length above, without T-Mobile’s–or someone’s!–additional spectrum and the miraculous removal of local government impediments to tower construction, pretty much no amount of money would enable AT&T to actually deliver LTE service to 97% of the country.  Is that what it would cost to build the extra pieces of hardware necessary to support such an offering?  That sounds plausible.  But actually deliver it? Hardly.

And just to play this out, let’s say the letter did mean just that — that AT&T could deliver real, full LTE service to 97% of the country for a mere $3.8 billion direct, marginal outlay, even without T-Mobile.  It is still the case that none of us outsiders knows what such a claim would assume about where the necessary spectrum would come from and what, absent the merger, the effect would be on existing 3G coverage, congestion, pricing, etc., and what the expected ROI for such a project would be.  Elsewhere in the letter its author states that AT&T considered whether making this investment (without the T-Mobile merger) was prudent, and repeatedly rejected it.  In other words, all those armchair CEOs are organizing AT&T’s business and spending its money without the foggiest clue as to what the real consequences of doing so would be–and then claiming that AT&T, which (unlike them) is actually in possession of the data relevant to such an assessment, must be lying, and could only justify spending $39 billion to buy T-Mobile as a means of securing its monopoly power.

And I think it’s important to gut check that claim, as well, as it’s what critics claim to fear (The Ma Bell from the Black Lagoon).  Unpacked, it goes something like this:

Given that:

  1.  AT&T is going to spend $39 billion to buy T-Mobile;
  2. It is going to spend $8 billion to build additional infrastructure;
  3. Having bought T-Mobile, it is going to incur some ungodly amount of expense integrating T-Mobile’s assets and employees with its own;
  4. It is going to incur huge, ongoing additional costs to govern a now-larger, more-complex organization;
  5. It is going to continue to be regulated by the FCC and watched carefully by the DOJ and its unofficial consumer watchdog minions;
  6. It will continue to face competition from its current largest and second-largest competitor;
  7. It will continue to face entry threats from the likes of Dish and Lightsquared;
  8. It will continue to face competition from fixed broadband offered by the likes of Comcast and Time Warner;
  9. It will do all this quite publicly, under the watchful eyes of Congress and its union to whom it has made all manner of politically-expedient promises;

 Then it follows that:

  1. Although it can’t muster the gumption to risk $3.8 billion to legitimately (it is claimed) extend full LTE coverage to 97% of the U.S. population, it nevertheless thinks it’s a sure bet that it will be able to recoup all of these expenditures, in this competitive and regulatory environment, by virtue of having thus taken out not its largest, not even its second-largest, but its smallest “national” competitor, and thereby having converted itself into an unfettered monopolist. QED.

The mind boggles.

So.  Back to Milton and his suggestion that I was wrong to claim that the DOJ’s action here is a threat to innovation and progress and his assertion that AT&T’s claims surrounding the benefits of the transaction fail to stand up to scrutiny:  C’mon, Miltons of the world!  Where’s your normally healthy skepticism?  I know you don’t like big infrastructure providers.  I know you’re angry your iPhone isn’t as functional as it is beautiful.  I know capitalists are only slightly more trustworthy than regulators (or is it the other way around?).  But why give in so credulously to the claims of the professional critics?  Isn’t it more likely that the deal’s critics are just blowing smoke here because they don’t like any consolidation?  It doesn’t take much research to understand (to the extent anyone can understand something so complex) the current state of the U.S. broadband market and its discontents–and why something like this merger is a plausible response.  And you don’t have to like, trust, or even stand the sight of any business executive to know that, however stupid or evil, he is still constrained by powerful market forces beyond his ken.  And “Letter-Gate” is just another pseudo-scandal contrived to suit an agenda of aggressive government meddling.

We all ought to be more wary of such claims, less quick to join anyone in condemning big as bad, and far less quick to, implicitly or explicitly, substitute the known depredations of the government for the possible ones of the market without a hell of a lot better evidence to do so.

As Josh noted, the DOJ filed a complaint today to block the merger.  I’m sure we’ll have much, much more to say on the topic, but here are a few things that jump out at me from perusing the complaint:

  • The DOJ distinguishes between the business (“Enterprise”) market and the consumer market.  This is actually a good play on their part, on the one hand, because it is more sensible to claim a national market for business customers who may be purchasing plans for widely-geographically-dispersed employees.  I would question how common this actually is, however, given that most businesses that buy group cell plans are not IBM but are instead pretty small and pretty local.  But still, it’s a good ploy.
  • But it has one significant problem:  The DOJ also seems to be stressing a coordinated effects story, making T-Mobile out to be a disruptive maverick disciplining the bigger carriers.  But–and this is, of course, an empirical matter I will have to look into–I highly doubt that T-Mobile plays anything like this role in the Enterprise market, at least for those enterprises that fit the DOJ’s overly-broad description.  In fact, the DOJ admits as much in para. 43 of its Complaint.  Of course, the DOJ claims this was all about to change, but that’s not a very convincing story coupled with the fact that DT, T-Mobile’s parent, was reducing its investment in the company anyway.  The reality is that Enterprise was not a key part of T-Mobile’s business model–if it occupied any cognizable part of it at all–and it can hardly be considered a maverick in a market in which it doesn’t actually operate.
  • On coordinated effects, I think the claim that T-Mobile is a maverick is pretty easily refuted, and not only in the Enterprise realm.  As Josh has pointed out in his Congressional testimony, a maverick is a term of art in antitrust, and it’s just not enough that a firm may be offering products at a lower price–there is nothing “maverick-y” about a firm that offers a different, less valuable product at a lower price.  I have seen no evidence to suggest that T-Mobile offered the kind of pricing constraint on AT&T that would be required to make it out to be a maverick.
  • Meanwhile, I know this is just a complaint and even post-Twombly pleading standards are lower than standards of proof, but the DOJ does seem to make a lot out of its HHI numbers.  In part this is a function of its adoption of a national relevant geographic market.  But (as noted above, even for most Enterprise customers) this is just absurd.  As the FCC itself has noted, consumers buy cell service where they “live, work and travel.”  For most everyone, this is local.
  • Meanwhile, even on a national level, the blithe dismissal of a whole range of competitors is untenable.  MetroPCS, Cell South and many other companies have broad regional coverage (MetroPCS even has next-gen LTE service in something like 17 cities) and roaming agreements with each other and with the larger carriers that give them national coverage.  Why they should be excluded from consideration is baffling.  Moreover, Dish has just announced plans to build a national 4G network (take that, DOJ claim that entry is just impossible here!).  And perhaps most important, the real competition here is not for mobile telephone service.  The merger is about broadband.  Mobile is one way of getting broadband.  So is cable and DSL and WiMax, etc.  That market includes such far-from-insignificant competitors as Time Warner, Comcast and Cox.  Calling this a 4 to 3 merger strains credulity, particularly under the new merger guidelines.
  • Moreover, the DOJ already said as much!  In its letter to the FCC on the FCC's National Broadband Plan, the DOJ says:

Ultimately what matters for any given consumer is the set of broadband offerings available to that consumer, including their technical characteristics and the commercial terms and conditions on which they are offered.  Competitive conditions vary considerably for consumers in different geographic locales.

  • The DOJ also said this, in the same letter:

[W]ith differentiated products subject to large economies of scale (relative to the size of the market), the Department does not expect to see a large number of suppliers. . . . [Rather, the DOJ cautions the FCC against] striving for broadband markets that look like textbook markets of perfect competition, with many price-taking firms.  That market structure is unsuitable for the provision of broadband services.

Quite a different tune, now that it's the DOJ's turn to spring into action rather than simply admonish the antitrust activities of a sister agency!

I'm sure there is lots more, but I must say I'm really surprised and disappointed by this filing.  Effective, efficient provision of mobile broadband service is a complicated business.  It is severely hampered by constraints of the government's own doing, both the federal government's failure to make available spectrum that would enable companies to build out large-scale broadband networks, and local governments' continued intransigence in permitting new cell towers, or even co-location of cell sites on existing towers, that would relieve some of the infuriating congestion we now experience.

This decision by the DOJ is an ill-conceived assault on innovation and progress in what may be the one shining segment of our bedraggled economy.

The press release is here. Notably, the settlement obligates Google to continue product development and to license ITA software on commercially reasonable terms, seemingly for five years.  Frankly, I can't imagine Google wouldn't have done this anyway, so the settlement is not likely much of a binding constraint.

Also notable is what the settlement doesn't seem to do: impose any remedies intended to "correct" (or even acknowledge) so-called search neutrality issues.  This has to be considered a huge victory for Google, and for common sense.  I'm sure Josh and I will have more to say once the pleadings and settlement are available.  Later today or tomorrow we will post a paper we have just completed on the issue of search neutrality.

Unfortunately, this settlement doesn’t put the matter to rest, and we still have to see what the FTC has in store now.  But for now, this is, as I said, a huge victory for Google . . . and for all of us who travel!