Archives For tying and bundling

A lawsuit filed by the State of Texas and nine other states in December 2020 alleges, among other things, that Google has engaged in anticompetitive conduct related to its online display-advertising business.

Broadly, the Texas complaint (previously discussed in this TOTM symposium) alleges that Google possesses market power in ad-buying tools and in search.

The complaint also alleges anticompetitive conduct by Google with respect to YouTube in a separate “inline video-advertising market.” According to the complaint, this market power is leveraged to force transactions through Google’s exchange, AdX, and its network, Google Display Network. The leverage is further exercised by forcing publishers to license Google’s ad server, Google Ad Manager.

Although the Texas complaint raises many specific allegations, the key ones constitute four broad claims: 

  1. Google forces publishers to license Google’s ad server and trade in Google’s ad exchange;
  2. Google uses its control over publishers’ inventory to block exchange competition;
  3. Google has disadvantaged technology known as “header bidding” in order to prevent publishers from accessing its competitors; and
  4. Google prevents rival ad-placement services from competing by not allowing them to buy YouTube ad space.

Alleged harms

The Texas complaint alleges Google’s conduct has caused harm to competing networks, exchanges, and ad servers. The complaint also claims that the plaintiff states’ economies have been harmed “by depriving the Plaintiff States and the persons within each Plaintiff State of the benefits of competition.”

In a nod to the widely accepted Consumer Welfare Standard, the Texas complaint alleges harm to three categories of consumers:

  1. Advertisers who pay for their ads to be displayed, but should be paying less;
  2. Publishers who are paid to provide space on their sites to display ads, but should be paid more; and
  3. Users who visit the sites, view the ads, and purchase or use the advertisers’ and publishers’ products and services.

The complaint claims users are harmed by above-competitive prices paid by advertisers, in that these higher costs are passed on in the form of higher prices and lower quality for the products and services they purchase from those advertisers. The complaint simultaneously claims that users are harmed by the below-market prices received by publishers in the form of “less content (lower output of content), lower-quality content, less innovation in content delivery, more paywalls, and higher subscription fees.”

Without saying so explicitly, the complaint insinuates that if intermediaries (e.g., Google and competing services) charged lower fees for their services, advertisers would pay less, publishers would be paid more, and consumers would be better off in the form of lower prices and better products from advertisers, as well as improved content and lower fees on publishers’ sites.

Effective competition is not an antitrust offense

A flawed premise underlies much of the Texas complaint. It asserts that conduct by a dominant incumbent firm that makes competition more difficult for competitors is inherently anticompetitive, even if that conduct confers benefits on users.

This amounts to a claim that Google is acting anti-competitively by innovating and developing products and services to benefit one or more display-advertising constituents (e.g., advertisers, publishers, or consumers) or by doing things that benefit the advertising ecosystem more generally. These include creating new and innovative products, lowering prices, reducing costs through vertical integration, or enhancing interoperability.

The argument, which is made explicitly elsewhere, is that Google must show that it has engineered and implemented its products to minimize obstacles its rivals face, and that any efficiencies created by its products must be shown to outweigh the costs imposed by those improvements on the company’s competitors.

Similarly, claims that Google has acted in an anticompetitive fashion rest on the unsupportable notion that the company acts unfairly when it designs products to benefit itself without considering how those designs would affect competitors. Google could, it is argued, choose alternate arrangements and practices that would possibly confer greater revenue on publishers or lower prices on advertisers without imposing burdens on competitors.

For example, a report published by the Omidyar Network sketching a “roadmap” for a case against Google claims that, if Google’s practices could possibly be reimagined to achieve the same benefits in ways that foster competition from rivals, then the practices should be condemned as anticompetitive:

It is clear even to us as lay people that there are less anticompetitive ways of delivering effective digital advertising—and thereby preserving the substantial benefits from this technology—than those employed by Google.

– Fiona M. Scott Morton & David C. Dinielli, “Roadmap for a Digital Advertising Monopolization Case Against Google”

But that’s not how the law—or the economics—works. This approach converts beneficial aspects of Google’s ad-tech business into anticompetitive defects, essentially arguing that successful competition and innovation create barriers to entry that merit correction through antitrust enforcement.

This approach turns U.S. antitrust law (and basic economics) on its head. As some of the most well-known words of U.S. antitrust jurisprudence have it:

A single producer may be the survivor out of a group of active competitors, merely by virtue of his superior skill, foresight and industry. In such cases a strong argument can be made that, although, the result may expose the public to the evils of monopoly, the Act does not mean to condemn the resultant of those very forces which it is its prime object to foster: finis opus coronat. The successful competitor, having been urged to compete, must not be turned upon when he wins.

– United States v. Aluminum Co. of America, 148 F.2d 416 (2d Cir. 1945)

U.S. antitrust law is intended to foster innovation that creates benefits for consumers, including innovation by incumbents. The law does not proscribe efficiency-enhancing unilateral conduct on the grounds that it might also inconvenience competitors, or that there is some other arrangement that could be “even more” competitive. Under U.S. antitrust law, firms are “under no duty to help [competitors] survive or expand.”  

To be sure, the allegations against Google are couched in terms of anticompetitive effect, rather than being described merely as commercial disagreements over the distribution of profits. But these effects are simply inferred, based on assumptions that Google’s vertically integrated business model entails an inherent ability and incentive to harm rivals.

The Texas complaint claims Google can surreptitiously derive benefits from display advertisers by leveraging its search-advertising capabilities, or by “withholding YouTube inventory,” rather than altruistically opening Google Search and YouTube up to rival ad networks. The complaint alleges Google uses its access to advertiser, publisher, and user data to improve its products without sharing this data with competitors.

All these charges may be true, but they do not describe inherently anticompetitive conduct. Under U.S. law, companies are not obliged to deal with rivals, and they certainly are not obliged to do so on those rivals’ preferred terms.

As long ago as 1919, the U.S. Supreme Court held that:

In the absence of any purpose to create or maintain a monopoly, the [Sherman Act] does not restrict the long recognized right of [a] trader or manufacturer engaged in an entirely private business, freely to exercise his own independent discretion as to parties with whom he will deal.

– United States v. Colgate & Co.

U.S. antitrust law does not condemn conduct on the basis that an enforcer (or a court) is able to identify or hypothesize alternative conduct that might plausibly provide similar benefits at lower cost. In alleging that there are ostensibly “better” ways that Google could have pursued its product design, pricing, and terms of dealing, both the Texas complaint and the Omidyar “roadmap” assert that, had the firm only selected a different path, even more benefits or an even more competitive structure would have resulted.

The purported cure of tinkering with benefit-producing unilateral conduct by applying an “even more competition” benchmark is worse than the supposed disease. The adjudicator is likely to misapply such a benchmark, deterring the very conduct the law seeks to promote.

For example, the Texas complaint alleges: “Google’s ad server passed inside information to Google’s exchange and permitted Google’s exchange to purchase valuable impressions at artificially depressed prices.” The Omidyar Network’s “roadmap” claims that “after purchasing DoubleClick, which became its publisher ad server, Google apparently lowered its prices to publishers by a factor of ten, at least according to one publisher’s account related to the CMA. Low prices for this service can force rivals to depart, thereby directly reducing competition.”

In contrast, as current U.S. Supreme Court Associate Justice Stephen Breyer once explained, in the context of above-cost low pricing, “the consequence of a mistake here is not simply to force a firm to forego legitimate business activity it wishes to pursue; rather, it is to penalize a procompetitive price cut, perhaps the most desirable activity (from an antitrust perspective) that can take place in a concentrated industry where prices typically exceed costs.”  That commentators or enforcers may be able to imagine alternative or theoretically more desirable conduct is beside the point.

It has been reported that the U.S. Justice Department (DOJ) may join the Texas suit or bring its own similar action against Google in the coming months. If it does, it should learn from the many misconceptions and errors in the Texas complaint that leave it on dubious legal and economic grounds.

The American Choice and Innovation Online Act (previously called the Platform Anti-Monopoly Act), introduced earlier this summer by U.S. Rep. David Cicilline (D-R.I.), would significantly change the nature of digital platforms and, with them, the internet itself. Taken together, the bill’s provisions would turn platforms into passive intermediaries, undermining many of the features that make them valuable to consumers. This seems likely to remain the case even after potential revisions intended to minimize the bill’s unintended consequences.

In its current form, the bill is split into two parts, each of which is dangerous in its own right. The first, Section 2(a), would prohibit almost any kind of “discrimination” by platforms. Because it is so open-ended, lawmakers might end up removing it in favor of the nominally more focused provisions of Section 2(b), which prohibit certain named conduct. But despite being more specific, this section of the bill is incredibly far-reaching and would effectively ban swaths of essential services.

I will address the potential effects of these sections point-by-point, but both elements of the bill suffer from the same problem: a misguided assumption that “discrimination” by platforms is necessarily bad from a competition and consumer welfare point of view. On the contrary, this conduct is often exactly what consumers want from platforms, since it helps to bring order and legibility to otherwise-unwieldy parts of the Internet. Prohibiting it, as both main parts of the bill do, would make the Internet harder to use and less competitive.

Section 2(a)

Section 2(a) essentially prohibits any behavior by a covered platform that would advantage that platform’s services over any others that also use that platform; it characterizes this preferencing as “discrimination.”

As we wrote when the House Judiciary Committee’s antitrust bills were first announced, this prohibition on “discrimination” is so broad that, if it made it into law, it would prevent platforms from excluding or disadvantaging any product of another business that uses the platform or advantaging their own products over those of their competitors.

The underlying assumption here is that platforms should be like telephone networks: providing a way for different sides of a market to communicate with each other, but doing little more than that. When platforms do do more—for example, manipulating search results to favor certain businesses or to give their own products prominence—it is seen as exploitative “leveraging.”

But consumers often want platforms to be more than just a telephone network or directory, because digital markets would be very difficult to navigate without some degree of “discrimination” between sellers. The Internet is so vast and sellers are often so anonymous that any assistance which helps you choose among options can serve to make it more navigable. As John Gruber put it:

From what I’ve seen over the last few decades, the quality of the user experience of every computing platform is directly correlated to the amount of control exerted by its platform owner. The current state of the ownerless world wide web speaks for itself.

Sometimes, this manifests itself as “self-preferencing” of the platform’s own adjacent services, reducing the time you spend searching for the information you want. When you search for a restaurant on Google, it can be very useful to get information like user reviews, the restaurant’s phone number, a button on mobile to phone them directly, estimates of how busy it is, and a link to a Maps page to see how to actually get there.

This is, undoubtedly, frustrating for competitors like Yelp, who would like this information not to be there and for users to have to click on either a link to Yelp or a link to Google Maps. But whether it is good or bad for Yelp isn’t relevant to whether it is good for users—and it is at least arguable that it is, which makes a blanket prohibition on this kind of behavior almost inevitably harmful.

If it isn’t obvious why removing this kind of feature would be harmful for users, ask yourself why some users search in Yelp’s app directly for this kind of result. The answer, I think, is that Yelp gives you all the information above that Google does (and sometimes is better, although I tend to trust Google Maps’ reviews over Yelp’s), and it’s really convenient to have all that on the same page. If Google could not provide this kind of “rich” result, many users would probably stop using Google Search to look for restaurant information in the first place, because a new friction would have been added that made the experience meaningfully worse. Removing that option would be good for Yelp, but mainly because it removes a competitor.

If all this feels like stating the obvious, then it should highlight a significant problem with Section 2(a) in the Cicilline bill: it prohibits conduct that is directly value-adding for consumers, and that creates competition for dedicated services like Yelp that object to having to compete with this kind of conduct.

This is true across all the platforms the legislation proposes to regulate. Amazon prioritizes some third-party products over others on the basis of user reviews, rates of returns and complaints, and so on; Amazon provides private label products to fill gaps in certain product lines where existing offerings are expensive or unreliable; Apple pre-installs a Camera app on the iPhone that, obviously, enjoys an advantage over rival apps like Halide.

Some or all of this behavior would be prohibited under Section 2(a) of the Cicilline bill. Combined with the bill’s presumption that conduct must be defended affirmatively—that is, the platform is presumed guilty unless it can prove that the challenged conduct is procompetitive, which may be very difficult to do—the bill could prospectively eliminate a huge range of socially valuable behavior.

Supporters of the bill have already been left arguing that the law simply wouldn’t be enforced in these cases of benign discrimination. But this would hardly be an improvement. It would mean the Federal Trade Commission (FTC) and U.S. Justice Department (DOJ) have tremendous control over how these platforms are built, since they could challenge conduct in virtually any case. The regulatory uncertainty alone would complicate the calculus for these firms as they refine, develop, and deploy new products and capabilities. 

So one potential compromise might be to do away with this broad-based rule and proscribe specific kinds of “discriminatory” conduct instead. This approach would involve removing Section 2(a) from the bill but retaining Section 2(b), which enumerates 10 practices it deems to be “other discriminatory conduct.” This may seem appealing, as it would potentially avoid the worst abuses of the broad-based prohibition. In practice, however, it would carry many of the same problems. In fact, many of 2(b)’s provisions appear to go even further than 2(a), and would proscribe even more procompetitive conduct that consumers want.

Sections 2(b)(1) and 2(b)(9)

The wording of these provisions is extremely broad and, as drafted, would seem to challenge even the existence of vertically integrated products. As such, these prohibitions are potentially even more extensive and invasive than Section 2(a) would have been. Even a narrower reading here would seem to preclude safety and privacy features that are valuable to many users. iOS’s sandboxing of apps, for example, serves to limit the damage that a malware app can do on a user’s device precisely because of the limitations it imposes on what other features and hardware the app can access.

Section 2(b)(2)

This provision would preclude a firm from conditioning preferred status on use of another service from that firm. This would likely undermine the purpose of platforms, which is to absorb and counter some of the risks involved in doing business online. An example of this is Amazon conditioning sellers’ eligibility for its Prime program on their use of Amazon’s own delivery service (FBA, or Fulfillment by Amazon). The bill seems to presume in an example like this that Amazon is leveraging its power in the market—in the form of the value of the Prime label—to profit from delivery. But Amazon could, and already does, charge directly for listing positions; it’s unclear why it would benefit from charging via FBA when it could just charge for the Prime label.

An alternate, simpler explanation is that FBA improves the quality of the service, by granting customers greater assurance that a Prime product will arrive when Amazon says it will. Platforms add value by setting out rules and providing services that reduce the uncertainties between buyers and sellers they’d otherwise experience if they transacted directly with each other. This section’s prohibition—which, as written, would seem to prevent any kind of quality assurance—likely would bar labeling by a platform, even where customers explicitly want it.

Section 2(b)(3)

As written, this would prohibit platforms from using aggregated data to improve their services at all. If Apple found that 99% of its users uninstalled an app immediately after it was installed, it would be reasonable to conclude that the app may be harmful or broken in some way, and that Apple should investigate. This provision would ban that.

Sections 2(b)(4) and 2(b)(6)

These two provisions effectively prohibit a platform from using information it does not also provide to sellers. Such prohibitions ignore the fact that it is often good for sellers to lack certain information, since withholding information can prevent abuse by malicious users. For example, a seller may sometimes try to bribe their customers to post positive reviews of their products, or even threaten customers who have posted negative ones. Part of the role of a platform is to combat that kind of behavior by acting as a middleman and forcing both consumer users and business users to comply with its own mechanisms for policing such abuse.

If this seems overly generous to platforms—since, obviously, it gives them a lot of leverage over business users—ask yourself why people use platforms at all. It is not a coincidence that people often prefer Amazon to dealing with third-party merchants and having to navigate those merchants’ sites themselves. The assurance that Amazon provides is extremely valuable for users. Much of it comes from the company’s ability to act as a middleman in this way, lowering the transaction costs between buyers and sellers.

Section 2(b)(5)

This provision restricts the treatment of defaults. It is, however, relatively restrained when compared to, for example, the DOJ’s lawsuit against Google, which treats as anticompetitive even payment for defaults that can be changed. Still, many of the arguments that apply in that case also apply here: default status for apps can be a way to recoup income foregone elsewhere (e.g., a browser provided for free that makes its money by selling the right to be the default search engine).

Section 2(b)(7)

This section gets to the heart of why “discrimination” can often be procompetitive: it facilitates competition between platforms. The kind of self-preferencing this provision would prohibit can allow firms with a presence in one market to extend that position into another, increasing competition in the process. Both Apple and Amazon have used their customer bases in smartphones and e-commerce, respectively, to build audiences for video streaming, in competition with Netflix, Google’s YouTube, cable television, and each other. If Apple designed a search engine to compete with Google, it would do exactly the same thing, and we would be better off because of it. Restricting this kind of behavior is, perversely, exactly what you would do if you wanted to shield these incumbents from competition.

Section 2(b)(8)

As with other provisions, this one would preclude one of the mechanisms by which platforms add value: creating assurance for customers about the products they can expect if they visit the platform. Some of this relates to child protection; some of the most frustrating stories involve children being overcharged when they use an iPhone or Android app, and effectively being ripped off because of poor policing of the app (or insufficiently strict pricing rules by Apple or Google). This may also relate to rules that state that the seller cannot offer a cheaper product elsewhere (Amazon’s “General Pricing Rule” does this, for example). Prohibiting this would simply impose a tax on customers who cannot shop around and would prefer to use a platform that they trust has the lowest prices for the item they want.

Section 2(b)(10)

Ostensibly a “whistleblower” provision, this section could leave platforms with no recourse, not even removing a user from its platform, in response to spurious complaints intended purely to extract value for the complaining business rather than to promote competition. On its own, this sort of provision may be fairly harmless, but combined with the provisions above, it allows the bill to add up to a rent-seekers’ charter.

Conclusion

In each case above, it’s vital to remember that a reversed burden of proof applies. So, there is a high chance that the law will side against the defendant business, and a large downside for conduct that ends up being found to violate these provisions. That means that platforms will likely err on the side of caution in many cases, avoiding conduct that is ambiguous, and society will probably lose a lot of beneficial behavior in the process.

Put together, the provisions undermine much of what has become an Internet platform’s role: to act as an intermediary, de-risk transactions between customers and merchants who don’t know each other, and tweak the rules of the market to maximize its attractiveness as a place to do business. The “discrimination” that the bill would outlaw is, in practice, behavior that makes it easier for consumers to navigate marketplaces of extreme complexity and uncertainty, in which they often know little or nothing about the firms with whom they are trying to transact business.

Customers do not want platforms to be neutral, open utilities. They can already choose platforms like that, such as eBay. They tend to prefer ones like Amazon, which are not neutral and which carefully cultivate their services to be as streamlined, managed, and “discriminatory” as possible. Indeed, many of users’ biggest complaints about digital platforms relate to their openness: the fake reviews, counterfeit products, malware, and spam that come with letting more unknown businesses use your service. While these may be unavoidable by-products of running a platform, platforms compete on their ability to ferret them out. Customers are unlikely to thank legislators for regulating Amazon into being another eBay.

[This post adapts elements of “Should ASEAN Antitrust Laws Emulate European Competition Policy?”, published in the Singapore Economic Review (2021). Open access working paper here.]

U.S. and European competition laws diverge in numerous ways that have important real-world effects. Understanding these differences is vital, particularly as lawmakers in the United States, and the rest of the world, consider adopting a more “European” approach to competition.

In broad terms, the European approach is more centralized and political. The European Commission’s Directorate General for Competition (DG Comp) has significant de facto discretion over how the law is enforced. This contrasts with the common law approach of the United States, in which courts elaborate upon open-ended statutes through an iterative process of case law. In other words, the European system was built from the top down, while U.S. antitrust relies on a bottom-up approach, derived from arguments made by litigants (including the government antitrust agencies) and defendants (usually businesses).

This procedural divergence has significant ramifications for substantive law. European competition law includes more provisions akin to de facto regulation. This is notably the case for the “abuse of dominance” standard, in which a “dominant” business can be prosecuted for “abusing” its position by charging high prices or refusing to deal with competitors. By contrast, the U.S. system places more emphasis on actual consumer outcomes, rather than the nature or “fairness” of an underlying practice.

The American system thus affords firms more leeway to exclude their rivals, so long as this entails superior benefits for consumers. This may make the U.S. system more hospitable to innovation, since there is no built-in regulation of conduct for innovators who acquire a successful market position fairly and through normal competition.

In this post, we discuss some key differences between the two systems—including in areas like predatory pricing and refusals to deal—as well as the discretionary power the European Commission enjoys under the European model.

Exploitative Abuses

U.S. antitrust is, by and large, unconcerned with companies charging what some might consider “excessive” prices. The late Associate Justice Antonin Scalia, writing for the Supreme Court majority in the 2004 case Verizon v. Trinko, observed that:

The mere possession of monopoly power, and the concomitant charging of monopoly prices, is not only not unlawful; it is an important element of the free-market system. The opportunity to charge monopoly prices—at least for a short period—is what attracts “business acumen” in the first place; it induces risk taking that produces innovation and economic growth.

This contrasts with European competition-law cases, where firms may be found to have infringed competition law because they charged excessive prices. As the European Court of Justice (ECJ) held in 1978’s United Brands case: “In this case charging a price which is excessive because it has no reasonable relation to the economic value of the product supplied would be such an abuse.”

While United Brands was the EU’s foundational case for excessive pricing, and the European Commission reiterated that these allegedly exploitative abuses were possible when it published its guidance paper on abuse of dominance cases in 2009, the commission had for some time shown little apparent interest in bringing such cases. In recent years, however, both the European Commission and some national authorities have shown renewed interest in excessive-pricing cases, most notably in the pharmaceutical sector.

European competition law also penalizes so-called “margin squeeze” abuses, in which a dominant upstream supplier charges a price to distributors that is too high for them to compete effectively with that same dominant firm downstream:

[I]t is for the referring court to examine, in essence, whether the pricing practice introduced by TeliaSonera is unfair in so far as it squeezes the margins of its competitors on the retail market for broadband connection services to end users. (Konkurrensverket v TeliaSonera Sverige, 2011)
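Stated in stylized terms—this is an illustration of the economic logic, not the precise legal test—a squeeze arises when the dominant firm’s retail price p, net of the wholesale price w it charges rivals for the upstream input, fails to cover the downstream unit cost c that the dominant firm itself incurs:

p − w < c

If that inequality holds, even a competitor exactly as efficient as the dominant firm cannot match its retail price without selling at a loss downstream.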

As Scalia observed in Trinko, forcing firms to charge prices that are below a market’s natural equilibrium affects firms’ incentives to enter markets, notably with innovative products and more efficient means of production. But the problem is not just one of market entry and innovation.  Also relevant is the degree to which competition authorities are competent to determine the “right” prices or margins.

As Friedrich Hayek demonstrated in his influential 1945 essay The Use of Knowledge in Society, economic agents use information gleaned from prices to guide their business decisions. It is this distributed activity of thousands or millions of economic actors that enables markets to put resources to their most valuable uses, thereby leading to more efficient societies. By comparison, the efforts of central regulators to set prices and margins are necessarily inferior; there is simply no reasonable way for competition regulators to make such judgments in a consistent and reliable manner.

Given the substantial risk that investigations into purportedly excessive prices will deter market entry, such investigations should be circumscribed. But the court’s precedents, with their myopic focus on ex post prices, do not impose such constraints on the commission. The temptation to “correct” high prices—especially in the politically contentious pharmaceutical industry—may thus induce economically unjustified and ultimately deleterious intervention.

Predatory Pricing

A second important area of divergence concerns predatory-pricing cases. U.S. antitrust law subjects allegations of predatory pricing to two strict conditions:

  1. Monopolists must charge prices that are below some measure of their incremental costs; and
  2. There must be a realistic prospect that they will be able to recoup these initial losses.

In laying out its approach to predatory pricing, the U.S. Supreme Court has identified the risk of false positives and the clear cost of such errors to consumers. It thus has particularly stressed the importance of the recoupment requirement. As the court found in 1993’s Brooke Group Ltd. v. Brown & Williamson Tobacco Corp., without recoupment, “predatory pricing produces lower aggregate prices in the market, and consumer welfare is enhanced.”

Accordingly, U.S. authorities must prove that there are constraints that prevent rival firms from entering the market after the predation scheme, or that the scheme itself would effectively foreclose rivals from entering the market in the first place. Otherwise, the predator would be undercut by competitors as soon as it attempts to recoup its losses by charging supra-competitive prices.

Without the strong likelihood that a monopolist will be able to recoup lost revenue from underpricing, the overwhelming weight of economic evidence (to say nothing of simple logic) is that predatory pricing is not a rational business strategy. Thus, apparent cases of predatory pricing are most likely not, in fact, predatory; deterring or punishing them would actually harm consumers.

By contrast, the EU employs a more expansive legal standard to define predatory pricing, and almost certainly risks injuring consumers as a result. Authorities must prove only that a company has charged a price below its average variable cost, in which case its behavior is presumed to be predatory. Even when a firm charges prices that are between its average variable and average total cost, it can be found guilty of predatory pricing if authorities show that its behavior was part of a plan to eliminate a competitor. Most significantly, in neither case is it necessary for authorities to show that the scheme would allow the monopolist to recoup its losses.

[I]t does not follow from the case‑law of the Court that proof of the possibility of recoupment of losses suffered by the application, by an undertaking in a dominant position, of prices lower than a certain level of costs constitutes a necessary precondition to establishing that such a pricing policy is abusive. (France Télécom v Commission, 2009).
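To make the transatlantic contrast concrete, the two doctrines can be caricatured as decision rules. The sketch below is a deliberate simplification of the tests as described above—not a statement of the law—and every input is hypothetical:

```python
# Stylized sketch of the two predatory-pricing doctrines described above.
# This is an illustration, not legal analysis; all inputs are hypothetical.

def us_predation(price, incremental_cost, recoupment_likely):
    """Brooke Group-style test: pricing below an appropriate measure of
    cost AND a realistic prospect of recouping the losses."""
    return price < incremental_cost and recoupment_likely

def eu_predation(price, avg_variable_cost, avg_total_cost, elimination_plan):
    """EU-style test: below average variable cost is presumed predatory;
    between average variable and average total cost, predatory if part of
    a plan to eliminate a competitor. No recoupment showing is required."""
    if price < avg_variable_cost:
        return True
    return price < avg_total_cost and elimination_plan

# The same conduct can be lawful in the U.S. and abusive in the EU: a price
# between AVC and ATC, documents evincing an intent to push a rival out,
# but no realistic path to recoupment.
print(us_predation(price=8, incremental_cost=7, recoupment_likely=False))  # False
print(eu_predation(price=8, avg_variable_cost=7, avg_total_cost=10,
                   elimination_plan=True))                                  # True
```

The point of the caricature is the final clause: because recoupment drops out of the EU analysis, the same price cut that the U.S. rule treats as presumptively procompetitive can be condemned as abusive in Europe.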

This absence of a recoupment requirement has no basis in economic theory or evidence—not even in the “strategic” economic theory that arguably challenges the dominant Chicago School understanding of predatory pricing. Indeed, strategic predatory pricing still requires some form of recoupment, and the refutation of any convincing business justification offered in response. For example, in a 2017 piece for the Antitrust Law Journal, Steven Salop lays out the “raising rivals’ costs” analysis of predation and notes that recoupment still occurs, just at the same time as predation:

[T]he anticompetitive conditional pricing practice does not involve discrete predatory and recoupment periods, as in the case of classical predatory pricing. Instead, the recoupment occurs simultaneously with the conduct. This is because the monopolist is able to maintain its current monopoly power through the exclusionary conduct.

The case of predatory pricing illustrates a crucial distinction between European and American competition law. The recoupment requirement embodied in American antitrust law serves to differentiate aggressive pricing behavior that improves consumer welfare—because it leads to overall price decreases—from predatory pricing that reduces welfare with higher prices. It is, in other words, entirely focused on the welfare of consumers.

The European approach, by contrast, reflects structuralist considerations far removed from a concern for consumer welfare. Its underlying fear is that dominant companies could use aggressive pricing to engender more concentrated markets. It is simply presumed that these more concentrated markets are invariably detrimental to consumers. Both the Tetra Pak and France Télécom cases offer clear illustrations of the ECJ’s reasoning on this point:

[I]t would not be appropriate, in the circumstances of the present case, to require in addition proof that Tetra Pak had a realistic chance of recouping its losses. It must be possible to penalize predatory pricing whenever there is a risk that competitors will be eliminated… The aim pursued, which is to maintain undistorted competition, rules out waiting until such a strategy leads to the actual elimination of competitors. (Tetra Pak v Commission, 1996).

Similarly:

[T]he lack of any possibility of recoupment of losses is not sufficient to prevent the undertaking concerned reinforcing its dominant position, in particular, following the withdrawal from the market of one or a number of its competitors, so that the degree of competition existing on the market, already weakened precisely because of the presence of the undertaking concerned, is further reduced and customers suffer loss as a result of the limitation of the choices available to them.  (France Télécom v Commission, 2009).

In short, the European approach leaves less room to analyze the concrete effects of a given pricing scheme, leaving it more prone to false positives than the U.S. standard explicated in the Brooke Group decision. Worse still, the European approach ignores not only the benefits that consumers may derive from lower prices, but also the chilling effect that broad predatory pricing standards may exert on firms that would otherwise seek to use aggressive pricing schemes to attract consumers.

Refusals to Deal

U.S. and EU antitrust law also differ greatly when it comes to refusals to deal. While the United States has limited the ability of either enforcement authorities or rivals to bring such cases, EU competition law sets a far lower threshold for liability.

As Justice Scalia wrote in Trinko:

Aspen Skiing is at or near the outer boundary of §2 liability. The Court there found significance in the defendant’s decision to cease participation in a cooperative venture. The unilateral termination of a voluntary (and thus presumably profitable) course of dealing suggested a willingness to forsake short-term profits to achieve an anticompetitive end. (Verizon v Trinko, 2004).

This highlights two key features of American antitrust law with regard to refusals to deal. To start, U.S. antitrust law generally does not apply the “essential facilities” doctrine. Accordingly, in the absence of exceptional facts, upstream monopolists are rarely required to supply their product to downstream rivals, even if that supply is “essential” for effective competition in the downstream market. Moreover, as Justice Scalia observed in Trinko, the Aspen Skiing case appears to concern only those limited instances where a firm’s refusal to deal stems from the termination of a preexisting and profitable business relationship.

While even this is not likely the economically appropriate limitation on liability, its impetus—ensuring that liability is found only in situations where procompetitive explanations for the challenged conduct are unlikely—is completely appropriate for a regime concerned with minimizing the cost to consumers of erroneous enforcement decisions.

As in most areas of antitrust policy, EU competition law is much more interventionist. Refusals to deal are a central theme of EU enforcement efforts, and there is a relatively low threshold for liability.

In theory, for a refusal to deal to infringe EU competition law, it must meet a set of fairly stringent conditions: the input must be indispensable, the refusal must eliminate all competition in the downstream market, and there must be no objective justification for the refusal. Moreover, if the refusal to deal involves intellectual property, it must also prevent the emergence of a new product.

In practice, however, all of these conditions have been relaxed significantly by EU courts and the commission’s decisional practice. This is best evidenced by the lower court’s Microsoft ruling where, as John Vickers notes:

[T]he Court found easily in favor of the Commission on the IMS Health criteria, which it interpreted surprisingly elastically, and without relying on the special factors emphasized by the Commission. For example, to meet the “new product” condition it was unnecessary to identify a particular new product… thwarted by the refusal to supply but sufficient merely to show limitation of technical development in terms of less incentive for competitors to innovate.

EU competition law thus shows far less concern for its potential chilling effect on firms’ investments than does U.S. antitrust law.

Vertical Restraints

There are vast differences between U.S. and EU competition law relating to vertical restraints—that is, contractual restraints between firms that operate at different levels of the production process.

On the one hand, since the Supreme Court’s Leegin ruling in 2007, even price-related vertical restraints (such as resale price maintenance (RPM), under which a manufacturer can stipulate the prices at which retailers must sell its products) are assessed under the rule of reason in the United States. Some commentators have gone so far as to say that, in practice, U.S. case law on RPM almost amounts to per se legality.

Conversely, EU competition law treats RPM as severely as it treats cartels. Both RPM and cartels are considered to be restrictions of competition “by object”—the EU’s equivalent of a per se prohibition. This severe treatment also applies to non-price vertical restraints that tend to partition the European internal market.

Furthermore, in the Consten and Grundig ruling, the ECJ rejected the consequentialist, and economically grounded, principle that inter-brand competition is the appropriate framework to assess vertical restraints:

Although competition between producers is generally more noticeable than that between distributors of products of the same make, it does not thereby follow that an agreement tending to restrict the latter kind of competition should escape the prohibition of Article 85(1) merely because it might increase the former. (Consten SARL & Grundig-Verkaufs-GMBH v. Commission of the European Economic Community, 1966).

This treatment of vertical restrictions flies in the face of longstanding mainstream economic analysis of the subject. As Patrick Rey and Jean Tirole conclude:

Another major contribution of the earlier literature on vertical restraints is to have shown that per se illegality of such restraints has no economic foundations.

Unlike the EU, the U.S. Supreme Court in Leegin took account of the weight of the economic literature, and changed its approach to RPM to ensure that the law no longer simply precluded its arguable consumer benefits, writing: “Though each side of the debate can find sources to support its position, it suffices to say here that economics literature is replete with procompetitive justifications for a manufacturer’s use of resale price maintenance.” Further, the court found that the prior approach to resale price maintenance restraints “hinders competition and consumer welfare because manufacturers are forced to engage in second-best alternatives and because consumers are required to shoulder the increased expense of the inferior practices.”

The EU’s continued per se treatment of RPM, by contrast, strongly reflects its “precautionary principle” approach to antitrust. European regulators and courts readily condemn conduct that could conceivably injure consumers, even where such injury is, according to the best economic understanding, exceedingly unlikely. The U.S. approach, which rests on likelihood rather than mere possibility, is far less likely to condemn beneficial conduct erroneously.

Political Discretion in European Competition Law

EU competition law lacks a coherent analytical framework like that found in U.S. law’s reliance on the consumer welfare standard. The EU process is driven by a number of laterally equivalent—and sometimes mutually exclusive—goals, including industrial policy and the perceived need to counteract foreign state ownership and subsidies. Such a wide array of conflicting aims produces a lack of clarity for firms seeking to conduct business. Moreover, the discretion that attends this fluid arrangement of goals yields an even larger problem.

The Microsoft case illustrates this problem well. In Microsoft, the commission could have chosen to base its decision on various potential objectives. It notably chose to base its findings on the fact that Microsoft’s behavior reduced “consumer choice.”

The commission, in fact, discounted arguments that economic efficiency may lead to consumer welfare gains, because it determined “consumer choice” among media players was more important:

Another argument relating to reduced transaction costs consists in saying that the economies made by a tied sale of two products saves resources otherwise spent for maintaining a separate distribution system for the second product. These economies would then be passed on to customers who could save costs related to a second purchasing act, including selection and installation of the product. Irrespective of the accuracy of the assumption that distributive efficiency gains are necessarily passed on to consumers, such savings cannot possibly outweigh the distortion of competition in this case. This is because distribution costs in software licensing are insignificant; a copy of a software programme can be duplicated and distributed at no substantial effort. In contrast, the importance of consumer choice and innovation regarding applications such as media players is high. (Commission Decision No. COMP. 37792 (Microsoft)).

It may be true that tying the products in question was unnecessary. But merely dismissing this decision because distribution costs are near-zero is hardly an analytically satisfactory response. There are many more costs involved in creating and distributing complementary software than those associated with hosting and downloading. The commission also simply asserts that consumer choice among some arbitrary number of competing products is necessarily a benefit. This, too, is not necessarily true, and the decision’s implication that any marginal increase in choice is more valuable than any gains from product design or innovation is analytically incoherent.

The Court of First Instance was only too happy to give the commission a pass on this breezy analysis; it saw no objection to these findings. With little substantive reasoning of its own, the court fully endorsed the commission’s assessment:

As the Commission correctly observes (see paragraph 1130 above), by such an argument Microsoft is in fact claiming that the integration of Windows Media Player in Windows and the marketing of Windows in that form alone lead to the de facto standardisation of the Windows Media Player platform, which has beneficial effects on the market. Although, generally, standardisation may effectively present certain advantages, it cannot be allowed to be imposed unilaterally by an undertaking in a dominant position by means of tying.

The Court further notes that it cannot be ruled out that third parties will not want the de facto standardisation advocated by Microsoft but will prefer it if different platforms continue to compete, on the ground that that will stimulate innovation between the various platforms. (Microsoft Corp. v Commission, 2007)

Pointing to these conflicting effects of Microsoft’s bundling decision, without weighing either, is a weak basis to uphold the commission’s decision that consumer choice outweighs the benefits of standardization. Moreover, actions undertaken by other firms to enhance consumer choice at the expense of standardization are, on these terms, potentially just as problematic. The dividing line becomes solely which theory the commission prefers to pursue.

What such a practice does is vest the commission with immense discretionary power. Any given case sets up a “heads, I win; tails, you lose” situation in which defendants are easily outflanked by a commission that can change the rules of its analysis as it sees fit. Defendants can play only the cards that they are dealt. Accordingly, Microsoft could not successfully challenge a conclusion that its behavior harmed consumers’ choice by arguing that it improved consumer welfare, on net.

By selecting, in this instance, “consumer choice” as the standard to be judged, the commission was able to evade the constraints that might have been imposed by a more robust welfare standard. Thus, the commission can essentially pick and choose the objectives that best serve its interests in each case. This vastly enlarges the scope of potential antitrust liability, while also substantially decreasing the ability of firms to predict when their behavior may be viewed as problematic. It leads to what, in U.S. courts, would be regarded as an untenable risk of false positives that chill innovative behavior and create nearly unwinnable battles for targeted firms.

Questions concerning the role that economic theory should play in policy decisions are nothing new. Milton Friedman famously drew a distinction between “positive” and “normative” economics, notably arguing that theoretical models were valuable despite their unrealistic assumptions. Kenneth Arrow and Gerard Debreu’s highly theoretical work on General Equilibrium Theory is widely acknowledged as one of the most important achievements of modern economics.

But for all their intellectual value and academic merit, the use of models to inform policy decisions is not uncontroversial. There is indeed a long and unfortunate history of influential economic models turning out to be poor depictions (and predictors) of real-world outcomes.

This raises a key question: should policymakers use economic models to inform their decisions and, if so, how? This post uses the economics of externalities to illustrate both the virtues and pitfalls of economic modeling. Throughout economic history, externalities have routinely been cited to support claims of market failure and calls for government intervention. However, as explained below, these fears have frequently failed to withstand empirical scrutiny.

Today, similar models are touted to support government intervention in digital industries. Externalities are notably said to prevent consumers from switching between platforms, allegedly leading to unassailable barriers to entry and deficient venture-capital investment. Unfortunately, as explained below, the models that underpin these fears are highly abstracted and far removed from underlying market realities.

Ultimately, this post argues that, while models provide a powerful way of thinking about the world, naïvely transposing them to real-world settings is misguided. This is not to say that models are useless—quite the contrary. Indeed, “falsified” models can shed powerful light on economic behavior that would otherwise prove hard to understand.

Bees

Fears surrounding economic externalities are as old as modern economics. For example, in the 1950s, economists routinely cited bee pollination as a source of externalities and, ultimately, market failure.

The basic argument was straightforward: bees and orchards provide each other with positive externalities. Bees cross-pollinate flowers, and orchards contain vast amounts of nectar upon which bees feed, thus improving honey yields. Accordingly, several famous economists argued that there was a market failure: bees fly where they please, and farmers cannot prevent bees from feeding on their blossoming flowers—allegedly causing underinvestment in both orchards and hives. This led James Meade to conclude:

[T]he apple-farmer provides to the beekeeper some of his factors free of charge. The apple-farmer is paid less than the value of his marginal social net product, and the beekeeper receives more than the value of his marginal social net product.

A finding echoed by Francis Bator:

If, then, apple producers are unable to protect their equity in apple-nectar and markets do not impute to apple blossoms their correct shadow value, profit-maximizing decisions will fail correctly to allocate resources at the margin. There will be failure “by enforcement.” This is what I would call an ownership externality. It is essentially Meade’s “unpaid factor” case.
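In textbook terms—a standard formalization of the argument, not Meade’s or Bator’s own notation—the orchard owner is said to expand blossom acreage q only until his private marginal benefit equals marginal cost, ignoring the spillover value E(q) that the blossoms confer on the beekeeper:

MB_private(q*) = MC(q*), while MB_social(q) = MB_private(q) + E(q)

Because MB_social exceeds MB_private wherever E(q) > 0, the privately chosen q* falls short of the social optimum—hence the alleged underinvestment.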

It took more than 20 years and painstaking research by Steven Cheung to conclusively debunk these assertions. So how did economic agents overcome this “insurmountable” market failure?

The answer, it turns out, was extremely simple. While bees do fly where they please, the relative placement of beehives and orchards has a tremendous impact on both fruit and honey yields. This is partly because bees have a very limited mean foraging range (roughly 2-3km). This left economic agents with ample scope to prevent free-riding.

Using these natural sources of excludability, they built a web of complex agreements that internalize the symbiotic virtues of beehives and fruit orchards. To cite Steven Cheung’s research:

Pollination contracts usually include stipulations regarding the number and strength of the colonies, the rental fee per hive, the time of delivery and removal of hives, the protection of bees from pesticide sprays, and the strategic placing of hives. Apiary lease contracts differ from pollination contracts in two essential aspects. One is, predictably, that the amount of apiary rent seldom depends on the number of colonies, since the farmer is interested only in obtaining the rent per apiary offered by the highest bidder. Second, the amount of apiary rent is not necessarily fixed. Paid mostly in honey, it may vary according to either the current honey yield or the honey yield of the preceding year.

But what of neighboring orchards? Wouldn’t these entail a more complex externality (i.e., could one orchard free-ride on agreements concluded between other orchards and neighboring apiaries)? Apparently not:

Acknowledging the complication, beekeepers and farmers are quick to point out that a social rule, or custom of the orchards, takes the place of explicit contracting: during the pollination period the owner of an orchard either keeps bees himself or hires as many hives per area as are employed in neighboring orchards of the same type. One failing to comply would be rated as a “bad neighbor,” it is said, and could expect a number of inconveniences imposed on him by other orchard owners. This customary matching of hive densities involves the exchange of gifts of the same kind, which apparently entails lower transaction costs than would be incurred under explicit contracting, where farmers would have to negotiate and make money payments to one another for the bee spillover.

In short, not only did the bee/orchard externality model fail, but it failed to account for extremely obvious counter-evidence. Even a rapid flip through the Yellow Pages (or, today, a search on Google) would have revealed a vibrant market for bee pollination. In short, the bee externalities, at least as presented in economic textbooks, were merely an economic “fable.” Unfortunately, they would not be the last.

The Lighthouse

Lighthouses provide another cautionary tale. Indeed, Henry Sidgwick, A.C. Pigou, John Stuart Mill, and Paul Samuelson all cited the externalities involved in the provision of lighthouse services as a source of market failure.

Here, too, the problem was allegedly straightforward. A lighthouse cannot prevent ships from free-riding on its services when they sail by it (i.e., it is mostly impossible to determine whether a ship has paid fees and to turn off the lighthouse if that is not the case). Hence there can be no efficient market for light dues (lighthouses were seen as a “public good”). As Paul Samuelson famously put it:

Take our earlier case of a lighthouse to warn against rocks. Its beam helps everyone in sight. A businessman could not build it for a profit, since he cannot claim a price from each user. This certainly is the kind of activity that governments would naturally undertake.

He added that:

[E]ven if the operators were able—say, by radar reconnaissance—to claim a toll from every nearby user, that fact would not necessarily make it socially optimal for this service to be provided like a private good at a market-determined individual price. Why not? Because it costs society zero extra cost to let one extra ship use the service; hence any ships discouraged from those waters by the requirement to pay a positive price will represent a social economic loss—even if the price charged to all is no more than enough to pay the long-run expenses of the lighthouse.

More than a century after it was first mentioned in economics textbooks, Ronald Coase finally laid the lighthouse myth to rest—rebutting Samuelson’s second claim in the process.

What piece of evidence had eluded economists for all those years? As Coase observed, contemporary economists had somehow overlooked the fact that large parts of the British lighthouse system were privately operated, and had been for centuries:

[T]he right to operate a lighthouse and to levy tolls was granted to individuals by Acts of Parliament. The tolls were collected at the ports by agents (who might act for several lighthouses), who might be private individuals but were commonly customs officials. The toll varied with the lighthouse and ships paid a toll, varying with the size of the vessel, for each lighthouse passed. It was normally a rate per ton (say 1/4d or 1/2d) for each voyage. Later, books were published setting out the lighthouses passed on different voyages and the charges that would be made.

In other words, lighthouses used a simple physical feature to create “excludability” and prevent free-riding. The main reason ships require lighthouses is to avoid hitting rocks when they make their way to a port. By tying port fees and light dues, lighthouse owners—aided by mild government-enforced property rights—could easily earn a return on their investments, thus disproving the lighthouse free-riding myth.
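To get a sense of the magnitudes involved, consider a hypothetical voyage (the per-ton rates are those Coase reports; the vessel and route are invented purely for illustration). A 300-ton vessel passing four lighthouses at ½d per ton per light would owe

\[
300 \;\text{tons} \times \tfrac{1}{2}d \times 4 \;\text{lights} = 600d = 50s = £2\,10s
\]

per voyage, a modest sum relative to the overall costs of a voyage. This foreshadows Coase's rebuttal of Samuelson's output-effect argument, discussed below.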

Ultimately, this meant that a large share of the British lighthouse system was privately operated throughout the 19th century, and this share would presumably have been even larger had government-run “Trinity House” lighthouses not crowded out private investment:

The position in 1820 was that there were 24 lighthouses operated by Trinity House and 22 by private individuals or organizations. But many of the Trinity House lighthouses had not been built originally by them but had been acquired by purchase or as the result of the expiration of a lease.

Of course, this system was not perfect. Some ships (notably foreign ones that did not dock in the United Kingdom) might free-ride on this arrangement. It also entailed some level of market power. The ability to charge light dues meant that prices were higher than the “socially optimal” baseline of zero (the marginal cost of providing light is close to zero). That said, tying port fees and light dues might also have reduced double marginalization, to the benefit of sailors.

Samuelson was particularly wary of the market power that went hand in hand with the private provision of public goods, including lighthouses:

Being able to limit a public good’s consumption does not make it a true-blue private good. For what, after all, are the true marginal costs of having one extra family tune in on the program? They are literally zero. Why then prevent any family which would receive positive pleasure from tuning in on the program from doing so?

However, as Coase explained, light dues represented only a tiny fraction of a ship’s costs. In practice, they were thus unlikely to meaningfully affect market output:

[W]hat is the gain which Samuelson sees as coming from this change in the way in which the lighthouse service is financed? It is that some ships which are now discouraged from making a voyage to Britain because of the light dues would in future do so. As it happens, the form of the toll and the exemptions mean that for most ships the number of voyages will not be affected by the fact that light dues are paid. There may be some ships somewhere which are laid up or broken up because of the light dues, but the number cannot be great, if indeed there are any ships in this category.

Samuelson’s critique also falls prey to the Nirvana Fallacy pointed out by Harold Demsetz: markets might not be perfect, but neither is government intervention. Market power and imperfect appropriability are the two (paradoxical) pitfalls of the former; “white elephants,” underinvestment, and lack of competition (and the information it generates) tend to stem from the latter.

Which of these solutions is superior in each case is an empirical question that early economists simply failed to consider, assuming instead that market failure was systematic in any market that presented prima facie externalities. In other words, models were taken as gospel, without any circumspection about their relevance to real-world settings.

The Tragedy of the Commons

Externalities were also said to undermine the efficient use of “common pool resources,” such as grazing lands, common irrigation systems, and fisheries—resources where one agent’s use diminishes that of others, and where exclusion is either difficult or impossible.

The most famous formulation of this problem is Garrett Hardin’s highly influential (over 47,000 citations) “tragedy of the commons.” Hardin cited the example of multiple herdsmen occupying the same grazing ground:

The rational herdsman concludes that the only sensible course for him to pursue is to add another animal to his herd. And another; and another … But this is the conclusion reached by each and every rational herdsman sharing a commons. Therein is the tragedy. Each man is locked into a system that compels him to increase his herd without limit—in a world that is limited. Ruin is the destination toward which all men rush, each pursuing his own best interest in a society that believes in the freedom of the commons.

In more technical terms, each economic agent purportedly exerts an unpriced negative externality on the others, thus leading to the premature depletion of common pool resources. Hardin extended this reasoning to other problems, such as pollution and allegations of global overpopulation.
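The underlying incentive problem can be made concrete with a stylized numerical sketch (my own illustration of the logic, not a model from Hardin's paper): each herder captures the full value of an extra animal, while the grazing cost it imposes is spread across everyone using the commons.

    # Stylized sketch of Hardin's logic: an extra animal's value accrues
    # entirely to its owner, while the grazing cost it imposes is shared
    # among all herders using the commons.

    def private_gain(value_per_animal: float, grazing_cost: float, n_herders: int) -> float:
        """Net gain to one herder from adding one animal to the shared pasture."""
        return value_per_animal - grazing_cost / n_herders

    # An animal worth 1.0 that imposes 1.2 in grazing costs is socially
    # wasteful (1.0 - 1.2 < 0), yet privately profitable once the cost is
    # split among 10 herders.
    print(private_gain(1.0, 1.2, 10))  # 0.88: privately profitable, so the herd grows
    print(private_gain(1.0, 1.2, 1))   # approx. -0.2: a sole owner would stop

A sole owner, who internalizes the full cost, stops where the group of herders does not. Ostrom's work, discussed below, shows how real communities often rewrite these payoffs through rules and norms.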

Although Hardin hardly documented any real-world occurrences of this so-called tragedy, his policy prescriptions were unequivocal:

The most important aspect of necessity that we must now recognize, is the necessity of abandoning the commons in breeding. No technical solution can rescue us from the misery of overpopulation. Freedom to breed will bring ruin to all.

As with many other theoretical externalities, empirical scrutiny revealed that these fears were greatly overblown. In her Nobel-winning work, Elinor Ostrom showed that economic agents often find ways to markedly mitigate these potential externalities. For example, mountain villages often implement rules and norms that limit the use of grazing grounds and wooded areas. Likewise, landowners across the world often set up “irrigation communities” that prevent agents from overusing water.

Along similar lines, Julian Morris and I conjecture that informal arrangements and reputational effects might mitigate opportunistic behavior in the standard-essential patent industry.

These bottom-up solutions are certainly not perfect. Many commons institutions fail. Elinor Ostrom, for example, documented several problematic fisheries, groundwater basins, and forests, although it is worth noting that government intervention was sometimes behind these failures. To cite but one example:

Several scholars have documented what occurred when the Government of Nepal passed the “Private Forest Nationalization Act” […]. Whereas the law was officially proclaimed to “protect, manage and conserve the forest for the benefit of the entire country”, it actually disrupted previously established communal control over the local forests. Messerschmidt (1986, p.458) reports what happened immediately after the law came into effect:

Nepalese villagers began freeriding — systematically overexploiting their forest resources on a large scale.

In any case, the question is not so much whether private institutions fail, but whether they do so more often than government intervention, be it regulation or property rights. In short, the “tragedy of the commons” is ultimately an empirical question: which works better in each case, government intervention, propertization, or emergent rules and norms?

More broadly, the key lesson is that it is wrong to blindly apply models while ignoring real-world outcomes. As Elinor Ostrom herself put it:

The intellectual trap in relying entirely on models to provide the foundation for policy analysis is that scholars then presume that they are omniscient observers able to comprehend the essentials of how complex, dynamic systems work by creating stylized descriptions of some aspects of those systems.

Dvorak Keyboards

In 1985, Paul David published an influential paper arguing that market failures undermined competition between the QWERTY and Dvorak keyboard layouts. This version of history then became a dominant narrative in the field of network economics, echoed in works by Joseph Farrell and Garth Saloner, as well as by Jean Tirole.

The basic claim was that QWERTY users’ reluctance to switch toward the putatively superior Dvorak layout exerted a negative externality on the rest of the ecosystem (and a positive externality on other QWERTY users), thus preventing the adoption of a more efficient standard. As Paul David put it:

Although the initial lead acquired by QWERTY through its association with the Remington was quantitatively very slender, when magnified by expectations it may well have been quite sufficient to guarantee that the industry eventually would lock in to a de facto QWERTY standard. […]

Competition in the absence of perfect futures markets drove the industry prematurely into standardization on the wrong system — where decentralized decision making subsequently has sufficed to hold it.

Unfortunately, many of the above papers paid little to no attention to actual market conditions in the typewriter and keyboard-layout industries. Years later, Stan Liebowitz and Stephen Margolis undertook a detailed analysis of the keyboard-layout market. They almost entirely rejected the notion that QWERTY prevailed despite being the inferior standard:

Yet there are many aspects of the QWERTY-versus-Dvorak fable that do not survive scrutiny. First, the claim that Dvorak is a better keyboard is supported only by evidence that is both scant and suspect. Second, studies in the ergonomics literature find no significant advantage for Dvorak that can be deemed scientifically reliable. Third, the competition among producers of typewriters, out of which the standard emerged, was far more vigorous than is commonly reported. Fourth, there were far more typing contests than just the single Cincinnati contest. These contests provided ample opportunity to demonstrate the superiority of alternative keyboard arrangements. That QWERTY survived significant challenges early in the history of typewriting demonstrates that it is at least among the reasonably fit, even if not the fittest that can be imagined.

In short, there was little to no evidence supporting the view that QWERTY inefficiently prevailed because of network effects. The falsification of this narrative also weakens broader claims that network effects systematically lead to either excess momentum or excess inertia in standardization. Indeed, it is tempting to characterize all network industries with heavily skewed market shares as resulting from market failure. Yet the QWERTY/Dvorak story suggests that such a conclusion would be premature.

Killzones, Zoom, and TikTok

If you are still reading at this point, you might think that contemporary scholars would know better than to base calls for policy intervention on theoretical externalities. Alas, nothing could be further from the truth.

For instance, a recent paper by Sai Kamepalli, Raghuram Rajan and Luigi Zingales conjectures that the interplay between mergers and network externalities discourages the adoption of superior independent platforms:

If techies expect two platforms to merge, they will be reluctant to pay the switching costs and adopt the new platform early on, unless the new platform significantly outperforms the incumbent one. After all, they know that if the entering platform’s technology is a net improvement over the existing technology, it will be adopted by the incumbent after merger, with new features melded with old features so that the techies’ adjustment costs are minimized. Thus, the prospect of a merger will dissuade many techies from trying the new technology.

Although this key behavioral assumption drives the results of the theoretical model, the paper presents no evidence that it holds in real-world settings. Admittedly, the paper does present evidence of reduced venture-capital investment after mergers involving large tech firms. But even on its own terms, this evidence simply does not support the authors’ behavioral assumption.

And this is no isolated example. Over the past couple of years, several scholars have called for more muscular antitrust intervention in networked industries. A common theme is that network externalities, switching costs, and data-related increasing returns to scale lead to inefficient consumer lock-in, thus raising barriers to entry for potential rivals (here, here, here).

But there are also countless counterexamples, where firms have easily overcome potential barriers to entry and network externalities, ultimately disrupting incumbents.

Zoom is one of the most salient instances. As I have written previously:

To get to where it is today, Zoom had to compete against long-established firms with vast client bases and far deeper pockets. These include the likes of Microsoft, Cisco, and Google. Further complicating matters, the video communications market exhibits some prima facie traits that are typically associated with the existence of network effects.

Along similar lines, Geoffrey Manne and Alec Stapp have put forward a multitude of other examples, including the demise of Yahoo, the disruption of early instant-messaging applications and websites, and MySpace’s rapid decline. In all these cases, outcomes do not match the predictions of theoretical models.

More recently, TikTok’s rapid rise offers perhaps the greatest example of a potentially superior social-networking platform taking significant market share away from incumbents. According to the Financial Times, TikTok’s video-sharing capabilities and its powerful algorithm are the most likely explanations for its success.

While these developments certainly do not disprove network effects theory, they eviscerate the common belief in antitrust circles that superior rivals are unable to overthrow incumbents in digital markets. Of course, this will not always be the case. As in the previous examples, the question is ultimately one of comparing institutions—i.e., do markets lead to more or fewer error costs than government intervention? Yet this question is systematically omitted from most policy discussions.

In Conclusion

My argument is not that models are without value. To the contrary, framing problems in economic terms—and simplifying them in ways that make them cognizable—enables scholars and policymakers to better understand where market failures might arise, and how these problems can be anticipated and solved by private actors. In other words, models alone cannot tell us that markets will fail, but they can direct inquiries and help us to understand why firms behave the way they do, and why markets (including digital ones) are organized in a given way.

In that respect, both the theoretical and empirical research cited throughout this post offer valuable insights for today’s policymakers.

For a start, as Ronald Coase argued in what is perhaps his most famous work, externalities (and market failure more generally) are a function of transaction costs. When these are low (relative to the value of a good), market failures are unlikely. This is perhaps clearest in the “Fable of the Bees” example. Given bees’ short foraging range, there were ultimately few real-world obstacles to writing contracts that internalized the mutual benefits of bees and orchards.

Perhaps more importantly, economic research sheds light on behavior that might otherwise be seen as anticompetitive. The rules and norms that bind farming/beekeeping communities, as well as users of common pool resources, could easily be analyzed as a cartel by naïve antitrust authorities. Yet externality theory suggests they play a key role in preventing market failure.

Along similar lines, mergers and acquisitions (as well as vertical integration, more generally) can reduce opportunism and other externalities that might otherwise undermine collaboration between firms (here, here and here). And much the same is true of certain types of unilateral behavior. Tying video games to consoles (and pricing the console below cost) can help entrants overcome network externalities that might otherwise shield incumbents. Likewise, Google tying its proprietary apps to the open source Android operating system arguably enabled it to earn a return on its investments, thus overcoming the externality problem that plagues open source software.

All of this raises a prospect that deserves far more attention than it currently receives in policy circles: authorities around the world are seeking to regulate the tech space. Draft legislation has notably been introduced in the United States, European Union and the United Kingdom. These draft bills would all make it harder for large tech firms to implement various economic hierarchies, including mergers and certain contractual arrangements.

This is highly paradoxical. If digital markets are indeed plagued by network externalities and high transaction costs, as critics allege, then preventing firms from adopting complex hierarchies—which have traditionally been seen as a way to solve externalities—is just as likely to exacerbate those problems as to solve them. In other words, like the economists of old cited above, today’s policymakers appear to be focusing too heavily on simple models that predict market failure, and far too little on the mechanisms that firms have put in place to thrive within this complex environment.

The bigger picture is that far more circumspection is required when using theoretical models in real-world policy settings. Indeed, as Harold Demsetz famously put it, the purpose of normative economics is not so much to identify market failures as to help policymakers determine which of several alternative institutions will deliver the best outcomes for consumers:

This nirvana approach differs considerably from a comparative institution approach in which the relevant choice is between alternative real institutional arrangements. In practice, those who adopt the nirvana viewpoint seek to discover discrepancies between the ideal and the real and if discrepancies are found, they deduce that the real is inefficient. Users of the comparative institution approach attempt to assess which alternative real institutional arrangement seems best able to cope with the economic problem […].

The Democratic leadership of the House Judiciary Committee has leaked the approach it plans to take to revise U.S. antitrust law and enforcement, with a particular focus on digital platforms.

Broadly speaking, the bills would: raise fees for larger mergers and increase appropriations to the FTC and DOJ; require data portability and interoperability; declare that large platforms can’t own businesses that compete with other businesses that use the platform; effectively ban large platforms from making any acquisitions; and generally declare that large platforms cannot preference their own products or services. 

All of these are ideas that have been discussed before. They are very much in line with the EU’s approach to competition, which places more regulation-like burdens on big businesses, and which is introducing a Digital Markets Act that mirrors the Democrats’ proposals. Some Republicans are reportedly supportive of the proposals, which is surprising, since they would mean giving broad, discretionary powers to antitrust authorities that are controlled by Democrats who take an expansive view of antitrust enforcement as a way to achieve their other social and political goals. The proposals may also prove unpopular with consumers if, for example, they meant that popular features like integrating Maps into relevant Google Search results were prohibited.

The multi-bill approach here suggests that the committee is trying to throw as much at the wall as possible to see what sticks. It may reflect a lack of confidence among the proposers in their ability to get their proposals through wholesale, especially given that Amy Klobuchar’s CALERA bill in the Senate offers an alternative that, while still highly interventionist, does not impose ex ante regulation of the Internet the way these proposals do.

In general, the bills are misguided for three main reasons. 

One, they seek to make digital platforms into narrow conduits for other firms to operate on, ignoring the value created by platforms curating their own services by, for example, creating quality controls on entry (as Apple does on its App Store) or by integrating their services with related products (like, say, Google adding events from Gmail to users’ Google Calendars). 

Two, they ignore the procompetitive effects of digital platforms extending into each other’s markets and competing with each other there, in ways that often lead to far more intense competition—and better outcomes for consumers—than if the only firms that could compete with the incumbent platform were small startups.

Three, they ignore the importance of incentives for innovation. Platforms invest in new and better products when they can make money from doing so, and limiting their ability to do that means weakened incentives to innovate. Startups and their founders and investors are driven, in part, by the prospect of being acquired, often by the platforms themselves. Making those acquisitions more difficult, or even impossible, means removing one of the key ways startup founders can exit their firms, and hence one of the key rewards and incentives for starting an innovative new business. 

For more, our “Joint Submission of Antitrust Economists, Legal Scholars, and Practitioners” set out why many of the House Democrats’ assumptions about the state of the economy and antitrust enforcement were mistaken. And my post, “Buck’s “Third Way”: A Different Road to the Same Destination”, argued that House Republicans like Ken Buck were misguided in believing they could support some of the proposals and avoid the massive regulatory oversight that they said they rejected.

Platform Anti-Monopoly Act 

The flagship bill, introduced by Antitrust Subcommittee Chairman David Cicilline (D-R.I.), establishes a definition of “covered platform” used by several of the other bills. The measures would apply to platforms that have at least 50 million U.S.-based monthly active users, that have a market capitalization of more than $600 billion, and that are deemed a “critical trading partner” with the ability to restrict or impede the access that a “dependent business” has to its users or customers.

Cicilline’s bill would bar these covered platforms from being able to promote their own products and services over the products and services of competitors who use the platform. It also defines a number of other practices that would be regarded as discriminatory, including: 

  • Restricting or impeding “dependent businesses” from being able to access the platform or its software on the same terms as the platform’s own lines of business;
  • Conditioning access or status on purchasing other products or services from the platform; 
  • Using user data to support the platform’s own products in ways not extended to competitors; 
  • Restricting the platform’s commercial users from using or accessing data generated on the platform from their own customers;
  • Restricting platform users from uninstalling software pre-installed on the platform;
  • Restricting platform users from providing links to facilitate business off of the platform;
  • Preferencing the platform’s own products or services in search results or rankings;
  • Interfering with how a dependent business prices its products; 
  • Impeding a dependent business’ users from connecting to services or products that compete with those offered by the platform; and
  • Retaliating against users who raise concerns with law enforcement about potential violations of the act.

On a basic level, these would prohibit lots of behavior that is benign and that can improve the quality of digital services for users. Apple pre-installing a Weather app on the iPhone would, for example, run afoul of these rules, and the rules as proposed could prohibit iPhones from coming with pre-installed apps at all. Instead, users would have to download each app manually, if indeed Apple were allowed to include the App Store itself pre-installed on the iPhone, given that it competes with other would-be app stores.

Apart from the obvious reduction in the quality of services and convenience for users that this would involve, this kind of conduct (known as “self-preferencing”) is usually procompetitive. For example, self-preferencing allows platforms to compete with one another by using their strength in one market to enter a different one; Google’s Shopping results in the Search page increase the competition that Amazon faces, because it presents consumers with a convenient alternative when they’re shopping online for products. Similarly, Amazon’s purchase of the video-game streaming service Twitch, and the self-preferencing it does to encourage Amazon customers to use Twitch and support content creators on that platform, strengthens the competition that rivals like YouTube face. 

It also helps innovation, because it gives firms a reason to invest in services that would otherwise be unprofitable for them. Google invests in Android, and gives much of it away for free, because it can bundle Google Search into the OS, and make money from that. If Google could not self-preference Google Search on Android, the open source business model simply wouldn’t work—it wouldn’t be able to make money from Android, and would have to charge for it in other ways that may be less profitable and hence give it less reason to invest in the operating system. 

This behavior can also increase innovation by the competitors of these companies, both by prompting them to improve their products (as, for example, Google Android did with Microsoft’s mobile operating system offerings) and by growing the size of the customer base for products of this kind. For example, video games published by console manufacturers (like Nintendo’s Zelda and Mario games) are often blockbusters that grow the overall size of the user base for the consoles, increasing demand for third-party titles as well.

For more, check out Geoffrey Manne’s “Against the Vertical Discrimination Presumption” and Dirk Auer’s piece “On the Origin of Platforms: An Evolutionary Perspective”.

Ending Platform Monopolies Act 

Sponsored by Rep. Pramila Jayapal (D-Wash.), this bill would make it illegal for covered platforms to control lines of business that pose “irreconcilable conflicts of interest,” enforced through civil litigation powers granted to the Federal Trade Commission (FTC) and the U.S. Justice Department (DOJ).

Specifically, the bill targets lines of business that create “a substantial incentive” for the platform to advantage its own products or services over those of competitors that use the platform, or to exclude or disadvantage competing businesses from using the platform. The FTC and DOJ could potentially order that platforms divest lines of business that violate the act.

This bill targets conduct similar to that addressed by the previous bill, but through the forced separation of different lines of business. It also appears to go even further, seemingly implying that companies like Google could not even develop services like Google Maps or Chrome, because their existence would create such “substantial incentives” to self-preference them over the products of their competitors.

Apart from the straightforward loss of innovation and product development this would involve, requiring every tech company to be narrowly focused on a single line of business would substantially entrench Big Tech incumbents, because it would make it impossible for them to extend into adjacent markets to compete with one another. For example, Apple could not develop a search engine to compete with Google under these rules, and Amazon would be forced to sell its video-streaming services that compete with Netflix and YouTube.

For more, check out Geoffrey Manne’s written testimony to the House Antitrust Subcommittee and “Platform Self-Preferencing Can Be Good for Consumers and Even Competitors” by Geoffrey and me. 

Platform Competition and Opportunity Act

Introduced by Rep. Hakeem Jeffries (D-N.Y.), this bill would bar covered platforms from making essentially any acquisitions at all. To be excluded from the ban on acquisitions, the platform would have to present “clear and convincing evidence” that the acquired business does not compete with the platform for any product or service, does not pose a potential competitive threat to the platform, and would not in any way enhance or help maintain the acquiring platform’s market position. 

The two main ways that founders and investors can make a return on a successful startup are to float the company in an IPO or to have it acquired by another business. The latter, acquisition, is extremely important. Between 2008 and 2019, 90 percent of U.S. startup exits happened through acquisition. In a recent survey, half of current startup executives said they aimed to be acquired. One study found that countries that made it easier for firms to be taken over saw a 40-50 percent increase in VC activity, and that U.S. states that made acquisitions harder saw a 27 percent decrease in VC investment deals.

So this proposal would probably reduce investment in U.S. startups, since it makes it more difficult for them to be acquired, and would thereby reduce innovation. It would also reduce inter-platform competition by banning deals that allow firms to move into new markets, like the acquisition of Beats that helped Apple to build a Spotify competitor, or the deals that helped Google, Microsoft, and Amazon build cloud-computing services that all compete with each other. It could also reduce the competition faced by old industries, by preventing tech companies from buying firms that enable them to move into new markets—like Amazon’s acquisitions of health-care companies that it has used to build a health-care offering. Even Walmart’s acquisition of Jet.com, which it has used to build an Amazon competitor, could have been banned under this law if Walmart had had a higher market cap at the time.

For more, check out Dirk Auer’s piece “Facebook and the Pros and Cons of Ex Post Merger Reviews” and my piece “Cracking down on mergers would leave us all worse off”. 

ACCESS Act

The Augmenting Compatibility and Competition by Enabling Service Switching (ACCESS) Act, sponsored by Rep. Mary Gay Scanlon (D-Pa.), would establish data portability and interoperability requirements for platforms. 

Under the terms of the legislation, covered platforms would be required to allow third parties to transfer data to their users or, with the user’s consent, to a competing business. It also would require platforms to facilitate compatible and interoperable communications with competing businesses. The bill directs the FTC to establish technical committees to promulgate the standards for portability and interoperability.

Data portability and interoperability involve trade-offs in terms of security and usability, and overseeing them can be extremely costly and difficult. In security terms, interoperability requirements prevent companies from using closed systems to protect users from hostile third parties. Mandatory openness means increasing—sometimes substantially—the risk of data breaches and leaks. In practice, that could mean users’ private messages or photos being leaked more frequently, or activity on a social media page that one user considers “their” private data, but that “belongs” to another user under the terms of use, being exported and publicized by that other user.

It can also make digital services buggier and less reliable, by requiring that they be built in a more “open” way that may be more prone to unanticipated software mismatches. Compare Windows and iOS: Windows is far more interoperable with third-party software than iOS is, but tends to be less stable as a result, and users often prefer the closed, stable system.

Interoperability requirements also entail ongoing regulatory oversight, to make sure data is being provided to third parties reliably. It’s difficult to build an app around another company’s data without assurance that the data will be available when users want it. For a requirement as broad as this bill’s, that could mean setting up quite a large new de facto regulator. 

In the UK, Open Banking (an interoperability requirement imposed on British retail banks) has suffered from significant service outages, and targets a level of uptime that many developers complain is too low for them to build products around. Nor has Open Banking yet led to any obvious competition benefits.

For more, check out Gus Hurwitz’s piece “Portable Social Media Aren’t Like Portable Phone Numbers” and my piece “Why Data Interoperability Is Harder Than It Looks: The Open Banking Experience”.

Merger Filing Fee Modernization Act

This bill, which mirrors language in the Endless Frontier Act recently passed by the U.S. Senate, would significantly raise filing fees for the largest mergers. In place of the current cap of $280,000 for mergers valued at more than $500 million, the bill—sponsored by Rep. Joe Neguse (D-Colo.)—would assess fees of $2.25 million for mergers valued at more than $5 billion; $800,000 for those valued between $2 billion and $5 billion; and $400,000 for those between $1 billion and $2 billion.

Smaller mergers would actually see their filing fees cut: from $280,000 to $250,000 for those between $500 million and $1 billion; from $125,000 to $100,000 for those between $161.5 million and $500 million; and from $45,000 to $30,000 for those less than $161.5 million. 
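Taken together, the proposal amounts to a simple step function of transaction value. Here is a minimal sketch of the schedule as described above (the function name is mine, and the bill's treatment of values falling exactly on a threshold may differ):

    # Proposed merger filing-fee schedule, as summarized above (illustrative;
    # the bill's precise threshold boundaries may differ).

    def proposed_filing_fee(deal_value_usd: float) -> int:
        """Return the proposed filing fee for a transaction of a given value."""
        m = 1_000_000
        if deal_value_usd < 161.5 * m:
            return 30_000
        if deal_value_usd < 500 * m:
            return 100_000
        if deal_value_usd < 1_000 * m:
            return 250_000
        if deal_value_usd < 2_000 * m:
            return 400_000
        if deal_value_usd < 5_000 * m:
            return 800_000
        return 2_250_000

    print(proposed_filing_fee(750 * 1_000_000))     # 250000 (down from $280,000 today)
    print(proposed_filing_fee(10_000 * 1_000_000))  # 2250000 (up from the $280,000 cap)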

In addition, the bill would appropriate $418 million to the FTC and $252 million to the DOJ’s Antitrust Division for Fiscal Year 2022. Most people in the antitrust world are generally supportive of more funding for the FTC and DOJ, although whether this is actually good depends on how the money is spent at those agencies.

It’s hard to object if the money goes toward deepening the agencies’ capacities and knowledge: hiring and retaining higher-quality staff with salaries that are more competitive with those offered by the private sector, and funding greater efforts to study the effects of the antitrust laws and past cases on the economy. If it goes toward broadening the agencies’ activities, enabling them to pursue a more aggressive enforcement agenda and to administer whatever of the above proposals make it into law, then it could be very harmful.

For more, check out my post “Buck’s “Third Way”: A Different Road to the Same Destination” and Thom Lambert’s post “Bad Blood at the FTC”.

Politico has released a cache of confidential Federal Trade Commission (FTC) documents in connection with a series of articles on the commission’s antitrust probe into Google Search a decade ago. The headline of the first piece in the series argues the FTC “fumbled the future” by failing to follow through on staff recommendations to pursue antitrust intervention against the company. 

But while the leaked documents shed interesting light on the inner workings of the FTC, they do very little to substantiate the case that the FTC dropped the ball when the commissioners voted unanimously not to bring an action against Google.

Drawn primarily from memos by the FTC’s lawyers, the Politico report purports to uncover key revelations that undermine the FTC’s decision not to sue Google. None of the revelations, however, provide evidence that Google’s behavior actually harmed consumers.

The report’s overriding claim—and the one most consistently forwarded by antitrust activists on Twitter—is that FTC commissioners wrongly sided with the agency’s economists (who cautioned against intervention) rather than its lawyers (who tenuously recommended very limited intervention). 

Indeed, the overarching narrative is that the lawyers knew what was coming and the economists took wildly inaccurate positions that turned out to be completely off the mark:

But the FTC’s economists successfully argued against suing the company, and the agency’s staff experts made a series of predictions that would fail to match where the online world was headed:

— They saw only “limited potential for growth” in ads that track users across the web — now the backbone of Google parent company Alphabet’s $182.5 billion in annual revenue.

— They expected consumers to continue relying mainly on computers to search for information. Today, about 62 percent of those queries take place on mobile phones and tablets, nearly all of which use Google’s search engine as the default.

— They thought rivals like Microsoft, Mozilla or Amazon would offer viable competition to Google in the market for the software that runs smartphones. Instead, nearly all U.S. smartphones run on Google’s Android and Apple’s iOS.

— They underestimated Google’s market share, a heft that gave it power over advertisers as well as companies like Yelp and Tripadvisor that rely on search results for traffic.

The report thus asserts that:

The agency ultimately voted against taking action, saying changes Google made to its search algorithm gave consumers better results and therefore didn’t unfairly harm competitors.

That conclusion underplays what the FTC’s staff found during the probe. In 312 pages of documents, the vast majority never publicly released, staffers outlined evidence that Google had taken numerous steps to ensure it would continue to dominate the market — including emerging arenas such as mobile search and targeted advertising. [EMPHASIS ADDED]

What really emerges from the leaked memos, however, is analysis by both the FTC’s lawyers and economists infused with a healthy dose of humility. There were strong political incentives to bring a case. As one of us noted upon the FTC’s closing of the investigation: “It’s hard to imagine an agency under more pressure, from more quarters (including the Hill), to bring a case around search.” Yet FTC staff and commissioners resisted that pressure, because prediction is hard. 

Ironically, the very prediction errors that the agency’s staff cautioned against are now being held against them. Yet the claims that these errors (especially the economists’) systematically cut in one direction (i.e., against enforcement) and that all of their predictions were wrong are both wide of the mark. 

Decisions Under Uncertainty

In seeking to make an example out of the FTC economists’ inaccurate predictions, critics ignore that antitrust investigations in dynamic markets always involve a tremendous amount of uncertainty; false predictions are the norm. Accordingly, the key challenge for policymakers is not so much to predict correctly, but to minimize the impact of incorrect predictions.

Seen in this light, the FTC economists’ memo is far from the laissez-faire manifesto that critics make it out to be. Instead, it shows agency officials wrestling with uncertain market outcomes, and choosing a course of action under the assumption the predictions they make might indeed be wrong. 

Consider the following passage from FTC economist Ken Heyer’s memo:

The great American philosopher Yogi Berra once famously remarked “Predicting is difficult, especially about the future.” How right he was. And yet predicting, and making decisions based on those predictions, is what we are charged with doing. Ignoring the potential problem is not an option. So I will be reasonably clear about my own tentative conclusions and recommendation, recognizing that reasonable people, perhaps applying a somewhat different standard, may disagree. My recommendation derives from my read of the available evidence, combined with the standard I personally find appropriate to apply to Commission intervention. [EMPHASIS ADDED]

In other words, contrary to what many critics have claimed, it simply is not the case that the FTC’s economists based their recommendations on bullish predictions about the future that ultimately failed to transpire. Instead, they merely recognized that, in a dynamic and unpredictable environment, antitrust intervention requires both a clear-cut theory of anticompetitive harm and a reasonable probability that remedies can improve consumer welfare. According to the economists, those conditions were absent with respect to Google Search.

Perhaps more importantly, it is worth asking why the economists’ erroneous predictions matter at all. Do critics believe that developments the economists missed warrant a different normative stance today?

In that respect, it is worth noting that the economists’ skepticism appeared to have rested first and foremost on the speculative nature of the harms alleged and the difficulty associated with designing appropriate remedies. And yet, if anything, these two concerns appear even more salient today. 

Indeed, the remedies imposed against Google in the EU have not delivered the outcomes that enforcers expected (here and here). This could either be because the remedies were insufficient or because Google’s market position was not due to anticompetitive conduct. Similarly, there is still no convincing economic theory or empirical research to support the notion that exclusive pre-installation and self-preferencing by incumbents harm consumers, and a great deal of reason to think they benefit them (see, e.g., our discussions of the issue here and here). 

Against this backdrop, criticism of the FTC economists appears to be driven more by a prior assumption that intervention is necessary—and that it was and is disingenuous to think otherwise—than evidence that erroneous predictions materially affected the outcome of the proceedings.

To take one example, the fact that ad tracking grew faster than the FTC economists believed it would is no less consistent with vigorous competition—and Google providing a superior product—than with anticompetitive conduct on Google’s part. The same applies to the growth of mobile operating systems. Ditto the fact that no rival has managed to dislodge Google in its most important markets. 

In short, not only were the economist memos informed by the very prediction difficulties that critics are now pointing to, but critics have not shown that any of the staff’s (inevitably) faulty predictions warranted a different normative outcome.

Putting Erroneous Predictions in Context

So what were these faulty predictions, and how important were they? Politico asserts that “the FTC’s economists successfully argued against suing the company, and the agency’s staff experts made a series of predictions that would fail to match where the online world was headed,” tying this to the FTC’s failure to intervene against Google over “tactics that European regulators and the U.S. Justice Department would later label antitrust violations.” The clear message is that the current actions are presumptively valid, and that the FTC’s economists thwarted earlier intervention based on faulty analysis.

But it is far from clear that these faulty predictions would have justified taking a tougher stance against Google. One key question for antitrust authorities is whether they can be reasonably certain that more efficient competitors will be unable to dislodge an incumbent. This assessment is necessarily forward-looking. Framed this way, greater market uncertainty (for instance, because policymakers are dealing with dynamic markets) usually cuts against antitrust intervention.

This does not entirely absolve the FTC economists who made the faulty predictions. But it does suggest the right question is not whether the economists made mistakes, but whether virtually everyone did so. The latter would be evidence of uncertainty, and thus weigh against antitrust intervention.

In that respect, it is worth noting that the staff who recommended that the FTC intervene also misjudged the future of digital markets. For example, while Politico surmises that the FTC “underestimated Google’s market share, a heft that gave it power over advertisers as well as companies like Yelp and Tripadvisor that rely on search results for traffic,” there is a case to be made that the FTC overestimated this power. If anything, Google’s continued growth has opened new niches in the online advertising space.

Pinterest provides a fitting example; despite relying heavily on Google for traffic, its ad-funded service has witnessed significant growth. The same is true of other vertical search engines like Airbnb, Booking.com, and Zillow. While we cannot know the counterfactual, the vertical search industry has certainly not been decimated by Google’s “monopoly”; quite the opposite. Unsurprisingly, this has coincided with a significant decrease in the cost of online advertising, and the growth of online advertising relative to other forms.

Politico asserts not only that the economists’ market share and market power calculations were wrong, but that the lawyers knew better:

The economists, relying on data from the market analytics firm Comscore, found that Google had only limited impact. They estimated that between 10 and 20 percent of traffic to those types of sites generally came from the search engine.

FTC attorneys, though, used numbers provided by Yelp and found that 92 percent of users visited local review sites from Google. For shopping sites like eBay and TheFind, the referral rate from Google was between 67 and 73 percent.

This compares apples and oranges, or maybe oranges and grapefruit. The economists’ data, from Comscore, applied to vertical search overall. They explicitly noted that shares for particular sites could be much higher or lower: for comparison shopping, for example, “ranging from 56% to less than 10%.” This, of course, highlights a problem with the data provided by Yelp, et al.: it concerns only the websites of companies complaining about Google, not the overall flow of traffic for vertical search.
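A simple weighted average shows how both sets of figures can be true at once. Using purely illustrative weights, suppose the complaining sites (with their 92 percent Google-referral share) account for 10 percent of all vertical-search traffic, while the rest of the sector averages 8 percent:

\[
0.10 \times 0.92 + 0.90 \times 0.08 \approx 0.16
\]

The aggregate lands within Comscore's 10 to 20 percent range even though the complainants' individual shares are far higher, which is exactly the selection problem with relying on data supplied by the complainants themselves.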

But the more important point is that none of the data discussed in the memos represents the overall flow of traffic for vertical search. Take Yelp, for example. According to the lawyers’ memo, 92 percent of Yelp searches were referred from Google. Only, that’s not true. We know it’s not true because, as Yelp CEO Jeremy Stoppelman pointed out around this time in Yelp’s 2012 Q2 earnings call:

When you consider that 40% of our searches come from mobile apps, there is quite a bit of un-monetized mobile traffic that we expect to unlock in the near future.

The numbers being analyzed by the FTC staff were apparently limited to referrals to Yelp’s website from browsers. But is there any reason to think that is the relevant market, or the relevant measure of customer access? Certainly there is nothing in the staff memos to suggest they considered the full scope of the market very carefully here. Indeed, the footnote in the lawyers’ memo presenting the traffic data is offered in support of this claim:

Vertical websites, such as comparison shopping and local websites, are heavily dependent on Google’s web search results to reach users. Thus, Google is in the unique position of being able to “make or break any web-based business.”

It’s plausible that vertical search traffic is “heavily dependent” on Google Search, but the numbers offered in support of that simply ignore the (then) 40 percent of traffic that Yelp acquired through its own mobile app, with no Google involvement at all. In any case, it is also notable that, while there are still somewhat fewer app users than web users (although the number has consistently increased), Yelp’s app users view significantly more pages than its website users do — 10 times as many in 2015, for example.
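A rough adjustment using the figures quoted above illustrates the size of the omission (assuming, for the sake of the arithmetic, that the 92 percent figure applies only to non-app traffic):

\[
0.92 \times (1 - 0.40) \approx 0.55
\]

On that assumption, Google referrals would account for roughly 55 percent of all Yelp traffic, not 92 percent.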

Also noteworthy is that, for whatever speculative harm Google might be able to visit on the company, at the time of the FTC’s analysis Yelp’s local ad revenue was consistently increasing — by 89% in Q3 2012. And that was without any ad revenue coming from its app (display ads arrived on Yelp’s mobile app in Q1 2013, a few months after the staff memos were written and just after the FTC closed its Google Search investigation). 

In short, the search-engine industry is extremely dynamic and unpredictable. Contrary to what many have surmised from the FTC staff memo leaks, this cuts against antitrust intervention, not in favor of it.

The FTC Lawyers’ Weak Case for Prosecuting Google

At the same time, although not discussed by Politico, the lawyers’ memo also contains errors, suggesting that arguments for intervention were also (inevitably) subject to erroneous prediction.

Among other things, the FTC attorneys’ memo argued that large upfront investments were required to develop cutting-edge algorithms, and that these effectively shielded Google from competition. The memo cites the following as a barrier to entry:

A search engine requires algorithmic technology that enables it to search the Internet, retrieve and organize information, index billions of regularly changing web pages, and return relevant results instantaneously that satisfy the consumer’s inquiry. Developing such algorithms requires highly specialized personnel with high levels of training and knowledge in engineering, economics, mathematics, sciences, and statistical analysis.

If there are barriers to entry in the search-engine industry, algorithms do not seem to be the source. While their market shares may be smaller than Google’s, rival search engines like DuckDuckGo and Bing have been able to enter and gain traction; it is difficult to say that algorithmic technology has proven a barrier to entry. It may be hard to do well, but it certainly has not proved an impediment to new firms entering and developing workable and successful products. Indeed, some extremely successful companies have entered into similar advertising markets on the backs of complex algorithms, notably Instagram, Snapchat, and TikTok. All of these compete with Google for advertising dollars.

The FTC’s legal staff also failed to see that Google would face serious competition in the rapidly growing voice assistant market. In other words, even its search-engine “moat” is far less impregnable than it might at first appear.

Moreover, as Ben Thompson argues in his Stratechery newsletter: 

The Staff memo is completely wrong too, at least in terms of the potential for their proposed remedies to lead to any real change in today’s market. This gets back to why the fundamental premise of the Politico article, along with much of the antitrust chatter in Washington, misses the point: Google is dominant because consumers like it.

This difficulty was deftly highlighted by Heyer’s memo:

If the perceived problems here can be solved only through a draconian remedy of this sort, or perhaps through a remedy that eliminates Google’s legitimately obtained market power (and thus its ability to “do evil”), I believe the remedy would be disproportionate to the violation and that its costs would likely exceed its benefits. Conversely, if a remedy well short of this seems likely to prove ineffective, a remedy would be undesirable for that reason. In brief, I do not see a feasible remedy for the vertical conduct that would be both appropriate and effective, and which would not also be very costly to implement and to police. [EMPHASIS ADDED]

Of course, we now know that this turned out to be a huge issue with the EU’s competition cases against Google. The remedies in both the EU’s Google Shopping and Android decisions were severely criticized by rival firms and consumer-defense organizations (here and here), but were ultimately upheld, in part because even the European Commission likely saw more forceful alternatives as disproportionate.

And in the few places where the legal staff concluded that Google’s conduct may have caused harm, there is good reason to think that their analysis was flawed.

Google’s ‘revenue-sharing’ agreements

It should be noted that neither the lawyers nor the economists at the FTC were particularly bullish on bringing suit against Google. In most areas of the investigation, neither recommended that the commission pursue a case. But one of the most interesting revelations from the recent leaks is that FTC lawyers did advise the commission’s leadership to sue Google over revenue-sharing agreements that called for it to pay Apple and other carriers and manufacturers to pre-install its search bar on mobile devices:

FTC staff urged the agency’s five commissioners to sue Google for signing exclusive contracts with Apple and the major wireless carriers that made sure the company’s search engine came pre-installed on smartphones.

The lawyers’ stance is surprising, and, despite actions subsequently brought by the EU and DOJ on similar claims, a difficult one to countenance. 

To a first approximation, this behavior is precisely what antitrust law seeks to promote: we want companies to compete aggressively to attract consumers. This conclusion is in no way altered when competition is “for the market” (in this case, firms bidding for exclusive placement of their search engines) rather than “in the market” (i.e., equally placed search engines competing for eyeballs).

Competition for exclusive placement has several important benefits. For a start, revenue-sharing agreements effectively subsidize consumers’ mobile device purchases. As Brian Albrecht aptly puts it:

This payment from Google means that Apple can lower its price to better compete for consumers. This is standard; some of the payment from Google to Apple will be passed through to consumers in the form of lower prices.

This finding is not new. For instance, Ronald Coase famously argued that the Federal Communications Commission (FCC) was wrong to ban the broadcasting industry’s equivalent of revenue-sharing agreements, so-called payola:

[I]f the playing of a record by a radio station increases the sales of that record, it is both natural and desirable that there should be a charge for this. If this is not done by the station and payola is not allowed, it is inevitable that more resources will be employed in the production and distribution of records, without any gain to consumers, with the result that the real income of the community will tend to decline. In addition, the prohibition of payola may result in worse record programs, will tend to lessen competition, and will involve additional expenditures for regulation. The gain which the ban is thought to bring is to make the purchasing decisions of record buyers more efficient by eliminating “deception.” It seems improbable to me that this problematical gain will offset the undoubted losses which flow from the ban on Payola.

Applying this logic to Google Search, it is clear that a ban on revenue-sharing agreements would merely lead both Google and its competitors to attract consumers via alternative means. For Google, this might involve “complete” vertical integration into the mobile phone market, rather than the open-licensing model that underpins the Android ecosystem. Valuable specialization may be lost in the process.

Moreover, from Apple’s standpoint, Google’s revenue-sharing agreements are profitable only to the extent that consumers actually like Google’s products. If it turns out they don’t, Google’s payments to Apple may be outweighed by lower iPhone sales. It is thus unlikely that these agreements significantly undermined users’ experience. To the contrary, Apple’s testimony before the European Commission suggests that “exclusive” placement of Google’s search engine was mostly driven by consumer preferences (as the FTC economists’ memo points out):

Apple would not offer simultaneous installation of competing search or mapping applications. Apple’s focus is offering its customers the best products out of the box while allowing them to make choices after purchase. In many countries, Google offers the best product or service … Apple believes that offering additional search boxes on its web browsing software would confuse users and detract from Safari’s aesthetic. Too many choices lead to consumer confusion and greatly affect the ‘out of the box’ experience of Apple products.

Similarly, Kevin Murphy and Benjamin Klein have shown that exclusive contracts intensify competition for distribution. In other words, absent theories of platform envelopment that are arguably inapplicable here, competition for exclusive placement would lead competing search engines to up their bids, ultimately lowering the price of mobile devices for consumers.

Indeed, this revenue-sharing model was likely essential to spur the development of Android in the first place. Without this prominent placement of Google Search on Android devices (notably thanks to revenue-sharing agreements with original equipment manufacturers), Google would likely have been unable to monetize the investment it made in the open source—and thus freely distributed—Android operating system. 

In short, Politico and the FTC legal staff do little to show that Google’s revenue-sharing payments excluded rivals that were, in fact, as efficient. In other words, Bing and Yahoo’s failure to gain traction may simply be the result of inferior products and cost structures. Critics thus fail to show that Google’s behavior harmed consumers, which is the touchstone of antitrust enforcement.

Self-preferencing

Critics also point to the finding that FTC leadership declined to bring suit against Google for preferencing its own vertical search services (information that had already been partially leaked by the Wall Street Journal in 2015). Politico’s framing implies this was a mistake:

When Google adopted one algorithm change in 2011, rival sites saw significant drops in traffic. Amazon told the FTC that it saw a 35 percent drop in traffic from the comparison-shopping sites that used to send it customers

The focus on this claim is somewhat surprising. Even the leaked FTC legal staff memo found this theory of harm had little chance of standing up in court:

Staff has investigated whether Google has unlawfully preferenced its own content over that of rivals, while simultaneously demoting rival websites…. 

…Although it is a close call, we do not recommend that the Commission proceed on this cause of action because the case law is not favorable to our theory, which is premised on anticompetitive product design, and in any event, Google’s efficiency justifications are strong. Most importantly, Google can legitimately claim that at least part of the conduct at issue improves its product and benefits users. [EMPHASIS ADDED]

More importantly, as one of us has argued elsewhere, the underlying problem lies not with Google, but with a standard asset-specificity trap:

A content provider that makes itself dependent upon another company for distribution (or vice versa, of course) takes a significant risk. Although it may benefit from greater access to users, it places itself at the mercy of the other — or at least faces great difficulty (and great cost) adapting to unanticipated, crucial changes in distribution over which it has no control…. 

…It was entirely predictable, and should have been expected, that Google’s algorithm would evolve. It was also entirely predictable that it would evolve in ways that could diminish or even tank Foundem’s traffic. As one online marketing/SEO expert puts it: On average, Google makes about 500 algorithm changes per year. 500!….

…In the absence of an explicit agreement, should Google be required to make decisions that protect a dependent company’s “asset-specific” investments, thus encouraging others to take the same, excessive risk? 

Even if consumers happily visited rival websites when they were higher-ranked and traffic subsequently plummeted when Google updated its algorithm, that drop in traffic does not amount to evidence of misconduct. To hold otherwise would be to grant these rivals a virtual entitlement to the state of affairs that exists at any given point in time. 

Indeed, there is good reason to believe Google’s decision to favor its own content over that of other sites is procompetitive. Beyond determining and ensuring relevance, Google surely has the prerogative to compete vigorously and decide how to design its products to keep up with a changing market. In this case, that means designing, developing, and offering its own content in ways that partially displace the original “ten blue links” design of its search results page and instead offer its own answers to users’ queries.

Competitor Harm Is Not an Indicator of the Need for Intervention

Some of the other information revealed by the leak is even more tangential, such as the claim that the FTC ignored complaints from Google’s rivals:

Amazon and Facebook privately complained to the FTC about Google’s conduct, saying their business suffered because of the company’s search bias, scraping of content from rival sites and restrictions on advertisers’ use of competing search engines. 

Amazon said it was so concerned about the prospect of Google monopolizing the search advertising business that it willingly sacrificed revenue by making ad deals aimed at keeping Microsoft’s Bing and Yahoo’s search engine afloat.

But complaints from rivals are at least as likely to stem from vigorous competition as from anticompetitive exclusion. This goes to a core principle of antitrust enforcement: antitrust law seeks to protect competition and consumer welfare, not rivals. Competition will always lead to winners and losers. Antitrust law protects this process and (at least theoretically) ensures that rivals cannot manipulate enforcers to safeguard their economic rents. 

This explains why Frank Easterbrook—in his seminal work on “The Limits of Antitrust”—argued that enforcers should be highly suspicious of complaints lodged by rivals:

Antitrust litigation is attractive as a method of raising rivals’ costs because of the asymmetrical structure of incentives…. 

…One line worth drawing is between suits by rivals and suits by consumers. Business rivals have an interest in higher prices, while consumers seek lower prices. Business rivals seek to raise the costs of production, while consumers have the opposite interest…. 

…They [antitrust enforcers] therefore should treat suits by horizontal competitors with the utmost suspicion. They should dismiss outright some categories of litigation between rivals and subject all such suits to additional scrutiny.

Google’s competitors spent millions pressuring the FTC to bring a case against the company. But why should it be a failing for the FTC to resist such pressure? Indeed, as then-commissioner Tom Rosch admonished in an interview following the closing of the case:

They [Google’s competitors] can darn well bring [a case] as a private antitrust action if they think their ox is being gored instead of free-riding on the government to achieve the same result.

Not that they would likely win such a case. Google’s introduction of specialized shopping results (via the Google Shopping box) likely enabled several retailers to bypass the Amazon platform, thus increasing competition in the retail industry. Although this may have temporarily reduced Amazon’s traffic and revenue (Amazon’s sales have grown dramatically since then), it is exactly the outcome that antitrust laws are designed to protect.

Conclusion

When all is said and done, Politico’s revelations provide a rare glimpse into the complex dynamics within the FTC, which many wrongly imagine to be a monolithic agency. Put simply, the FTC’s commissioners, lawyers, and economists often disagree vehemently about the appropriate course of conduct. This is a good thing. As in many other walks of life, having a market for ideas is a sure way to foster sound decision making.

But in the final analysis, what the revelations do not show is that the FTC’s market for ideas failed consumers a decade ago when it declined to bring an antitrust suit against Google. They thus do little to cement the case for antitrust intervention—whether a decade ago, or today.

Critics of big tech companies like Google and Amazon are increasingly focused on the supposed evils of “self-preferencing.” The term refers to situations in which digital platforms like Amazon Marketplace or Google Search, which connect competing services with potential customers or users, also offer (and sometimes prioritize) their own in-house products and services.

The objection, raised by several members and witnesses during a Feb. 25 hearing of the House Judiciary Committee’s antitrust subcommittee, is that allowing a site’s owner special competitive advantages is unfair to the third parties that use those sites. Is it fair, for example, for Amazon to use the data it gathers from its service to design new products if third-party merchants can’t access the same data? This seemingly intuitive complaint was the basis for the European Commission’s landmark case against Google.

But we cannot assume that something is bad for competition just because it is bad for certain competitors. A lot of unambiguously procompetitive behavior, like cutting prices, also tends to make life difficult for competitors. The same is true when a digital platform provides a service that is better than alternatives provided by the site’s third-party sellers. 

It’s probably true that Amazon’s access to customer search and purchase data can help it spot products it can undercut with its own versions, driving down prices. But that’s not unusual; most retailers do this, many to a much greater extent than Amazon. For example, you can buy AmazonBasics batteries for less than half the price of branded alternatives, and they’re pretty good.

There’s no doubt this is unpleasant for merchants that have to compete with these offerings. But it is also no different from having to compete with more efficient rivals who have lower costs or better insight into consumer demand. Copying products and seeking ways to offer them with better features or at a lower price, which critics of self-preferencing highlight as a particular concern, has always been a fundamental part of market competition—indeed, it is the primary way competition occurs in most markets. 

Store-branded versions of iPhone cables and Nespresso pods are certainly inconvenient for those companies, but they offer consumers cheaper alternatives. Where such copying may be problematic (say, by deterring investments in product innovations), the law awards and enforces patents and copyrights to reward novel discoveries and creative works, and trademarks to protect brand identity. But where no intellectual property is at stake, this is simply how competition works.

The fundamental question is “what benefits consumers?” Services like Yelp object that they cannot compete with Google when Google embeds its Google Maps box in Google Search results, while Yelp cannot do the same. But for users, the Maps box adds valuable information to the results page, making it easier to get what they want. Google is not making Yelp worse by making its own product better. Should it have to refrain from offering services that benefit its users because doing so might make competing products comparatively less attractive?

Self-preferencing also enables platforms to promote their offerings in other markets, which is often how large tech companies compete with each other. Amazon, for example, has a photo-hosting app that competes with Google Photos and Apple’s iCloud, and it recently emailed its customers to promote that app. That is undoubtedly self-preferencing, since other services cannot market themselves to Amazon’s customers like this, but if it makes customers aware of an alternative they might not otherwise have considered, that is good for competition.

This kind of behavior also allows companies to invest in offering services inexpensively, or for free, that they intend to monetize by preferencing their other, more profitable products. For example, Google invests in Android’s operating system and gives much of it away for free precisely because it can encourage Android customers to use the profitable Google Search service. Despite claims to the contrary, it is difficult to see this sort of cross-subsidy as harmful to consumers.

Self-preferencing can even be good for competing services, including third-party merchants. In many cases, it expands the size of their potential customer base. For example, blockbuster video games released by Sony and Microsoft increase demand for games by other publishers because they increase the total number of people who buy PlayStations and Xboxes. This effect is clear on Amazon’s Marketplace, which has grown enormously for third-party merchants even as Amazon has increased the number of its own store-brand products on the site. By making the Amazon Marketplace more attractive, those store-brand products benefit third-party sellers as well.

All platforms are open or closed to varying degrees. Retail “platforms,” for example, exist on a spectrum on which Craigslist is more open and neutral than eBay, which is more so than Amazon, which is itself relatively more so than, say, Walmart.com. Each position on this spectrum offers its own benefits and trade-offs for consumers. Indeed, some customers’ biggest complaint against Amazon is that it is too open, filled with third parties who leave fake reviews, offer counterfeit products, or have shoddy returns policies. Part of the role of the site is to try to correct those problems by making better rules, excluding certain sellers, or just by offering similar options directly. 

Regulators and legislators often act as if the more open and neutral, the better, but customers have repeatedly shown that they often prefer less open, less neutral options. And critics of self-preferencing frequently find themselves arguing against behavior that improves consumer outcomes, because it hurts competitors. But that is the nature of competition: what’s good for consumers is frequently bad for competitors. If we have to choose, it’s consumers who should always come first.

In our first post, we discussed the weaknesses of an important theoretical underpinning of efforts to expand vertical merger enforcement (including, possibly, the proposed guidelines): the contract/merger equivalency assumption.

In this post we discuss the implications of that assumption and some of the errors it leads to — including some incorporated into the proposed guidelines.

There is no theoretical or empirical justification for more vertical enforcement

Tim Brennan makes a fantastic and regularly overlooked point in his post: If it’s true, as many claim (see, e.g., Steve Salop), that firms can generally realize vertical efficiencies by contracting instead of merging, then it’s also true that they can realize anticompetitive outcomes the same way. While efficiencies have to be merger-specific in order to be relevant to the analysis, so too do harms. But where the assumption is that the outcomes of integration can generally be achieved by the “less-restrictive” means of contracting, that would apply as well to any potential harms, thus negating the transaction-specificity required for enforcement. As Dennis Carlton notes:

There is a symmetry between an evaluation of the harms and benefits of vertical integration. Each must be merger-specific to matter in an evaluation of the merger’s effects…. If transaction costs are low, then vertical integration creates neither benefits nor harms, since everything can be achieved by contract. If transaction costs exist to prevent the achievement of a benefit but not a harm (or vice-versa), then that must be accounted for in a calculation of the overall effect of a vertical merger. (Dennis Carlton, Transaction Costs and Competition Policy)

Of course, this also means that those (like us) who believe that it is not so easy to accomplish by contract what may be accomplished by merger must also consider the possibility that a proposed merger may be anticompetitive because it overcomes an impediment to achieving anticompetitive goals via contract.

There’s one important caveat, though: The potential harms that could arise from a vertical merger are the same as those that would be cognizable under Section 2 of the Sherman Act. Indeed, for a vertical merger to cause harm, it must be expected to result in conduct that would otherwise be illegal under Section 2. This means there is always the possibility of a second bite at the apple when it comes to thwarting anticompetitive conduct. 

The same cannot be said of procompetitive conduct that can arise only through merger: if a merger is erroneously prohibited before it even happens, there is no second chance to realize those benefits.

Interestingly, Salop himself — the foremost advocate today for enhanced vertical merger enforcement — recognizes the issue raised by Brennan: 

Exclusionary harms and certain efficiency benefits also might be achieved with vertical contracts and agreements without the need for a vertical merger…. It [] might be argued that the absence of premerger exclusionary contracts implies that the merging firms lack the incentive to engage in conduct that would lead to harmful exclusionary effects. But anticompetitive vertical contracts may face the same types of impediments as procompetitive ones, and may also be deterred by potential Section 1 enforcement. Neither of these arguments thus justify a more or less intrusive vertical merger policy generally. Rather, they are factors that should be considered in analyzing individual mergers. (Salop & Culley, Potential Competitive Effects of Vertical Mergers)

In the same article, however, Salop also points to the reasons why it should be considered insufficient to leave enforcement to Sections 1 and 2, instead of addressing them at their incipiency under Clayton Section 7:

While relying solely on post-merger enforcement might have appealing simplicity, it obscures several key facts that favor immediate enforcement under Section 7.

  • The benefit of HSR review is to prevent the delays and remedial issues inherent in after-the-fact enforcement….
  • There may be severe problems in remedying the concern….
  • Section 1 and Section 2 legal standards are more permissive than Section 7 standards….
  • The agencies might well argue that anticompetitive post-merger conduct was caused by the merger agreement, so that it would be covered by Section 7….

All in all, failure to address these kinds of issues in the context of merger review could lead to significant consumer harm and underdeterrence.

The points are (mostly) well-taken. But they also essentially amount to a preference for more and tougher enforcement against vertical restraints than the judicial interpretations of Sections 1 & 2 currently countenance — a preference, in other words, for the use of Section 7 to bolster enforcement against vertical restraints of any sort (whether contractual or structural).

The problem with that, as others have pointed out in this symposium (see, e.g., Nuechterlein; Werden & Froeb; Wright, et al.), is that there’s simply no empirical basis for adopting a tougher stance against vertical restraints in the first place. Over and over again the empirical research shows that vertical restraints and vertical mergers are unlikely to cause anticompetitive harm: 

In reviewing this literature, two features immediately stand out: First, there is a paucity of support for the proposition that vertical restraints/vertical integration are likely to harm consumers. . . . Second, a far greater number of studies found that the use of vertical restraints in the particular context studied improved welfare unambiguously. (Cooper, et al, Vertical Restrictions and Antitrust Policy: What About the Evidence?)

[W]e did not have a particular conclusion in mind when we began to collect the evidence, and we… are therefore somewhat surprised at what the weight of the evidence is telling us. It says that, under most circumstances, profit-maximizing, vertical-integration decisions are efficient, not just from the firms’ but also from the consumers’ points of view…. We therefore conclude that, faced with a vertical arrangement, the burden of evidence should be placed on competition authorities to demonstrate that that arrangement is harmful before the practice is attacked. (Francine Lafontaine & Margaret Slade, Vertical Integration and Firm Boundaries: The Evidence)

[Table 1 in this paper] indicates that voluntarily adopted restraints are associated with lower costs, greater consumption, higher stock returns, and better chances of survival. (Daniel O’Brien, The Antitrust Treatment of Vertical Restraints: Beyond the Possibility Theorems)

In sum, these papers from 2009-2018 continue to support the conclusions from Lafontaine & Slade (2007) and Cooper et al. (2005) that consumers mostly benefit from vertical integration. While vertical integration can certainly foreclose rivals in theory, there is only limited empirical evidence supporting that finding in real markets. (GAI Comment on Vertical Mergers)

To the extent that the proposed guidelines countenance heightened enforcement relative to the status quo, they fall prey to the same defect. And while it is unclear from the fairly terse guidelines whether this is animating them, the removal of language present in the 1984 Non-Horizontal Merger Guidelines acknowledging the relative lack of harm from vertical mergers (“[a]lthough non-horizontal mergers are less likely than horizontal mergers to create competitive problems…”) is concerning.  

The shortcomings of orthodox economics and static formal analysis

There is also a further reason to think that vertical merger enforcement may be more likely to thwart procompetitive than anticompetitive arrangements relative to the status quo ante (i.e., where arrangements among vertically related firms are governed by contract): our lack of knowledge about the effects of market structure and firm organization on innovation and dynamic competition, and the relative hostility to nonstandard contracting, including vertical integration:

[T]he literature addressing how market structure affects innovation (and vice versa) in the end reveals an ambiguous relationship in which factors unrelated to competition play an important role. (Katz & Shelanski, Mergers and Innovation)

The fixation on the equivalency of the form of vertical integration (i.e., merger versus contract) is likely to lead enforcers to focus on static price and cost effects, and miss the dynamic organizational and informational effects that lead to unexpected, increased innovation across and within firms. 

In the hands of Oliver Williamson, this means that understanding firms in the real world entails taking an organization theory approach, in contrast to the “orthodox” economic perspective:

The lens of contract approach to the study of economic organization is partly complementary but also partly rival to the orthodox [neoclassical economic] lens of choice. Specifically, whereas the latter focuses on simple market exchange, the lens of contract is predominantly concerned with the complex contracts. Among the major differences is that non‐standard and unfamiliar contractual practices and organizational structures that orthodoxy interprets as manifestations of monopoly are often perceived to serve economizing purposes under the lens of contract. A major reason for these and other differences is that orthodoxy is dismissive of organization theory whereas organization theory provides conceptual foundations for the lens of contract. (emphasis added)

We are thus more likely to miss the benefits when mergers solve market inefficiencies, and more likely to see harm when they impose static costs — even if the apparent costs actually represent a move from less-efficient contractual arrangements to more-efficient integration.

The competition that takes place in the real world and between various groups ultimately depends upon the institution of private contracts, many of which, including the firm itself, are nonstandard. Innovation includes the discovery of new organizational forms and the application of old forms to new contexts. Such contracts prevent or attenuate market failure, moving the market toward what economists would deem a more competitive result. Indeed, as Professor Coase pointed out, many markets deemed “perfectly competitive” are in fact the end result of complex contracts limiting rivalry between competitors. This contractual competition cannot produce perfect results — no human institution ever can. Nonetheless, the result is superior to that which would obtain in a (real) world without nonstandard contracting. These contracts do not depend upon the creation or enhancement of market power and thus do not produce the evils against which antitrust law is directed. (Alan Meese, Price Theory Competition & the Rule of Reason)

Or, as Oliver Williamson more succinctly puts it:

[There is a] rebuttable presumption that nonstandard forms of contracting have efficiency purposes. (Oliver Williamson, The Economic Institutions of Capitalism)

The pinched focus of the guidelines on narrow market definition misses the bigger picture of dynamic competition over time

The proposed guidelines (and the theories of harm undergirding them) focus upon indicia of market power that may not be accurate if assessed in more realistic markets or over more relevant timeframes, and, if applied too literally, may bias enforcement against mergers with dynamic-innovation benefits but static-competition costs.  

Similarly, the proposed guidelines’ enumeration of potential efficiencies doesn’t really begin to cover the categories implicated by the organization of enterprise around dynamic considerations.

The proposed guidelines’ efficiencies section notes that:

Vertical mergers bring together assets used at different levels in the supply chain to make a final product. A single firm able to coordinate how these assets are used may be able to streamline production, inventory management, or distribution, or create innovative products in ways that would have been hard to achieve through arm’s length contracts. (emphasis added)

But it is not clear that any of these categories encompasses organizational decisions made to facilitate the coordination of production and commercialization when they are dependent upon intangible assets.

As Thomas Jorde and David Teece write:

For innovations to be commercialized, the economic system must somehow assemble all the relevant complementary assets and create a dynamically-efficient interactive system of learning and information exchange. The necessary complementary assets can conceivably be assembled by either administrative or market processes, as when the innovator simply licenses the technology to firms that already own or are willing to create the relevant assets. These organizational choices have received scant attention in the context of innovation. Indeed, the serial model relies on an implicit belief that arm’s-length contracts between unaffiliated firms in the vertical chain from research to customer will suffice to commercialize technology. In particular, there has been little consideration of how complex contractual arrangements among firms can assist commercialization — that is, translating R&D capability into profitable new products and processes….

* * *

But in reality, the market for know-how is riddled with imperfections. Simple unilateral contracts where technology is sold for cash are unlikely to be efficient. Complex bilateral and multilateral contracts, internal organization, or various hybrid structures are often required to shore up obvious market failures and create procompetitive efficiencies. (Jorde & Teece, Rule of Reason Analysis of Horizontal Arrangements: Agreements Designed to Advance Innovation and Commercialize Technology) (emphasis added)

When IP protection for a given set of valuable pieces of “know-how” is strong — easily defendable, unique patents, for example — firms can rely on property rights to contract efficiently with vertical buyers and sellers. But in cases where the valuable know-how is less easily defended as IP — e.g., business-process innovation, managerial experience, distributed knowledge, corporate culture, and the like — the ability to partially vertically integrate through contract becomes more difficult, if not impossible.

Perhaps employing these assets is part of what is meant in the draft guidelines by “streamline.” But the very mention of innovation only in the technological context of product innovation is at least some indication that organizational innovation is not clearly contemplated.  

This is a significant lacuna. The impact of each organizational form on knowledge transfers creates a particularly strong division between integration and contract. As Enghin Atalay, Ali Hortaçsu & Chad Syverson point out:

That vertical integration is often about transfers of intangible inputs rather than physical ones may seem unusual at first glance. However, as observed by Arrow (1975) and Teece (1982), it is precisely in the transfer of nonphysical knowledge inputs that the market, with its associated contractual framework, is most likely to fail to be a viable substitute for the firm. Moreover, many theories of the firm, including the four “elemental” theories as identified by Gibbons (2005), do not explicitly invoke physical input transfers in their explanations for vertical integration. (Enghin Atalay, et al., Vertical Integration and Input Flows) (emphasis added)

There is a large economics and organization theory literature discussing how organizations are structured with respect to these sorts of intangible assets. And the upshot is that, while we start — not end, as some would have it — with the Coasian insight that firm boundaries are necessarily a function of production processes and not a hard limit, we quickly come to realize that it is emphatically not the case that integration-via-contract and integration-via-merger are always, or perhaps even often, viable substitutes.

Conclusion

The contract/merger equivalency assumption, coupled with a “least-restrictive alternative” logic that favors contract over merger, puts a thumb on the scale against vertical mergers. While the proposed guidelines as currently drafted do not necessarily portend the inflexible, formalistic application of this logic, they offer little to guide enforcers or courts away from the assumption in the important (and perhaps numerous) cases where it is unwarranted.   

[TOTM: The following is part of a symposium by TOTM guests and authors on the 2020 Vertical Merger Guidelines. The entire series of posts is available here.

This post is authored by Geoffrey A. Manne (President & Founder, ICLE; Distinguished Fellow, Northwestern University Center on Law, Business, and Economics); and Kristian Stout (Associate Director, ICLE).]

As many in the symposium have noted — and as was repeatedly noted during the FTC’s Hearings on Competition and Consumer Protection in the 21st Century — there is widespread dissatisfaction with the 1984 Non-Horizontal Merger Guidelines.

Although it is doubtless correct that the 1984 guidelines don’t reflect the latest economic knowledge, it is by no means clear that this has actually been a problem — or that a new set of guidelines wouldn’t create even greater problems. Indeed, as others have noted in this symposium, there is a great deal of ambiguity in the proposed guidelines that could lead either to uncertainty as to how the agencies will exercise their discretion or, more troublingly, could lead courts to take seriously speculative theories of harm.

We can do little better in expressing our reservations about whether new guidelines are needed than did the current Chairman of the FTC, Joe Simons, writing on this very blog in a symposium on what became the 2010 Horizontal Merger Guidelines. In a post entitled Revisions to the Merger Guidelines: Above All, Do No Harm, Simons writes:

My sense is that there is no need to revise the DOJ/FTC Horizontal Merger Guidelines, with one exception…. The current guidelines lay out the general framework quite well and any change in language relative to that framework are likely to create more confusion rather than less. Based on my own experience, the business community has had a good sense of how the agencies conduct merger analysis…. If, however, the current administration intends to materially change the way merger analysis is conducted at the agencies, then perhaps greater revision makes more sense. But even then, perhaps the best approach is to try out some of the contemplated changes (i.e. in actual investigations) and publicize them in speeches and the like before memorializing them in a document that is likely to have some substantial permanence to it.

Wise words. Unless, of course, “the current [FTC] intends to materially change the way [vertical] merger analysis is conducted.” But the draft guidelines don’t really appear to portend a substantial change, and in several ways they pretty accurately reflect agency practice.

What we want to draw attention to, however, is an implicit underpinning of the draft guidelines that we believe the agencies should clearly disavow (or at least explain more clearly the complexity surrounding it): the extent and implications of the presumed functional equivalence of vertical integration by contract and by merger — the contract/merger equivalency assumption.

Vertical mergers and their discontents

The contract/merger equivalency assumption has been gaining traction with antitrust scholars, but it is perhaps most clearly represented in some of Steve Salop’s work. Salop generally believes that vertical merger enforcement should be heightened. Among his criticisms of current enforcement is his contention that efficiencies that can be realized by merger can often also be achieved by contract. As he discussed during his keynote presentation at last year’s FTC hearing on vertical mergers:

And, finally, the key policy issue is the issue is not about whether or not there are efficiencies; the issue is whether the efficiencies are merger-specific. As I pointed out before, Coase stressed that you can get vertical integration by contract. Very often, you can achieve the vertical efficiencies if they occur, but with contracts rather than having to merge.

And later, in the discussion following his talk:

If there is vertical integration by contract… it meant you could get all the efficiencies from vertical integration with a contract. You did not actually need the vertical integration. 

Salop thus argues that because the existence of a “contract solution” to firm problems can often generate the same sorts of efficiencies as when firms opt to merge, enforcers and courts should generally adopt a presumption against vertical mergers relative to contracting:

Coase’s door swings both ways: Efficiencies often can be achieved by vertical contracts, without the potential anticompetitive harms from merger

In that vertical restraints are characterized as “just” vertical integration “by contract,” then claimed efficiencies in problematical mergers might be achieved with non-merger contracts that do not raise the same anticompetitive concerns. (emphasis in original)

(Salop isn’t alone in drawing such a conclusion, of course; Carl Shapiro, for example, has made a similar point (as have others)).

In our next post we explore the policy errors implicated by this contract/merger equivalency assumption. But here we want to consider whether it makes logical sense in the first place.

The logic of vertical integration is not commutative 

It is true that, where contracts are observed, they are likely as efficient as (or, actually, more efficient than) merger. But, by the same token, it is also true that where mergers are observed they are likely more efficient than contracts. Indeed, the entire reason for integration is efficiency relative to what could be done by contract — this is the essence of the so-called “make-or-buy” decision.

For example, a firm that decides to buy its own warehouse has determined that doing so is more efficient than renting warehouse space. Some of these efficiencies can be measured and quantified (e.g., carrying costs of ownership vs. the cost of rent), but many efficiencies cannot be easily measured or quantified (e.g., layout of the facility or site security). Under the contract/merger equivalency assumption, the benefits of owning a warehouse can be achieved “very often” by renting warehouse space. But the fact that many firms using warehouses own some space and rent some space indicates that the make-or-buy decision is often unique to each firm’s idiosyncratic situation. Moreover, the distinctions driving those differences will not always be readily apparent, and whether contracting or integrating is preferable in any given situation may not be inferred from the existence of one or the other elsewhere in the market — or even in the same firm!
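To illustrate just how little of the make-or-buy calculus shows up in the measurable columns, consider the toy comparison below. Every number in it is invented for the example, and the point is precisely what the arithmetic leaves out:

```python
# Toy make-or-buy (own vs. rent) comparison with invented numbers.
# Only the quantifiable pieces appear; the factors the text describes
# (facility layout, site security, idiosyncratic fit) never enter.

annual_rent = 120_000.0          # hypothetical market rent

purchase_price = 1_500_000.0     # hypothetical purchase price
cost_of_capital = 0.06           # assumed opportunity cost of funds
upkeep = 30_000.0                # assumed maintenance and taxes

# Annual carrying cost of ownership: forgone return plus upkeep
carrying_cost = purchase_price * cost_of_capital + upkeep   # 120,000

print(f"rent: {annual_rent:,.0f}  own: {carrying_cost:,.0f}")
# The measurable costs are a wash, so the decision turns entirely on
# the unmeasured, firm-specific factors -- which is why observing one
# firm renting tells us little about whether another should own.
```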

There is no reason to presume in any given situation that the outcome from contracting would be the same as from merging, even where both are notionally feasible. The two are, quite simply, different bargaining environments, each with a different allocation of risk and cost; accounting treatment; effect on employees, customers, and investors; tax consequences; etc. Even if the parties accomplished nominally “identical” outcomes, they would not, in fact, be identical.

Meanwhile, what if the reason for failure to contract, or the reason to prefer merger, has nothing to do with efficiency? What if there were no anticompetitive aim but there were a tax advantage? What if one of the parties just wanted a larger firm in order to satisfy the CEO’s ego? That these are not cognizable efficiencies under antitrust law is clear. But the adoption of a presumption of equivalence between contract and merger would — ironically — entail their incorporation into antitrust law just the same, by virtue of their effective prohibition under antitrust law.

In other words, if the assumption is that contract and merger are equally efficient unless proven otherwise, but the law adopts a suspicion (or, even worse, a presumption) that vertical mergers are anticompetitive which can be rebutted only with highly burdensome evidence of net efficiency gain, this effectively deputizes antitrust law to enforce a preconceived notion of “merger appropriateness” that does not necessarily turn on efficiencies. There may (or may not) be sensible policy reasons for adopting such a stance, but they aren’t antitrust reasons.

More fundamentally, however, while there are surely some situations in which contractual restraints might achieve organizational and efficiency gains similar to those of a merger, the practical realities of achieving not just greater efficiency but a whole host of non-efficiency-related (yet nonetheless valid) goals are rarely equivalent between the two.

It may be that the parties don’t know what they don’t know to such an extent that a contract would be too costly because it would be too incomplete, for example. But incomplete contracts and ambiguous control and ownership rights aren’t (as much of) an issue on an ongoing basis after a merger. 

As noted, there is no basis for assuming that the structure of a merger and a contract would be identical. In the same way, there is no basis for assuming that the knowledge transfer that would result from a merger would be the same as that which would result from a contract — and in ways that the parties could even specify or reliably calculate in advance. Knowing that the prospect for knowledge “synergies” would be higher with a merger than a contract might be sufficient to induce the merger outcome. But asked to provide evidence that the parties could not engage in the same conduct via contract, the parties would be unable to do so. The consequence, then, would be the loss of potential gains from closer integration.

At the same time, the cavalier assumption that parties would be able — legally — to enter into an analogous contract in lieu of a merger is problematic, given that it would likely be precisely the form of contract (foreclosing downstream or upstream access) that is alleged to create problems with the merger in the first place.

At the FTC hearings last year, Francine Lafontaine highlighted this exact concern:

I want to reemphasize that there are also rules against vertical restraints in antitrust laws, and so to say that the firms could achieve the mergers outcome by using vertical restraints is kind of putting them in a circular motion where we are telling them you cannot merge because you could do it by contract, and then we say, but these contract terms are not acceptable.

Indeed, legal risk is one of the reasons why a merger might be preferable to a contract, and because the relevant markets here are oligopoly markets, the possibility of impermissible vertical restraints between large firms with significant market share is quite real.

More important, the assumptions underlying the contention that contracts and mergers are functionally equivalent legal devices fail to appreciate the importance of varied institutional environments. Consider that one reason some takeovers are hostile is because incumbent managers don’t want to merge, and often believe that they are running a company as well as it can be run — that a change of corporate control would not improve efficiency. The same presumptions may also underlie refusals to contract and, even more likely, may explain why, to the other firm, a contract would be ineffective.

But, while there is no way to contract without bilateral agreement, there is a corporate control mechanism to force a takeover. In this institutional environment a merger may be easier to realize than a contract (and that applies even to a consensual merger, of course, given the hostile outside option). In this case, again, the assumption that contract should be the relevant baseline and the preferred mechanism for coordination is misplaced — even if other firms in the industry are successfully accomplishing the same thing via contract, and even if a contract would be more “efficient” in the abstract.

Conclusion

Properly understood, the choice of whether to contract or merge derives from a host of complicated factors, many of which are difficult to observe and/or quantify. The contract/merger equivalency assumption — and the species of “least-restrictive alternative” reasoning that would demand onerous efficiency arguments to permit a merger when a contract was notionally possible — too readily glosses over these complications and unjustifiably embraces a relative hostility to vertical mergers at odds with both theory and evidence.

Rather, as has long been broadly recognized, there can be no legally relevant presumption drawn against a company when it chooses one method of vertical integration over another in the general case. The agencies should clarify in the draft guidelines that the mere possibility of integration via contract or the inability of merging parties to rigorously describe and quantify efficiencies does not condemn a proposed merger.

[TOTM: The following is part of a symposium by TOTM guests and authors on the 2020 Vertical Merger Guidelines. The entire series of posts is available here.

This post is authored by Lawrence J. White (Robert Kavesh Professor of Economics, New York University; former Chief Economist, DOJ Antitrust Division).]

The DOJ/FTC Draft Vertical Merger Guidelines establish a “safe harbor” of a 20% market share for each of the merging parties. But the issue of defining the relevant “market” to which the 20% would apply is not well addressed.

Although reference is made to the market definition paradigm that is offered by the DOJ’s and FTC’s Horizontal Merger Guidelines (“HMGs”), what is neglected is the following: Under the “unilateral effects” theory of competitive harm of the HMGs, the horizontal merger of two firms that sell differentiated products that are imperfect substitutes could lead to significant price increases if the second-choice product for a significant fraction of each of the merging firms’ customers is sold by the partner firm. Such unilateral-effects instances are revealed by examining detailed sales and substitution data with respect to the customers of only the two merging firms.

In such instances, the true “relevant market” is simply the products that are sold by the two firms, and the merger is effectively a “2-to-1” merger. Under these circumstances, any apparently broader market (perhaps based on physical or functional similarities of products) is misleading, and the “market” shares of the merging parties that are based on that broader market are under-representations of the potential for their post-merger exercise of market power.

With a vertical merger, the potential for similar unilateral effects* would have to be captured by examining the detailed sales and substitution patterns of each of the merging firms with all of their significant horizontal competitors. This will require a substantial, data-intensive effort. And, of course, if this effort is not undertaken and an erroneously broader market is designated, the 20% “market” share threshold will understate the potential for competitive harm from a proposed vertical merger.

* With a vertical merger, such “unilateral effects” could arise post-merger in two ways: (a) The downstream partner could maintain a higher price, since some of the lost profits from some of the lost sales could be recaptured by the upstream partner’s profits on the sales of components to the downstream rivals (which gain some of the lost sales); and (b) the upstream partner could maintain a higher price to the downstream rivals, since some of the latter firms’ customers (and the concomitant profits) would be captured by the downstream partner.
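A back-of-the-envelope sketch of channel (a) may help fix ideas. All of the numbers below are hypothetical, and the calculation is only the standard vertical-pricing intuition, not anything drawn from the Draft Guidelines themselves:

```python
# Hypothetical illustration of channel (a): post-merger, part of the
# downstream margin lost when a price increase drives customers to
# downstream rivals is recaptured upstream, because those rivals buy
# their input from the merged firm's upstream division.

downstream_margin = 25.0    # assumed profit lost per lost downstream sale
upstream_margin = 15.0      # assumed profit per unit of input sold to rivals
diversion_to_rivals = 0.4   # assumed share of lost sales going to rivals

# Profit recaptured upstream for each downstream sale lost
recapture = diversion_to_rivals * upstream_margin        # = 6.0

# The effective cost of losing a sale falls from 25 to 19, which is
# what softens the merged firm's pricing discipline downstream.
effective_lost_margin = downstream_margin - recapture    # = 19.0
print(effective_lost_margin)
```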

[TOTM: The following is part of a symposium by TOTM guests and authors on the 2020 Vertical Merger Guidelines. The entire series of posts is available here.

This post is authored by Jan Rybnicek (Counsel at Freshfields Bruckhaus Deringer US LLP in Washington, D.C. and Senior Fellow and Adjunct Professor at the Global Antitrust Institute at the Antonin Scalia Law School at George Mason University).]

In an area where it may seem that agreement is rare, there is near universal agreement on the benefits of withdrawing the DOJ’s 1984 Non-Horizontal Merger Guidelines. The 1984 Guidelines do not reflect current agency thinking on vertical mergers and are not relied upon by businesses or practitioners to anticipate how the agencies may review a vertical transaction. The more difficult question is whether the agencies should now replace the 1984 Guidelines and, if so, what the modern guidelines should say.

There are several important reasons that counsel against issuing new vertical merger guidelines (VMGs). Most significantly, we likely are better off without new VMGs because they invariably will (1) send the wrong message to agency staff about the relative importance of vertical merger enforcement compared to other agency priorities, (2) create new sufficient conditions that tend to trigger wasteful investigations and erroneous enforcement actions, and (3) add very little, if anything, to our understanding of when the agencies will or will not pursue an in-depth investigation or enforcement action of a vertical merger.

Unfortunately, these problems are magnified rather than mitigated by the draft VMGs. But it is unlikely at this point that the agencies will hit the brakes and not issue new VMGs. The agencies therefore should make several key changes that would help prevent the final VMGs from causing more harm than good.

What is the Purpose of Agency Guidelines? 

Before we can have a meaningful conversation about whether the draft VMGs are good or bad for the world, or how they can be improved to ensure they contribute positively to antitrust law, it is important to identify, and have a shared understanding about, the purpose of guidelines and their potential benefits.

In general, I am supportive of guidelines. In fact, I helped urge the FTC to issue its 2015 Policy Statement articulating the agency’s enforcement principles under its Section 5 Unfair Methods of Competition authority. As I have written before, guidelines can be useful if they accomplish two important goals: (1) provide insight and transparency to businesses and practitioners about the agencies’ analytical approach to an issue and (2) offer agency staff direction as to agency priorities while cabining the agencies’ broad discretion by tethering investigational or enforcement decisions to those guidelines. An additional benefit may be that the guidelines also could prove useful to courts interpreting or applying the antitrust laws.

Transparency is important for the obvious reason that it allows the business community and practitioners to know how the agencies will apply the antitrust laws and thereby allows them to evaluate if a specific merger or business arrangement is likely to receive scrutiny. But guidelines are not only consumed by the public. They also are used by agency staff. As a result, guidelines invariably influence how staff approaches a matter, including whether to open an investigation, how in-depth that investigation is, and whether to recommend an enforcement action. Lastly, for guidelines to be meaningful, they also must accurately reflect agency practice, which requires the agencies’ analysis to be tethered to an analytical framework.

As discussed below, there are many reasons to doubt that the draft VMGs can deliver on these goals.

Draft VMGs Will Lead to Bad Enforcement Policy While Providing Little Benefit

 A chief concern with VMGs is that they will inadvertently usher in a new enforcement regime that treats horizontal and vertical mergers as co-equal enforcement priorities despite the mountain of evidence, not to mention simple logic, that mergers among competitors are a significantly greater threat to competition than are vertical mergers. The draft VMGs exacerbate rather than mitigate this risk by creating a false equivalence between vertical and horizontal merger enforcement and by establishing new minimum conditions that are likely to lead the agencies to pursue wasteful investigations of vertical transactions. And the draft VMGs do all this without meaningfully advancing our understanding of the conditions under which the agencies are likely to pursue investigations and enforcement against vertical mergers.

1. No Recognition of the Differences Between Horizontal and Vertical Mergers

One striking feature of the draft VMGs is that they fail to contextualize vertical mergers in the broader antitrust landscape. As a result, it is easy to walk away from the draft VMGs with the impression that vertical mergers are as likely to lead to anticompetitive harm as are horizontal mergers. That is a position not supported by the economic evidence or logic. It is of course true that vertical mergers can result in competitive harm; that is not a seriously contested point. But it is important to acknowledge and provide background for why that harm is significantly less likely than in horizontal cases. That difference should inform agency enforcement priorities. Perhaps due to this lack of framing, the draft VMGs tend to speak more about when the agencies may identify competitive harm than about when they will not.

The draft VMGs would benefit greatly from a more comprehensive approach to understanding vertical merger transactions. The agencies should add language explaining that, whereas a consensus exists that eliminating a direct competitor always tends to increase the risk of unilateral effects (although often trivially), there is no such consensus that harm will result from the combination of complementary assets. In fact, the current evidence shows such vertical transactions tend to be procompetitive. Absent such language, the VMGs will over time misguidedly focus more agency resources on investigating vertical mergers that are unlikely to cause harm (with inevitably more enforcement errors) and fewer resources on more important priorities, such as pursuing enforcement against anticompetitive horizontal transactions.

2. The 20% Safe Harbor Provides No Harbor and Will Become a Sufficient Condition

The draft VMGs attempt to provide businesses with guidance about the types of transactions the agencies will not investigate by articulating a market-share safe harbor. But that safe harbor (1) does not appear to be grounded in any evidence, (2) is surprisingly low in comparison to the EU vertical merger guidelines, and (3) is likely to become a sufficient condition to trigger an in-depth investigation or enforcement action.

The draft VMGs state:

The Agencies are unlikely to challenge a vertical merger where the parties to the merger have a share in the relevant market of less than 20%, and the related product is used in less than 20% of the relevant market.

But in the very next sentence the draft VMGs render the safe harbor virtually meaningless, stating:

In some circumstances, mergers with shares below the threshold can give rise to competitive concerns.

This caveat comes despite the fact that the 20% threshold is low compared to other jurisdictions. Indeed, the EU’s guidelines create a 30% safe harbor. Nor is it clear what the basis is for the 20% threshold, either in economics or law. While it is important for the agencies to remain flexible, too much flexibility will render the draft VMGs meaningless. The draft VMGs should be less equivocal about the types of mergers that will not receive significant scrutiny and are unlikely to be the subject of enforcement action.

What may be most troubling about the market share safe harbor is the likelihood that it will establish general enforcement norms that did not previously exist. It is likely that agency staff will soon interpret (despite language stating otherwise) the 20% market share as the minimum necessary condition to open an in-depth investigation and to pursue an enforcement action. We have seen other guidelines’ tools exert similar effects on agency analysis before (see, e.g., GUPPIs). This risk is only exacerbated where the safe harbor is not a true safe harbor that provides businesses with clarity on enforcement priorities.

3. Requirements for Proving EDM and Efficiencies Fail to Recognize the Vertical Merger Context

The draft VMGs minimize the significant role of the elimination of double marginalization (EDM) and of efficiencies in vertical mergers. The agencies frequently take a skeptical approach to efficiencies in the context of horizontal mergers, and it is well-known that the hurdle to substantiate efficiencies is difficult, if not impossible, to meet. The draft VMGs oddly continue this skeptical approach by specifically referencing the standards discussed in the horizontal merger guidelines for efficiencies when discussing EDM and vertical merger efficiencies. The draft VMGs do not recognize that the combination of complementary products is inherently more likely to generate efficiencies than a horizontal merger between competitors. The draft VMGs also oddly discuss EDM and efficiencies in separate sections and spend a trivial amount of time on what is the core motivating feature of vertical mergers. Even the discussion of EDM is as much about where there may be exceptions to EDM as it is about making clear the uncontroversial view that EDM is frequent in vertical transactions. Without acknowledging the inherent nature of EDM and efficiencies more generally, the final VMGs will send the wrong message that vertical merger enforcement should be on par with horizontal merger enforcement.
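For readers unfamiliar with the term, EDM is simply the textbook double-marginalization result: two successive monopolists each add a markup, while an integrated firm adds only one. The sketch below works through the standard linear-demand example, with all numbers chosen purely for illustration:

```python
# Textbook double-marginalization illustration (hypothetical numbers).
# Demand: p = a - q; upstream marginal cost c.
a, c = 100.0, 20.0

# Separated firms: upstream picks wholesale price w, anticipating the
# downstream monopolist's best response q = (a - w) / 2.
w = (a + c) / 2                 # upstream optimum -> 60
q_sep = (a - w) / 2             # -> 20
p_sep = a - q_sep               # -> 80

# Integrated firm prices once against marginal cost c.
q_int = (a - c) / 2             # -> 40
p_int = a - q_int               # -> 60

print(p_sep, p_int)   # 80.0 60.0: integration lowers price, raises output
```

Nothing in this arithmetic is exotic, which is exactly why the draft VMGs' cursory treatment of EDM is so conspicuous.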

4. No New Insights into How Agencies Will Assess Vertical Mergers

Some might argue that the costs associated with the draft VMGs nevertheless are tolerable because the guidelines offer significant benefits that far outweigh those costs. But that is not the case here. The draft VMGs provide no new information about how the agencies will review vertical merger transactions and under what circumstances they are likely to seek enforcement actions. And that is because identifying any such general guiding principles is a difficult, if not impossible, task. Indeed, unlike in the context of horizontal transactions, where an increase in market power informs our thinking about the likely competitive effects, greater market power in the context of a vertical transaction that combines complements creates downward pricing pressure that often will dominate any potential competitive harm.

The draft VMGs do what they can, though, which is to describe in general terms several theories of harm. But the benefits from that exercise are modest and do not outweigh the significant risks discussed above. The theories described are neither novel nor unknown to the public today. Nor do the draft VMGs explain any significant new thinking on vertical mergers, likely because there has been none that can provide insight into general enforcement principles. The draft VMGs also do not clarify changes to statutory text (because it has not changed) or otherwise clarify judicial rulings or past enforcement actions. As a result, the draft VMGs do not offer sufficient benefits to outweigh their substantial costs.

Conclusion

Despite these concerns, it is worth acknowledging the work the FTC and DOJ have put into preparing the draft VMGs. It is no small task to articulate a unified position between the two agencies on an issue such as vertical merger enforcement where so many have such strong views. To the agencies’ credit, the VMGs are restrained in not including novel or more adventurous theories of harm. I anticipate the DOJ and FTC will engage with commentators and take the feedback seriously as they work to improve the final VMGs.

[TOTM: The following is part of a symposium by TOTM guests and authors on the 2020 Vertical Merger Guidelines. The entire series of posts is available here.

This post is authored by Scott Sher (Partner, Wilson Sonsini Goodrich & Rosati) and Matthew McDonald (Associate, Wilson Sonsini Goodrich & Rosati).]

On January 10, 2020, the United States Department of Justice (“DOJ”) and the Federal Trade Commission (“FTC”) (collectively, “the Agencies”) released their joint draft guidelines outlining their “principal analytical techniques, practices and enforcement policy” with respect to vertical mergers (“Draft Guidelines”). While the Draft Guidelines describe and formalize the Agencies’ existing approaches to investigating vertical mergers, they leave several policy questions unanswered. In particular, the Draft Guidelines do not address how the Agencies might approach the acquisition of potential or nascent competitors through vertical mergers. Because many technology mergers are motivated by the desire to enter new industries or to add new tools or features to an existing platform (i.e., the buy-versus-build dilemma), the omission leaves a significant hole in the Agencies’ enforcement policy agenda and deprives the tech industry, in particular, of adequate guidance as to how the Agencies may address these issues.

This is notable, given that the Horizontal Merger Guidelines explicitly address potential-competition theories of harm (e.g., at § 1 (referencing mergers and acquisitions “involving actual or potential competitors”) and § 2 (“The Agencies consider whether the merging firms have been, or likely will become absent the merger, substantial head-to-head competitors.”)). Indeed, the Agencies have recently challenged several proposed horizontal mergers based on nascent-competition theories of harm.

Further, there has been much debate regarding whether increased antitrust scrutiny of vertical acquisitions of nascent competitors, particularly in technology markets, is warranted (See, e.g., Open Markets Institute, The Urgent Need for Strong Vertical Merger Guidelines (“Enforcers should be vigilant toward dominant platforms’ acquisitions of seemingly small or marginal firms and be ready to block acquisitions that may be part of a monopoly protection strategy. Dominant firms should not be permitted to expand through vertical acquisitions and cut off budding threats before they have a chance to bloom.”); Caroline Holland, Taking on Big Tech Through Merger Enforcement (“Vertical mergers that create market power capable of stifling competition could be particularly pernicious when it comes to digital platforms.”)). 

Thus, further policy guidance from the Agencies on this issue is needed. As the Agencies formulate guidance, they should take note that vertical mergers involving technology start-ups generally promote efficiency and innovation, and that any potential competitive harm almost always can be addressed with easy-to-implement behavioral remedies.

The agencies’ draft vertical merger guidelines

The Draft Guidelines outline the following principles that the Agencies will apply when analyzing vertical mergers:

  • Market definition. The Agencies will identify a relevant market and one or more “related products.” (§ 2) A related product is one that is supplied by the merged firm, is vertically related to the products in the relevant market, and to which access by the merged firm’s rivals affects competition in the relevant market. (§ 2)
  • Safe harbor. Unlike in horizontal merger cases, the Agencies cannot rely on changes in concentration in the relevant market as a screen for competitive effects. Instead, the Agencies consider measures of the competitive significance of the related product. (§ 3) The Draft Guidelines propose a safe harbor, stating that the Agencies are unlikely to challenge a vertical merger “where the parties to the merger have a share in the relevant market of less than 20 percent, and the related product is used in less than 20 percent of the relevant market.” (§ 3) However, shares exceeding the thresholds, taken alone, do not support an inference that the vertical merger is anticompetitive. (§ 3) (A minimal illustration of this two-pronged screen appears after this list.)
  • Theories of unilateral harm. Vertical mergers can result in unilateral competitive effects, including raising rivals’ costs (charging rivals in the relevant market a higher price for the related product) or foreclosure (refusing to supply rivals with the related product altogether). (§ 5.a) Another potential unilateral effect is access to competitively sensitive information: The combined firm may, through the acquisition, gain access to sensitive business information about its upstream or downstream rivals that was unavailable to it before the merger (for example, a downstream rival of the merged firm may have been a premerger customer of the upstream merging party). (§ 5.b)
  • Theories of coordinated harm. Vertical mergers can also increase the likelihood of post-merger coordinated interaction. For example, a vertical merger might eliminate or hobble a maverick firm that would otherwise play an important role in limiting anticompetitive coordination. (§ 7)
  • Procompetitive effects. Vertical mergers can have procompetitive effects, such as the elimination of double marginalization (“EDM”). A merger of vertically related firms can create an incentive for the combined entity to lower prices on the downstream product, because it will capture the additional margins from increased sales of the upstream product. (§ 6) EDM thus may benefit both the merged firm and buyers of the downstream product. (§ 6) (A numerical sketch of EDM follows this list.)
  • Efficiencies. Vertical mergers have the potential to create cognizable efficiencies; the Agencies will evaluate such efficiencies using the standards set out in the Horizontal Merger Guidelines. (§ 8)
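To make the proposed safe harbor concrete, here is a minimal sketch of the 20/20 screen in Python. The two thresholds come from § 3 of the Draft Guidelines; the function name and the deal figures are hypothetical, and the screen is only a starting point for the Agencies’ analysis, not a dispositive test.

```python
# A minimal sketch of the Draft Guidelines' proposed safe-harbor screen (§ 3).
# The 20 percent thresholds are from the draft text; the deal figures below
# are hypothetical.

SHARE_THRESHOLD = 0.20

def within_safe_harbor(relevant_market_share: float,
                       related_product_share: float) -> bool:
    """True only if BOTH prongs hold: the parties' share of the relevant
    market is below 20 percent AND the related product is used in less
    than 20 percent of the relevant market."""
    return (relevant_market_share < SHARE_THRESHOLD
            and related_product_share < SHARE_THRESHOLD)

# Hypothetical deal: a 15 percent share of the relevant market, but the
# related product is used in 25 percent of that market.
print(within_safe_harbor(0.15, 0.25))  # False -- the second prong fails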
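And to see the intuition behind EDM, here is a minimal numerical sketch using the textbook double-marginalization setup: linear demand and a single monopolist at each layer. This stylized model and its parameter values are illustrative assumptions, not something drawn from the Draft Guidelines.

```python
# A numerical sketch of elimination of double marginalization (EDM),
# assuming textbook linear demand Q = a - p and a monopolist at each layer.
# All parameter values are hypothetical.

a, c = 100.0, 20.0  # demand intercept and upstream marginal cost

# Separate firms: the upstream firm sets a wholesale price w; the downstream
# firm treats w as its marginal cost and sets retail price p = (a + w) / 2.
# Anticipating that, the upstream firm's profit-maximizing wholesale price
# is w = (a + c) / 2, so two markups get stacked.
w = (a + c) / 2
p_separate = (a + w) / 2          # works out to p = (3a + c) / 4
q_separate = a - p_separate

# Integrated firm: a single markup over the true marginal cost c.
p_integrated = (a + c) / 2
q_integrated = a - p_integrated

print(f"separate firms: p = {p_separate:.0f}, Q = {q_separate:.0f}")     # p = 80, Q = 20
print(f"integrated:     p = {p_integrated:.0f}, Q = {q_integrated:.0f}")  # p = 60, Q = 40
```

In this stylized setup, the merged firm charges less and sells more than the two separate firms combined, which is why EDM can benefit both the merged firm and buyers of the downstream product.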

Implications for vertical mergers involving nascent start-ups

At present, the Draft Guidelines do not address theories of nascent or potential competition. To the extent the Agencies provide further guidance regarding the treatment of vertical mergers involving nascent start-ups, they should take note of the following facts:

First, empirical evidence from the strategy literature indicates that technology-related vertical mergers are likely to be efficiency-enhancing. In a survey of the strategy literature on vertical integration, Professor D. Daniel Sokol observed that vertical acquisitions involving technology start-ups are “largely complementary, combining the strengths of the acquiring firm in process innovation with the product innovation of the target firms.” (p. 1372) The literature shows that larger firms tend to be relatively poor at developing new and improved products outside their core expertise, but relatively strong at process innovation (developing new and improved methods of production, distribution, support, and the like). (Sokol, p. 1373) Larger firms thus rely on acquisitions to help with innovation; acquiring is more efficient than attempting to innovate through internal efforts alone. (Sokol, p. 1373)

Second, vertical merger policy toward nascent-competitor acquisitions has important implications for the rate of start-up formation and the innovation that results. Entrepreneurship in technology markets is motivated by the opportunity for commercialization and exit. (Sokol, p. 1362 (“[T]he purpose of such investment [in start-ups] is to reap the rewards of scaling a venture to exit.”))

In recent years, as IPO activity has declined, vertical mergers have become the default method of entrepreneurial exit. (Sokol, p. 1376) Increased vertical merger enforcement against start-up acquisitions thus closes off the primary exit strategy for entrepreneurs. As Prof. Sokol concluded in his study of vertical mergers:

When antitrust agencies, judges, and legislators limit the possibility of vertical mergers as an exit strategy for start-up firms, it creates risk for innovation and entrepreneurship…. [I]t threatens entrepreneurial exits, particularly for tech companies whose very business model is premised upon vertical mergers for purposes of a liquidity event. (p. 1377)

Third, to the extent that the vertical acquisition of a start-up raises competitive concerns, a behavioral remedy is usually preferable to a structural one. As explained above, vertical acquisitions typically generate substantial efficiencies, and those efficiencies are likely to overwhelm any potential competitive harm. Further, a structural remedy is likely infeasible in the case of a start-up acquisition. Behavioral relief is thus the only way to preserve the deal’s efficiencies while remedying the potential competitive harm. (The Agencies have recognized as much; see DOJ Antitrust Division, Policy Guide to Merger Remedies, p. 20 (“Stand-alone conduct relief is only appropriate when a full-stop prohibition of the merger would sacrifice significant efficiencies and a structural remedy would similarly eliminate such efficiencies or is simply infeasible.”).) Appropriate behavioral remedies for vertical acquisitions of start-ups include firewalls (restricting the flow of competitively sensitive information between the upstream and downstream units of the combined firm) and fair-dealing or non-discrimination remedies (requiring the merging firm to supply an input or grant customer access to competitors on non-discriminatory terms), each with clear benchmarks to ensure compliance. (See Policy Guide to Merger Remedies, pp. 22–24.)

To be sure, some vertical mergers may harm competition, and there should be enforcement when the facts justify it. But vertical mergers involving technology start-ups generally enhance efficiency and promote innovation. Antitrust’s goals of promoting competition and innovation are thus best served by taking a measured approach toward vertical mergers involving technology start-ups. (Sokol, pp. 1362–63 (“Thus, a general inference that makes vertical acquisitions, particularly in tech, more difficult to approve leads to direct contravention of antitrust’s role in promoting competition and innovation.”))