Archives For DOJ Antitrust Division

The American Choice and Innovation Online Act (previously called the Platform Anti-Monopoly Act), introduced earlier this summer by U.S. Rep. David Cicilline (D-R.I.), would significantly change the nature of digital platforms and, with them, the internet itself. Taken together, the bill’s provisions would turn platforms into passive intermediaries, undermining many of the features that make them valuable to consumers. This seems likely to remain the case even after potential revisions intended to minimize the bill’s unintended consequences.

In its current form, the bill is split into two parts, each dangerous in its own right. The first, Section 2(a), would prohibit almost any kind of “discrimination” by platforms. Because it is so open-ended, lawmakers might end up removing it in favor of the nominally more focused provisions of Section 2(b), which prohibit certain named conduct. But despite being more specific, this section of the bill is incredibly far-reaching and would effectively ban swaths of essential services.

I will address the potential effects of these sections point-by-point, but both elements of the bill suffer from the same problem: a misguided assumption that “discrimination” by platforms is necessarily bad from a competition and consumer welfare point of view. On the contrary, this conduct is often exactly what consumers want from platforms, since it helps to bring order and legibility to otherwise-unwieldy parts of the Internet. Prohibiting it, as both main parts of the bill do, would make the Internet harder to use and less competitive.

Section 2(a)

Section 2(a) essentially prohibits any behavior by a covered platform that would advantage that platform’s services over any others that also use that platform; it characterizes this preferencing as “discrimination.”

As we wrote when the House Judiciary Committee’s antitrust bills were first announced, this prohibition on “discrimination” is so broad that, if it made it into law, it would prevent platforms from excluding or disadvantaging any product of another business that uses the platform or advantaging their own products over those of their competitors.

The underlying assumption here is that platforms should be like telephone networks: providing a way for different sides of a market to communicate with each other, but doing little more than that. When platforms do do more—for example, manipulating search results to favor certain businesses or to give their own products prominence—it is seen as exploitative “leveraging.”

But consumers often want platforms to be more than just a telephone network or directory, because digital markets would be very difficult to navigate without some degree of “discrimination” between sellers. The Internet is so vast and sellers are often so anonymous that any assistance which helps you choose among options can serve to make it more navigable. As John Gruber put it:

From what I’ve seen over the last few decades, the quality of the user experience of every computing platform is directly correlated to the amount of control exerted by its platform owner. The current state of the ownerless world wide web speaks for itself.

Sometimes, this manifests itself as “self-preferencing” of another of the platform’s own services, to reduce the time spent searching for the information you want. When you search for a restaurant on Google, it can be very useful to get information like user reviews, the restaurant’s phone number, a button on mobile to phone them directly, estimates of how busy it is, and a link to a Maps page to see how to actually get there.

This is, undoubtedly, frustrating for competitors like Yelp, who would like this information not to be there and for users to have to click on either a link to Yelp or a link to Google Maps. But whether it is good or bad for Yelp isn’t relevant to whether it is good for users—and it is at least arguable that it is, which makes a blanket prohibition on this kind of behavior almost inevitably harmful.

If it isn’t obvious why removing this kind of feature would be harmful for users, ask yourself why some users search in Yelp’s app directly for this kind of result. The answer, I think, is that Yelp gives you all of the information above, just as Google does (and is sometimes better, although I tend to trust Google Maps’ reviews over Yelp’s), and it’s really convenient to have all that on the same page. If Google could not provide this kind of “rich” result, many users would probably stop using Google Search to look for restaurant information in the first place, because a new friction would have been added that made the experience meaningfully worse. Removing that option would be good for Yelp, but mainly because it removes a competitor.

If all this feels like stating the obvious, then it should highlight a significant problem with Section 2(a) in the Cicilline bill: it prohibits conduct that is directly value-adding for consumers, and that creates competition for dedicated services like Yelp that object to having to compete with this kind of conduct.

This is true across all the platforms the legislation proposes to regulate. Amazon prioritizes some third-party products over others on the basis of user reviews, rates of returns and complaints, and so on; Amazon provides private label products to fill gaps in certain product lines where existing offerings are expensive or unreliable; Apple pre-installs a Camera app on the iPhone that, obviously, enjoys an advantage over rival apps like Halide.

Some or all of this behavior would be prohibited under Section 2(a) of the Cicilline bill. Combined with the bill’s presumption that conduct must be defended affirmatively—that is, the platform is presumed guilty unless it can prove that the challenged conduct is pro-competitive, which may be very difficult to do—this prohibition could prospectively eliminate a huge range of socially valuable behavior.

Supporters of the bill have already been left arguing that the law simply wouldn’t be enforced in these cases of benign discrimination. But this would hardly be an improvement. It would mean that the Federal Trade Commission (FTC) and U.S. Justice Department (DOJ) would have tremendous control over how these platforms are built, since they could challenge conduct in virtually any case. The regulatory uncertainty alone would complicate the calculus for these firms as they refine, develop, and deploy new products and capabilities.

So one potential compromise might be to do away with this broad-based rule and proscribe specific kinds of “discriminatory” conduct instead. This approach would involve removing Section 2(a) from the bill but retaining Section 2(b), which enumerates 10 practices it deems to be “other discriminatory conduct.” This may seem appealing, as it would potentially avoid the worst abuses of the broad-based prohibition. In practice, however, it would carry many of the same problems. In fact, many of 2(b)’s provisions appear to go even further than 2(a), and would proscribe even more procompetitive conduct that consumers want.

Sections 2(b)(1) and 2(b)(9)

The wording of these provisions is extremely broad and, as drafted, would seem to challenge even the existence of vertically integrated products. As such, these prohibitions are potentially even more extensive and invasive than Section 2(a) would have been. Even a narrower reading here would seem to preclude safety and privacy features that are valuable to many users. iOS’s sandboxing of apps, for example, serves to limit the damage that a malware app can do on a user’s device precisely because of the limitations it imposes on what other features and hardware the app can access.

Section 2(b)(2)

This provision would preclude a firm from conditioning preferred status on use of another service from that firm. This would likely undermine the purpose of platforms, which is to absorb and counter some of the risks involved in doing business online. An example is Amazon’s tying sellers’ eligibility for its Prime program to their use of Amazon’s delivery service (FBA, or Fulfillment by Amazon). The bill seems to presume in an example like this that Amazon is leveraging its power in the market—in the form of the value of the Prime label—to profit from delivery. But Amazon could, and already does, charge directly for listing positions; it’s unclear why it would benefit from charging via FBA when it could just charge for the Prime label.

An alternate, simpler explanation is that FBA improves the quality of the service, by granting customers greater assurance that a Prime product will arrive when Amazon says it will. Platforms add value by setting out rules and providing services that reduce the uncertainties buyers and sellers would otherwise face if they transacted directly with each other. This section’s prohibition—which, as written, would seem to prevent any kind of quality assurance—likely would bar labelling by a platform, even where customers explicitly want it.

Section 2(b)(3)

As written, this would prohibit platforms from using aggregated data to improve their services at all. If Apple found that 99% of the users who installed a given app uninstalled it immediately afterward, it would be reasonable to conclude that the app may be harmful or broken in some way, and that Apple should investigate. This provision would ban that.
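
For illustration only, here is a minimal sketch of what acting on that kind of aggregated signal might look like. The data structures, threshold, and function name are hypothetical, not anything Apple has disclosed; the point is simply that flagging an app by its immediate-uninstall rate is a mundane use of aggregate data that this provision would appear to sweep in.

```python
from typing import Dict, List

def flag_suspect_apps(installs: Dict[str, int],
                      immediate_uninstalls: Dict[str, int],
                      threshold: float = 0.99) -> List[str]:
    """Return app IDs whose share of immediate uninstalls meets the threshold.

    Purely illustrative: the inputs are aggregate counts of the kind a
    platform might already collect, and the 0.99 cutoff mirrors the 99%
    example in the text above.
    """
    suspects = []
    for app_id, install_count in installs.items():
        if install_count == 0:
            continue
        rate = immediate_uninstalls.get(app_id, 0) / install_count
        if rate >= threshold:
            suspects.append(app_id)
    return suspects

# Example: an app uninstalled by 99 of its 100 installers gets flagged for review.
print(flag_suspect_apps({"app.example": 100}, {"app.example": 99}))
```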

Sections 2(b)(4) and 2(b)(6)

These two provisions effectively prohibit a platform from using information it does not also provide to sellers. Such prohibitions ignore the fact that it is often good for sellers to lack certain information, since withholding information can prevent abuse by malicious users. For example, a seller may sometimes try to bribe their customers to post positive reviews of their products, or even threaten customers who have posted negative ones. Part of the role of a platform is to combat that kind of behavior by acting as a middleman and forcing both consumer users and business users to comply with the platform’s own mechanisms to control that kind of behavior.

If this seems overly generous to platforms—since, obviously, it gives them a lot of leverage over business users—ask yourself why people use platforms at all. It is not a coincidence that people often prefer Amazon to dealing with third-party merchants and having to navigate those merchants’ sites themselves. The assurance that Amazon provides is extremely valuable for users. Much of it comes from the company’s ability to act as a middleman in this way, lowering the transaction costs between buyers and sellers.

Section 2(b)(5)

This provision restricts the treatment of defaults. It is, however, relatively restrained when compared to, for example, the DOJ’s lawsuit against Google, which treats as anticompetitive even payment for defaults that can be changed. Still, many of the arguments that apply in that case also apply here: default status for apps can be a way to recoup income foregone elsewhere (e.g., a browser provided for free that makes its money by selling the right to be the default search engine).

Section 2(b)(7)

This section gets to the heart of why “discrimination” can often be procompetitive: that it facilitates competition between platforms. The kind of self-preferencing that this provision would prohibit can allow firms that have a presence in one market to extend that position into another, increasing competition in the process. Both Apple and Amazon have used their customer bases in smartphones and e-commerce, respectively, to grow their customer bases for video streaming, in competition with Netflix, Google’s YouTube, cable television, and each other. If Apple designed a search engine to compete with Google, it would do exactly the same thing, and we would be better off because of it. Restricting this kind of behavior is, perversely, exactly what you would do if you wanted to shield these incumbents from competition.

Section 2(b)(8)

As with other provisions, this one would preclude one of the mechanisms by which platforms add value: creating assurance for customers about the products they can expect if they visit the platform. Some of this relates to child protection; some of the most frustrating stories involve children being overcharged when they use an iPhone or Android app, and effectively being ripped off because of poor policing of the app (or insufficiently strict pricing rules by Apple or Google). This may also relate to rules that state that the seller cannot offer a cheaper product elsewhere (Amazon’s “General Pricing Rule” does this, for example). Prohibiting this would simply impose a tax on customers who cannot shop around and would prefer to use a platform that they trust has the lowest prices for the item they want.

Section 2(b)(10)

Ostensibly a “whistleblower” provision, this section could leave platforms with no recourse, not even removing a user from its platform, in response to spurious complaints intended purely to extract value for the complaining business rather than to promote competition. On its own, this sort of provision may be fairly harmless, but combined with the provisions above, it allows the bill to add up to a rent-seekers’ charter.

Conclusion

In each case above, it’s vital to remember that a reversed burden of proof applies. So, there is a high chance that the law will side against the defendant business, and a large downside for conduct that ends up being found to violate these provisions. That means that platforms will likely err on the side of caution in many cases, avoiding conduct that is ambiguous, and society will probably lose a lot of beneficial behavior in the process.

Put together, the provisions undermine much of what has become an Internet platform’s role: to act as an intermediary, de-risk transactions between customers and merchants who don’t know each other, and tweak the rules of the market to maximize its attractiveness as a place to do business. The “discrimination” that the bill would outlaw is, in practice, behavior that makes it easier for consumers to navigate marketplaces of extreme complexity and uncertainty, in which they often know little or nothing about the firms with whom they are trying to transact business.

Customers do not want platforms to be neutral, open utilities. They can choose platforms that are like that already, such as eBay. They generally tend to prefer ones like Amazon, which are not neutral and which carefully cultivate their service to be as streamlined, managed, and “discriminatory” as possible. Indeed, many of people’s biggest complaints with digital platforms relate to their openness: the fake reviews, counterfeit products, malware, and spam that come with letting more unknown businesses use your service. While these may be unavoidable by-products of running a platform, platforms compete on their ability to ferret them out. Customers are unlikely to thank legislators for regulating Amazon into being another eBay.

The language of the federal antitrust laws is extremely general. Over more than a century, the federal courts have applied common-law techniques to construe this general language to provide guidance to the private sector as to what does or does not run afoul of the law. The interpretive process has been fraught with some uncertainty, as judicial approaches to antitrust analysis have changed several times over the past century. Nevertheless, until very recently, judges and enforcers had converged toward relying on a consumer welfare standard as the touchstone for antitrust evaluations (see my antitrust primer here, for an overview).

While imperfect and subject to potential error in application—a problem of legal interpretation generally—the consumer welfare principle has worked rather well as the focus both for antitrust-enforcement guidance and judicial decision-making. The general stability and predictability of antitrust under a consumer welfare framework has advanced the rule of law. It has given businesses sufficient information to plan transactions in a manner likely to avoid antitrust liability. It thereby has cabined uncertainty and increased the probability that private parties would enter welfare-enhancing commercial arrangements, to the benefit of society.

In a very thoughtful 2017 speech, then-Acting Assistant Attorney General for Antitrust Andrew Finch commented on the importance of the rule of law to principled antitrust enforcement. He noted:

[H]ow do we administer the antitrust laws more rationally, accurately, expeditiously, and efficiently? … Law enforcement requires stability and continuity both in rules and in their application to specific cases.

Indeed, stability and continuity in enforcement are fundamental to the rule of law. The rule of law is about notice and reliance. When it is impossible to make reasonable predictions about how a law will be applied, or what the legal consequences of conduct will be, these important values are diminished. To call our antitrust regime a “rule of law” regime, we must enforce the law as written and as interpreted by the courts and advance change with careful thought.

The reliance fostered by stability and continuity has obvious economic benefits. Businesses invest, not only in innovation but in facilities, marketing, and personnel, and they do so based on the economic and legal environment they expect to face.

Of course, we want businesses to make those investments—and shape their overall conduct—in accordance with the antitrust laws. But to do so, they need to be able to rely on future application of those laws being largely consistent with their expectations. An antitrust enforcement regime with frequent changes is one that businesses cannot plan for, or one that they will plan for by avoiding certain kinds of investments.

That is certainly not to say there has not been positive change in the antitrust laws in the past, or that we would have been better off without those changes. U.S. antitrust law has been refined, and occasionally recalibrated, with the courts playing their appropriate interpretive role. And enforcers must always be on the watch for new or evolving threats to competition.  As markets evolve and products develop over time, our analysis adapts. But as those changes occur, we pursue reliability and consistency in application in the antitrust laws as much as possible.

Indeed, we have enjoyed remarkable continuity and consensus for many years. Antitrust law in the U.S. has not been a “paradox” for quite some time, but rather a stable and valuable law enforcement regime with appropriately widespread support.

Unfortunately, policy decisions taken by the new Federal Trade Commission (FTC) leadership in recent weeks have rejected antitrust continuity and consensus. They have injected substantial uncertainty into the application of competition-law enforcement by the FTC. This abrupt change in emphasis undermines the rule of law and threatens to reduce economic welfare.

As of now, the FTC’s departure from the rule of law has been notable in two areas:

  1. Its rejection of previous guidance on the agency’s “unfair methods of competition” authority, the FTC’s primary non-merger-related enforcement tool; and
  2. Its new advice rejecting time limits for the review of generally routine proposed mergers.

In addition, potential FTC rulemakings directed at “unfair methods of competition” would, if pursued, prove highly problematic.

Rescission of the Unfair Methods of Competition Policy Statement

The FTC on July 1 voted 3-2 to rescind the 2015 FTC Policy Statement Regarding Unfair Methods of Competition under Section 5 of the FTC Act (UMC Policy Statement).

The bipartisan UMC Policy Statement was originally supported by all three Democratic commissioners, including then-Chairwoman Edith Ramirez. The policy statement generally respected and promoted the rule of law by emphasizing that, in applying the facially broad “unfair methods of competition” (UMC) language, the FTC would be guided by the well-established principles of the antitrust rule of reason (including considering any associated cognizable efficiencies and business justifications) and the consumer welfare standard. The FTC also explained that it would not apply “standalone” Section 5 theories to conduct that would violate the Sherman or Clayton Acts.

In short, the UMC Policy Statement sent a strong signal that the commission would apply UMC in a manner fully consistent with accepted and well-understood antitrust policy principles. As in the past, the vast bulk of FTC Section 5 prosecutions would be brought against conduct that violated the core antitrust laws. Standalone Section 5 cases would be directed solely at those few practices that harmed consumer welfare and competition, but somehow fell into a narrow crack in the basic antitrust statutes (such as, perhaps, “invitations to collude” that lack plausible efficiency justifications). Although the UMC Statement did not answer all questions regarding what specific practices would justify standalone UMC challenges, it substantially limited business uncertainty by bringing Section 5 within the boundaries of settled antitrust doctrine.

The FTC’s announcement of the UMC Policy Statement rescission unhelpfully proclaimed that “the time is right for the Commission to rethink its approach and to recommit to its mandate to police unfair methods of competition even if they are outside the ambit of the Sherman or Clayton Acts.” As a dissenting statement by Commissioner Christine S. Wilson warned, consumers would be harmed by the commission’s decision to prioritize other unnamed interests. And as Commissioner Noah Joshua Phillips stressed in his dissent, the end result would be reduced guidance and greater uncertainty.

In sum, by suddenly leaving private parties in the dark as to how to conform themselves to Section 5’s UMC requirements, the FTC’s rescission offends the rule of law.

New Guidance to Parties Considering Mergers

For decades, parties proposing mergers that are subject to statutory Hart-Scott-Rodino (HSR) Act pre-merger notification requirements have operated under the understanding that:

  1. The FTC and U.S. Justice Department (DOJ) will routinely grant “early termination” of review (before the end of the initial 30-day statutory review period) to those transactions posing no plausible competitive threat; and
  2. An enforcement agency’s decision not to request more detailed documents (“second requests”) after an initial 30-day pre-merger review effectively serves as an antitrust “green light” for the proposed acquisition to proceed.

Those understandings, though not statutorily mandated, have significantly reduced antitrust uncertainty and related costs in the planning of routine merger transactions. The rule of law has been advanced through an effective assurance that business combinations that appear presumptively lawful will not be the target of future government legal harassment. This has advanced efficiency in government, as well; it is an optimal, cost-beneficial use of resources for DOJ and the FTC to focus exclusively on those proposed mergers that present a substantial potential threat to consumer welfare.

Two recent FTC pronouncements (one in tandem with DOJ), however, have generated great uncertainty by disavowing (at least temporarily) those two welfare-promoting review policies. Joined by DOJ, the FTC on Feb. 4 announced that the agencies would temporarily suspend early terminations, citing an “unprecedented volume of filings” and a transition to new leadership. More than six months later, this “temporary” suspension remains in effect.

Citing “capacity constraints” and a “tidal wave of merger filings,” the FTC subsequently published an Aug. 3 blog post that effectively abrogated the 30-day “green lighting” of mergers not subject to a second request. It announced that it was sending “warning letters” to firms reminding them that FTC investigations remain open after the initial 30-day period, and that “[c]ompanies that choose to proceed with transactions that have not been fully investigated are doing so at their own risk.”

The FTC’s actions inject unwarranted uncertainty into merger planning and undermine the rule of law. Preventing early termination of transactions that have routinely been approved not only imposes additional costs on business; it hints that some transactions might be subject to novel theories of liability that fall outside the antitrust consensus.

Perhaps more significantly, as three prominent antitrust practitioners point out, the FTC’s warning letters state that:

[T]he FTC may challenge deals that “threaten to reduce competition and harm consumers, workers, and honest businesses.” Adding in harm to both “workers and honest businesses” implies that the FTC may be considering more ways that transactions can have an adverse impact other than just harm to competition and consumers [citation omitted].

Because consensus antitrust merger analysis centers on consumer welfare, not the protection of labor or business interests, any suggestion that the FTC may be extending its reach to these new areas is inconsistent with established legal principles and generates new business-planning risks.

More generally, the Aug. 3 FTC “blog post could be viewed as an attempt to modify the temporal framework of the HSR Act”—in effect, an effort to displace an implicit statutory understanding in favor of an agency diktat, contrary to the rule of law. Commissioner Wilson sees the blog post as a means to keep investigations open indefinitely and, thus, an attack on the decades-old HSR framework for handling most merger reviews in an expeditious fashion (see here). Commissioner Phillips is concerned about an attempt to chill legal M&A transactions across the board, which is particularly unfortunate when there is no reason to conclude that particular transactions are illegal (see here).

Finally, the historical record raises serious questions about the “resource constraint” justification for the FTC’s new merger review policies:

Through the end of July 2021, more than 2,900 transactions were reported to the FTC. It is not clear, however, whether these record-breaking HSR filing numbers have led (or will lead) to more deals being investigated. Historically, only about 13 percent of all deals reported are investigated in some fashion, and roughly 3 percent of all deals reported receive a more thorough, substantive review through the issuance of a Second Request. Even if more deals are being reported, for the majority of transactions, the HSR process is purely administrative, raising no antitrust concerns, and, theoretically, uses few, if any, agency resources. [Citations omitted.]

Proposed FTC Competition Rulemakings

The new FTC leadership is strongly considering competition rulemakings. As I explained in a recent Truth on the Market post, such rulemakings would fail a cost-benefit test. They raise serious legal risks for the commission and could impose wasted resource costs on the FTC and on private parties. More significantly, they would raise two very serious economic policy concerns:

First, competition rules would generate higher error costs than adjudications. Adjudications cabin error costs by allowing for case-specific analysis of likely competitive harms and procompetitive benefits. In contrast, competition rules inherently would be overbroad and would suffer from a very high rate of false positives. By characterizing certain practices as inherently anticompetitive without allowing for consideration of case-specific facts bearing on actual competitive effects, findings of rule violations inevitably would condemn some (perhaps many) efficient arrangements.

Second, competition rules would undermine the rule of law and thereby reduce economic welfare. FTC-only competition rules could lead to disparate legal treatment of a firm’s business practices, depending upon whether the FTC or the U.S. Justice Department was the investigating agency. Also, economic efficiency gains could be lost due to the chilling of aggressive efficiency-seeking business arrangements in those sectors subject to rules. [Emphasis added.]

In short, common law antitrust adjudication, focused on the consumer welfare standard, has done a good job of promoting a vibrant competitive economy in an efficient fashion. FTC competition rulemaking would not.

Conclusion

Recent FTC actions have undermined consensus antitrust-enforcement standards and have departed from established merger-review procedures with respect to seemingly uncontroversial consolidations. Those decisions have imposed costly uncertainty on the business sector and are thereby likely to disincentivize efficiency-seeking arrangements. What’s more, by implicitly rejecting consensus antitrust principles, they denigrate the primacy of the rule of law in antitrust enforcement. The FTC’s pursuit of competition rulemaking would further damage the rule of law by imposing arbitrary strictures that ignore matter-specific considerations bearing on the justifications for particular business decisions.

Fortunately, these are early days in the Biden administration. The problematic initial policy decisions delineated in this comment could be reversed based on further reflection and deliberation within the commission. Chairwoman Lina Khan and her fellow Democratic commissioners would benefit by consulting more closely with Commissioners Wilson and Phillips to reach agreement on substantive and procedural enforcement policies that are better tailored to promote consumer welfare and enhance vibrant competition. Such policies would benefit the U.S. economy in a manner consistent with the rule of law.

The recent launch of the international Multilateral Pharmaceutical Merger Task Force (MPMTF) is just the latest example of burgeoning cooperative efforts by leading competition agencies to promote convergence in antitrust enforcement. (See my recent paper on the globalization of antitrust, which assesses multinational cooperation and convergence initiatives in greater detail.) In what is a first, the U.S. Federal Trade Commission (FTC), the U.S. Justice Department’s (DOJ) Antitrust Division, offices of state Attorneys General, the European Commission’s Competition Directorate, Canada’s Competition Bureau, and the U.K.’s Competition and Markets Authority (CMA) jointly created the MPMTF in March 2021 “to update their approach to analyzing the effects of pharmaceutical mergers.”

To help inform its analysis, in May 2021 the MPMTF requested public comments concerning the effects of pharmaceutical mergers. The MPMTF sought submissions regarding (among other issues) seven sets of questions:   

  1. What theories of harm should enforcement agencies consider when evaluating pharmaceutical mergers, including theories of harm beyond those currently considered?
  2. What is the full range of a pharmaceutical merger’s effects on innovation? What challenges arise when mergers involve proprietary drug discovery and manufacturing platforms?
  3. In pharmaceutical merger review, how should we consider the risks or effects of conduct such as price-setting practices, reverse payments, and other ways in which pharmaceutical companies respond to or rely on regulatory processes?
  4. How should we approach market definition in pharmaceutical mergers, and how is that implicated by new or evolving theories of harm?
  5. What evidence may be relevant or necessary to assess and, if applicable, challenge a pharmaceutical merger based on any new or expanded theories of harm?
  6. What types of remedies would work in the cases to which those theories are applied?
  7. What factors, such as the scope of assets and characteristics of divestiture buyers, influence the likelihood and success of pharmaceutical divestitures to resolve competitive concerns?

My research assistant Andrew Mercado and I recently submitted comments for the record addressing the questions posed by the MPMTF. We concluded:

Federal merger enforcement in general and FTC pharmaceutical merger enforcement in particular have been effective in promoting competition and consumer welfare. Proposed statutory amendments to strengthen merger enforcement not only are unnecessary, but also would, if enacted, tend to undermine welfare and would thus be poor public policy. A brief analysis of seven questions propounded by the Multilateral Pharmaceutical Merger Task Force suggests that: (a) significant changes in enforcement policies are not warranted; and (b) investigators should employ sound law and economics analysis, taking full account of merger-related efficiencies, when evaluating pharmaceutical mergers. 

While we leave it to interested readers to review our specific comments, this commentary highlights one key issue which we stressed—the importance of giving due weight to efficiencies (and, in particular, dynamic efficiencies) in evaluating pharma mergers. We also note an important critique by FTC Commissioner Christine Wilson of the treatment accorded merger-related efficiencies by U.S. antitrust enforcers.   

Discussion

Innovation in pharmaceuticals and vaccines has immensely significant economic and social consequences, as demonstrated most recently in the handling of the COVID-19 pandemic. As such, it is particularly important that public policy not stand in the way of realizing efficiencies that promote innovation in these markets. This observation applies directly, of course, to pharmaceutical antitrust enforcement, in general, and to pharma merger enforcement, in particular.

Regrettably, however, though merger-enforcement policy has been generally sound, it has somewhat undervalued merger-related efficiencies.

Although U.S. antitrust enforcers give lip service to their serious consideration of efficiencies in merger reviews, the reality appears to be quite different, as documented by Commissioner Wilson in a 2020 speech.

Wilson’s General Merger-Efficiencies Critique: According to Wilson, the combination of finding narrow markets and refusing to weigh out-of-market efficiencies has created major “legal and evidentiary hurdles a defendant must clear when seeking to prove offsetting procompetitive efficiencies.” What’s more, the “courts [have] largely continue[d] to follow the Agencies’ lead in minimizing the importance of efficiencies.” Wilson shows that “the Horizontal Merger Guidelines text and case law appear to set different standards for demonstrating harms and efficiencies,” and argues that this “asymmetric approach has the obvious potential consequence of preventing some procompetitive mergers that increase consumer welfare.” Wilson concludes on a more positive note that this problem can be addressed by having enforcers: (1) treat harms and efficiencies symmetrically; and (2) establish clear and reasonable expectations for what types of efficiency analysis will and will not pass muster.

While our filing with the MPMTF did not discuss Wilson’s general treatment of merger efficiencies, one would hope that the task force will appropriately weigh it in its deliberations. Our filing instead briefly addressed two “informational efficiencies” that may arise in the context of pharmaceutical mergers. These include:

More Efficient Resource Reallocation: The theory of the firm teaches that mergers may be motivated by the underutilization or misallocation of assets, or the opportunity to create welfare-enhancing synergies. In the pharmaceutical industry, these synergies may come from joining complementary research and development programs, combining diverse and specialized expertise that may be leveraged for better, faster drug development and more innovation.

Enhanced R&D: Currently, much of the R&D for large pharmaceutical companies is achieved through partnerships or investment in small biotechnology and research firms specializing in a single type of therapy. Whereas large pharmaceutical companies have expertise in marketing, navigating regulation, and undertaking trials of new drugs, small, research-focused firms can achieve greater advancements in medicine with smaller budgets. Furthermore, changes within firms brought about by a merger may increase innovation.

With increases in intellectual property and proprietary data that come from the merging of two companies, smaller research firms that work with the merged entity may have access to greater pools of information, enhancing the potential for innovation without increasing spending. This change not only raises the efficiency of the research being conducted in these small firms, but also increases the probability of a breakthrough without an increase in risk.

Conclusion

U.S. pharmaceutical merger enforcement has been fairly effective in forestalling anticompetitive combinations while allowing consumer welfare-enhancing transactions to go forward. Policy in this area should remain generally the same. Enforcers should continue to base enforcement decisions on sound economic theory fully supported by case-specific facts. Enforcement agencies could benefit, however, by placing a greater emphasis on efficiencies analysis. In particular, they should treat harms and efficiencies symmetrically (as recommended by Commissioner Wilson), and fully take into account likely resource reallocation and innovation-related efficiencies.

Democratic leadership of the House Judiciary Committee has leaked the approach it plans to take to revise U.S. antitrust law and enforcement, with a particular focus on digital platforms.

Broadly speaking, the bills would: raise fees for larger mergers and increase appropriations to the FTC and DOJ; require data portability and interoperability; declare that large platforms can’t own businesses that compete with other businesses that use the platform; effectively ban large platforms from making any acquisitions; and generally declare that large platforms cannot preference their own products or services. 

All of these are ideas that have been discussed before. They are very much in line with the EU’s approach to competition, which places more regulation-like burdens on big businesses, and which is introducing a Digital Markets Act that mirrors the Democrats’ proposals. Some Republicans are reportedly supportive of the proposals, which is surprising since they mean giving broad, discretionary powers to antitrust authorities that are controlled by Democrats who take an expansive view of antitrust enforcement as a way to achieve their other social and political goals. The proposals may also be unpopular with consumers if, for example, they mean that popular features like the integration of Maps into relevant Google Search results become prohibited.

The multi-bill approach here suggests that the committee is trying to throw as much at the wall as possible to see what sticks. It may reflect a lack of confidence among the proposers in their ability to get their proposals through wholesale, especially given that Amy Klobuchar’s CALERA bill in the Senate creates an alternative that, while still highly interventionist, does not create ex ante regulation of the Internet the same way these proposals do.

In general, the bills are misguided for three main reasons. 

One, they seek to make digital platforms into narrow conduits for other firms to operate on, ignoring the value created by platforms curating their own services by, for example, creating quality controls on entry (as Apple does on its App Store) or by integrating their services with related products (like, say, Google adding events from Gmail to users’ Google Calendars). 

Two, they ignore the procompetitive effects of digital platforms extending into each other’s markets and competing with each other there, in ways that often lead to far more intense competition—and better outcomes for consumers—than if the only firms that could compete with the incumbent platform were small startups.

Three, they ignore the importance of incentives for innovation. Platforms invest in new and better products when they can make money from doing so, and limiting their ability to do that means weakened incentives to innovate. Startups and their founders and investors are driven, in part, by the prospect of being acquired, often by the platforms themselves. Making those acquisitions more difficult, or even impossible, means removing one of the key ways startup founders can exit their firms, and hence one of the key rewards and incentives for starting an innovative new business. 

For more, our “Joint Submission of Antitrust Economists, Legal Scholars, and Practitioners” set out why many of the House Democrats’ assumptions about the state of the economy and antitrust enforcement were mistaken. And my post, “Buck’s “Third Way”: A Different Road to the Same Destination”, argued that House Republicans like Ken Buck were misguided in believing they could support some of the proposals and avoid the massive regulatory oversight that they said they rejected.

Platform Anti-Monopoly Act 

The flagship bill, introduced by Antitrust Subcommittee Chairman David Cicilline (D-R.I.), establishes a definition of “covered platform” used by several of the other bills. The measures would apply to platforms that have at least 500,000 U.S.-based users, have a market capitalization of more than $600 billion, and are deemed a “critical trading partner” with the ability to restrict or impede the access that a “dependent business” has to its users or customers.
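
For concreteness, the coverage test just described can be read as a conjunction of three conditions. The sketch below is only an illustration of that reading; the field names and the reduction of the qualitative “critical trading partner” prong to a simple flag are my own simplifications, not anything defined in the bill.

```python
from dataclasses import dataclass

@dataclass
class Platform:
    us_users: int                   # U.S.-based users
    market_cap_usd: float           # market capitalization in dollars
    critical_trading_partner: bool  # qualitative prong, reduced to a flag here

def is_covered_platform(p: Platform) -> bool:
    """Illustrative reading of the thresholds summarized above."""
    return (p.us_users >= 500_000
            and p.market_cap_usd > 600e9
            and p.critical_trading_partner)

# Example with made-up figures: a large platform that meets all three prongs.
print(is_covered_platform(Platform(150_000_000, 1.2e12, True)))
```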

Cicilline’s bill would bar these covered platforms from being able to promote their own products and services over the products and services of competitors who use the platform. It also defines a number of other practices that would be regarded as discriminatory, including: 

  • Restricting or impeding “dependent businesses” from being able to access the platform or its software on the same terms as the platform’s own lines of business;
  • Conditioning access or status on purchasing other products or services from the platform; 
  • Using user data to support the platform’s own products in ways not extended to competitors; 
  • Restricting the platform’s commercial users from using or accessing data generated on the platform from their own customers;
  • Restricting platform users from uninstalling software pre-installed on the platform;
  • Restricting platform users from providing links to facilitate business off of the platform;
  • Preferencing the platform’s own products or services in search results or rankings;
  • Interfering with how a dependent business prices its products; 
  • Impeding a dependent business’ users from connecting to services or products that compete with those offered by the platform; and
  • Retaliating against users who raise concerns with law enforcement about potential violations of the act.

On a basic level, these would prohibit lots of behavior that is benign and that can improve the quality of digital services for users. Apple pre-installing a Weather app on the iPhone would, for example, run afoul of these rules, and the rules as proposed could prohibit iPhones from coming with pre-installed apps at all. Instead, users would have to manually download each app themselves, if indeed Apple was allowed to include the App Store itself pre-installed on the iPhone, given that this competes with other would-be app stores.

Apart from the obvious reduction in the quality of services and convenience for users that this would involve, this kind of conduct (known as “self-preferencing”) is usually procompetitive. For example, self-preferencing allows platforms to compete with one another by using their strength in one market to enter a different one; Google’s Shopping results in the Search page increase the competition that Amazon faces, because it presents consumers with a convenient alternative when they’re shopping online for products. Similarly, Amazon’s purchase of the video-game streaming service Twitch, and the self-preferencing it does to encourage Amazon customers to use Twitch and support content creators on that platform, strengthens the competition that rivals like YouTube face. 

It also helps innovation, because it gives firms a reason to invest in services that would otherwise be unprofitable for them. Google invests in Android, and gives much of it away for free, because it can bundle Google Search into the OS, and make money from that. If Google could not self-preference Google Search on Android, the open source business model simply wouldn’t work—it wouldn’t be able to make money from Android, and would have to charge for it in other ways that may be less profitable and hence give it less reason to invest in the operating system. 

This behavior can also increase innovation by the competitors of these companies, both by prompting them to improve their products (as, for example, Google Android did with Microsoft’s mobile operating system offerings) and by growing the size of the customer base for products of this kind. For example, video games published by console manufacturers (like Nintendo’s Zelda and Mario games) are often blockbusters that grow the overall size of the user base for the consoles, increasing demand for third-party titles as well.

For more, check out “Against the Vertical Discrimination Presumption” by Geoffrey Manne and Dirk Auer’s piece “On the Origin of Platforms: An Evolutionary Perspective”.

Ending Platform Monopolies Act 

Sponsored by Rep. Pramila Jayapal (D-Wash.), this bill would make it illegal for covered platforms to control lines of business that pose “irreconcilable conflicts of interest,” enforced through civil litigation powers granted to the Federal Trade Commission (FTC) and the U.S. Justice Department (DOJ).

Specifically, the bill targets lines of business that create “a substantial incentive” for the platform to advantage its own products or services over those of competitors that use the platform, or to exclude or disadvantage competing businesses from using the platform. The FTC and DOJ could potentially order that platforms divest lines of business that violate the act.

This targets similar conduct as the previous bill, but involves the forced separation of different lines of business. It also appears to go even further, seemingly implying that companies like Google could not even develop services like Google Maps or Chrome because their existence would create such “substantial incentives” to self-preference them over the products of their competitors. 

Apart from the straightforward loss of innovation and product developments this would involve, requiring every tech company to be narrowly focused on a single line of business would substantially entrench Big Tech incumbents, because it would make it impossible for them to extend into adjacent markets to compete with one another. For example, Apple could not develop a search engine to compete with Google under these rules, and Amazon would be forced to sell its video-streaming services that compete with Netflix and YouTube.

For more, check out Geoffrey Manne’s written testimony to the House Antitrust Subcommittee and “Platform Self-Preferencing Can Be Good for Consumers and Even Competitors” by Geoffrey and me. 

Platform Competition and Opportunity Act

Introduced by Rep. Hakeem Jeffries (D-N.Y.), this bill would bar covered platforms from making essentially any acquisitions at all. To be excluded from the ban on acquisitions, the platform would have to present “clear and convincing evidence” that the acquired business does not compete with the platform for any product or service, does not pose a potential competitive threat to the platform, and would not in any way enhance or help maintain the acquiring platform’s market position. 

The two main ways that founders and investors can make a return on a successful startup are to float the company at IPO or to be acquired by another business. The latter of these, acquisitions, is extremely important. Between 2008 and 2019, 90 percent of U.S. startup exits happened through acquisition. In a recent survey, half of current startup executives said they aimed to be acquired. One study found that countries that made it easier for firms to be taken over saw a 40-50 percent increase in VC activity, and that U.S. states that made acquisitions harder saw a 27 percent decrease in VC investment deals.

So this proposal would probably reduce investment in U.S. startups, since it makes it more difficult for them to be acquired, and it would reduce innovation as a result. It would also reduce inter-platform competition by banning deals that allow firms to move into new markets, like the acquisition of Beats that helped Apple to build a Spotify competitor, or the deals that helped Google, Microsoft, and Amazon build cloud-computing services that all compete with each other. It could also reduce competition faced by old industries, by preventing tech companies from buying firms that enable them to move into new markets—like Amazon’s acquisitions of health-care companies that it has used to build a health-care offering. Even Walmart’s acquisition of Jet.com, which it has used to build an Amazon competitor, could have been banned under this law if Walmart had had a higher market cap at the time.

For more, check out Dirk Auer’s piece “Facebook and the Pros and Cons of Ex Post Merger Reviews” and my piece “Cracking down on mergers would leave us all worse off”. 

ACCESS Act

The Augmenting Compatibility and Competition by Enabling Service Switching (ACCESS) Act, sponsored by Rep. Mary Gay Scanlon (D-Pa.), would establish data portability and interoperability requirements for platforms. 

Under the terms of the legislation, covered platforms would be required to allow third parties to transfer data to their users or, with the user’s consent, to a competing business. It also would require platforms to facilitate compatible and interoperable communications with competing businesses. The law directs the FTC to establish technical committees to promulgate the standards for portability and interoperability.

Data portability and interoperability involve trade-offs in terms of security and usability, and overseeing them can be extremely costly and difficult. In security terms, interoperability requirements prevent companies from using closed systems to protect users from hostile third parties. Mandatory openness means increasing—sometimes, substantially so—the risk of data breaches and leaks. In practice, that could mean users’ private messages or photos being leaked more frequently, or activity on a social media page that one user considers to be “their” private data, but that “belongs” to another user under the terms of use, being exported and publicized.

It can also make digital services more buggy and unreliable, by requiring that they are built in a more “open” way that may be more prone to unanticipated software mismatches. A good example is that of Windows vs iOS; Windows is far more interoperable with third-party software than iOS is, but tends to be less stable as a result, and users often prefer the closed, stable system. 

Interoperability requirements also entail ongoing regulatory oversight, to make sure data is being provided to third parties reliably. It’s difficult to build an app around another company’s data without assurance that the data will be available when users want it. For a requirement as broad as this bill’s, that could mean setting up quite a large new de facto regulator. 

In the UK, Open Banking (an interoperability requirement imposed on British retail banks) has suffered from significant service outages, and targets a level of uptime that many developers complain is too low for them to build products around. Nor has Open Banking yet led to any obvious competition benefits.

For more, check out Gus Hurwitz’s piece “Portable Social Media Aren’t Like Portable Phone Numbers” and my piece “Why Data Interoperability Is Harder Than It Looks: The Open Banking Experience”.

Merger Filing Fee Modernization Act

This bill, which mirrors language in the Endless Frontier Act recently passed by the U.S. Senate, would significantly raise filing fees for the largest mergers. Rather than the current cap of $280,000 for mergers valued at more than $500 million, the bill—sponsored by Rep. Joe Neguse (D-Colo.)—would assess fees of $2.25 million for mergers valued at more than $5 billion; $800,000 for those valued at between $2 billion and $5 billion; and $400,000 for those between $1 billion and $2 billion.

Smaller mergers would actually see their filing fees cut: from $280,000 to $250,000 for those between $500 million and $1 billion; from $125,000 to $100,000 for those between $161.5 million and $500 million; and from $45,000 to $30,000 for those less than $161.5 million. 
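
To make the proposed schedule concrete, here is a minimal sketch that maps a transaction’s value to its filing fee using the tiers summarized in the two paragraphs above. The function name and the treatment of exact boundary values are my own illustrative choices, not anything specified in the bill text.

```python
def proposed_merger_filing_fee(deal_value_usd: float) -> int:
    """Illustrative fee lookup for the proposed schedule described above.

    Tier boundaries follow the summary in the text; how the bill treats a
    deal valued exactly at a boundary is not addressed here.
    """
    million = 1_000_000
    if deal_value_usd > 5_000 * million:
        return 2_250_000
    if deal_value_usd > 2_000 * million:
        return 800_000
    if deal_value_usd > 1_000 * million:
        return 400_000
    if deal_value_usd > 500 * million:
        return 250_000
    if deal_value_usd > 161.5 * million:
        return 100_000
    return 30_000  # smallest reportable transactions under the proposal

# Example: a $3.2 billion merger would pay an $800,000 filing fee.
print(proposed_merger_filing_fee(3_200_000_000))
```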

In addition, the bill would appropriate $418 million to the FTC and $252 million to the DOJ’s Antitrust Division for Fiscal Year 2022. Most people in the antitrust world are generally supportive of more funding for the FTC and DOJ, although whether this is actually good or not depends on how that money is spent.

It’s hard to object if it goes toward deepening the agencies’ capacities and knowledge: hiring and retaining higher-quality staff with salaries that are more competitive with those offered by the private sector, and making greater efforts to study the effects of the antitrust laws and past cases on the economy. If it goes toward broadening the agencies’ activities, by enabling them to pursue a more aggressive enforcement agenda and to enforce whatever of the above proposals make it into law, then it could be very harmful.

For more, check out my post “Buck’s “Third Way”: A Different Road to the Same Destination” and Thom Lambert’s post “Bad Blood at the FTC”.

AT&T’s $102 billion acquisition of Time Warner in 2019 will go down in M&A history as an exceptionally ill-advised transaction, resulting in the loss of tens of billions of dollars of shareholder value. It should also go down in history as an exceptionally ill-chosen target of antitrust intervention. The U.S. Department of Justice, with support from many academic and policy commentators, asserted with confidence that the vertical combination of these content and distribution powerhouses would result in an entity that could exercise market power to the detriment of competitors and consumers.

The chorus of condemnation continued with vigor even after the DOJ’s loss in court and AT&T’s consummation of the transaction. With AT&T’s May 17 announcement that it will unwind the two-year-old acquisition and therefore abandon its strategy to integrate content and distribution, it is clear these predictions of impending market dominance were unfounded. 

This widely shared overstatement of antitrust risk derives from a simple but fundamental error: regulators and commentators were looking at the wrong market.  

The DOJ’s Antitrust Case against the Transaction

The business case for the AT&T/Time Warner transaction was straightforward: it promised to generate synergies by combining a leading provider of wireless, broadband, and satellite television services with a leading supplier of video content. The DOJ’s antitrust case against the transaction was similarly straightforward: the combined entity would have the ability to foreclose “must have” content from other “pay TV” (cable and satellite television) distributors, resulting in adverse competitive effects. 

This foreclosure strategy was expected to take two principal forms. First, AT&T could temporarily withhold (or threaten to withhold) content from rival distributors absent payment of a higher carriage fee, which would then translate into higher fees for subscribers. Second, AT&T could permanently withhold content from rival distributors, who would then lose subscribers to AT&T’s DirecTV satellite television service, further enhancing AT&T’s market power.

Many commentators, both in the trade press and significant portions of the scholarly community, characterized the transaction as posing a high-risk threat to competitive conditions in the pay TV market. These assertions reflected the view that the new entity would exercise a bottleneck position over video-content distribution in the pay TV market and would exercise that power to impose one-sided terms to the detriment of content distributors and consumers. 

Notwithstanding this bevy of endorsements, the DOJ’s case was rejected by the district court and the decision was upheld by the D.C. appellate court. The district judge concluded that the DOJ had failed to show that the combined entity would exercise any credible threat to withhold “must have” content from distributors. A key reason: the lost carriage fees AT&T would incur if it did withhold content were so high, and the migration of subscribers from rival pay TV services so speculative, that it would represent an obviously irrational business strategy. In short: no sophisticated business party would ever take AT&T’s foreclosure threat seriously, in which case the DOJ’s predictions of market power were insufficiently compelling to justify the use of government power to block the transaction.

The Fundamental Flaws in the DOJ’s Antitrust Case

The logical and factual infirmities of the DOJ’s foreclosure hypothesis have been extensively and ably covered elsewhere, and I will not repeat that analysis here. Following up on my previous TOTM commentary on the transaction, I would like to emphasize that the DOJ’s case against the transaction was flawed from the outset for two more fundamental reasons.

False Assumption #1

The assumption that the combined entity could withhold so-called “must have” content to cause significant and lasting competitive injury to rival distributors flies in the face of market realities.  Content is an abundant, renewable, and mobile resource. There are few entry barriers to the content industry: a commercially promising idea will likely attract capital, which will in turn secure the necessary equipment and personnel for production purposes. Any rival distributor can access a rich menu of valuable content from a plethora of sources, both domestically and worldwide, each of which can provide new content, as required. Even if the combined entity held a license to distribute purportedly “must have” content, that content would be up for sale (more precisely, re-licensing) to the highest bidder as soon as the applicable contract term expired. This is not mere theorizing: it is a widely recognized feature of the entertainment industry.

False Assumption #2

Even assuming the combined entity could wield a portfolio of “must have” content to secure a dominant position in the pay TV market and raise content acquisition costs for rival pay TV services, it still would lack any meaningful pricing power in the relevant consumer market. The reason: significant portions of the viewing population do not want any pay TV at all, or want only dramatically “slimmed-down” packages. Instead, viewers increasingly consume content primarily through video-streaming services—a market in which platforms such as Amazon and Netflix already enjoyed leading positions at the time of the transaction. Hence, even accepting the DOJ’s theory that the combined entity could somehow monopolize the pay TV market consisting of cable and satellite television services, the theory still fails to show any reasonable expectation of anticompetitive effects in the broader and economically relevant market comprising pay TV and streaming services. Any attempt to exercise pricing power in the pay TV market would be economically self-defeating, since it would likely prompt a significant portion of consumers to switch to (or rely exclusively on) streaming services.

The Antitrust Case for the Transaction

When properly situated within the market that was actually being targeted in the AT&T/Time Warner acquisition, the combined entity posed little credible threat of exercising pricing power. To the contrary, the combined entity was best understood as an entrant that sought to challenge the two pioneer entities—Amazon and Netflix—in the “over the top” content market.

Each of these incumbent platforms had (and still has) a multi-billion-dollar content-production budget that rivals or exceeds the budgets of major Hollywood studios, along with a worldwide subscriber base numbering in the hundreds of millions. If that’s not enough, AT&T was not the only entity that observed the displacement of pay TV by streaming services, as illustrated by the roughly concurrent entry of Disney’s Disney+ service, Apple’s Apple TV+ service, Comcast NBCUniversal’s Peacock service, and others. Both the existing and new competitors are formidable entities operating in a market with formidable capital requirements. In 2019, Netflix, Amazon, and Apple expended approximately $15 billion, $6 billion, and $6 billion, respectively, on content; by contrast, HBO Max, AT&T’s streaming service, expended approximately $3.5 billion.

In short, the combined entity faced stiff competition from existing and reasonably anticipated competitors, requiring several billion dollars of “content spend” just to stay in the running. Far from being able to exercise pricing power in an imaginary market defined by DOJ litigators for strategic purposes, the AT&T/Time Warner entity faced the challenge of merely surviving in a real-world market populated by several exceptionally well-financed competitors. At best, the combined entity “threatened” to deliver incremental competitive benefits by adding a robust new platform to the video-streaming market; at worst, it would fail in this objective and cause no incremental competitive harm. As it turns out, the latter appears to be the case.

The Enduring Virtues of Antitrust Prudence

AT&T’s M&A fiasco has important lessons for broader antitrust debates about the evidentiary standards that should be applied by courts and agencies when assessing alleged antitrust violations, in general, and vertical restraints, in particular.  

Among some scholars, regulators, and legislators, it has increasingly become received wisdom that prevailing evidentiary standards, as reflected in federal case law and agency guidelines, are excessively demanding and have purportedly induced chronic underenforcement. It has been widely asserted that the courts’ and regulators’ focus on avoiding “false positives” and the associated costs of disrupting innocuous or beneficial business practices has resulted in an overly cautious enforcement posture, especially with respect to mergers and vertical restraints.

In fact, these views were expressed by some commentators in endorsing the antitrust case against the AT&T/Time Warner transaction. Some legislators have gone further and argued for substantial amendments to the antitrust laws that would give enforcers and courts greater latitude to block or re-engineer combinations whose competitive risks could not be sufficiently demonstrated under current statutory or case law.

The swift downfall of the AT&T/Time Warner transaction casts great doubt on this critique and the accompanying policy proposals. It was precisely the district court’s rigorous application of those “overly” demanding evidentiary standards that avoided what would have been a clear false-positive error. The failure of the “blockbuster” combination to achieve not only market dominance, but even reasonably successful entry, validates the wisdom of retaining those standards.

The fundamental mismatch between the widely supported antitrust case against the transaction and the widely overlooked business realities of the economically relevant consumer market illustrates the ease with which largely theoretical and decontextualized economic models of competitive harm can lead to enforcement actions that lack any reasonable basis in fact.   

Politico has released a cache of confidential Federal Trade Commission (FTC) documents in connection with a series of articles on the commission’s antitrust probe into Google Search a decade ago. The headline of the first piece in the series argues the FTC “fumbled the future” by failing to follow through on staff recommendations to pursue antitrust intervention against the company. 

But while the leaked documents shed interesting light on the inner workings of the FTC, they do very little to substantiate the case that the FTC dropped the ball when the commissioners voted unanimously not to bring an action against Google.

Drawn primarily from memos by the FTC’s lawyers, the Politico report purports to uncover key revelations that undermine the FTC’s decision not to sue Google. None of the revelations, however, provide evidence that Google’s behavior actually harmed consumers.

The report’s overriding claim—and the one most consistently advanced by antitrust activists on Twitter—is that FTC commissioners wrongly sided with the agency’s economists (who cautioned against intervention) rather than its lawyers (who tenuously recommended very limited intervention).

Indeed, the overarching narrative is that the lawyers knew what was coming, while the economists took positions that turned out to be wildly off the mark:

But the FTC’s economists successfully argued against suing the company, and the agency’s staff experts made a series of predictions that would fail to match where the online world was headed:

— They saw only “limited potential for growth” in ads that track users across the web — now the backbone of Google parent company Alphabet’s $182.5 billion in annual revenue.

— They expected consumers to continue relying mainly on computers to search for information. Today, about 62 percent of those queries take place on mobile phones and tablets, nearly all of which use Google’s search engine as the default.

— They thought rivals like Microsoft, Mozilla or Amazon would offer viable competition to Google in the market for the software that runs smartphones. Instead, nearly all U.S. smartphones run on Google’s Android and Apple’s iOS.

— They underestimated Google’s market share, a heft that gave it power over advertisers as well as companies like Yelp and Tripadvisor that rely on search results for traffic.

The report thus asserts that:

The agency ultimately voted against taking action, saying changes Google made to its search algorithm gave consumers better results and therefore didn’t unfairly harm competitors.

That conclusion underplays what the FTC’s staff found during the probe. In 312 pages of documents, the vast majority never publicly released, staffers outlined evidence that Google had taken numerous steps to ensure it would continue to dominate the market — including emerging arenas such as mobile search and targeted advertising. [EMPHASIS ADDED]

What really emerges from the leaked memos, however, is analysis by both the FTC’s lawyers and economists infused with a healthy dose of humility. There were strong political incentives to bring a case. As one of us noted upon the FTC’s closing of the investigation: “It’s hard to imagine an agency under more pressure, from more quarters (including the Hill), to bring a case around search.” Yet FTC staff and commissioners resisted that pressure, because prediction is hard. 

Ironically, the very prediction errors that the agency’s staff cautioned against are now being held against them. Yet the claims that these errors (especially the economists’) systematically cut in one direction (i.e., against enforcement) and that all of their predictions were wrong are both wide of the mark. 

Decisions Under Uncertainty

In seeking to make an example out of the FTC economists’ inaccurate predictions, critics ignore that antitrust investigations in dynamic markets always involve a tremendous amount of uncertainty; false predictions are the norm. Accordingly, the key challenge for policymakers is not so much to predict correctly, but to minimize the impact of incorrect predictions.

Seen in this light, the FTC economists’ memo is far from the laissez-faire manifesto that critics make it out to be. Instead, it shows agency officials wrestling with uncertain market outcomes and choosing a course of action under the assumption that the predictions they made might indeed be wrong.

Consider the following passage from FTC economist Ken Heyer’s memo:

The great American philosopher Yogi Berra once famously remarked “Predicting is difficult, especially about the future.” How right he was. And yet predicting, and making decisions based on those predictions, is what we are charged with doing. Ignoring the potential problem is not an option. So I will be reasonably clear about my own tentative conclusions and recommendation, recognizing that reasonable people, perhaps applying a somewhat different standard, may disagree. My recommendation derives from my read of the available evidence, combined with the standard I personally find appropriate to apply to Commission intervention. [EMPHASIS ADDED]

In other words, contrary to what many critics have claimed, it simply is not the case that the FTC’s economists based their recommendations on bullish predictions about the future that ultimately failed to transpire. Instead, they merely recognized that, in a dynamic and unpredictable environment, antitrust intervention requires both a clear-cut theory of anticompetitive harm and a reasonable probability that remedies can improve consumer welfare. According to the economists, those conditions were absent with respect to Google Search.

Perhaps more importantly, it is worth asking why the economists’ erroneous predictions matter at all. Do critics believe that developments the economists missed warrant a different normative stance today?

In that respect, it is worth noting that the economists’ skepticism appears to have rested first and foremost on the speculative nature of the harms alleged and the difficulty associated with designing appropriate remedies. And yet, if anything, these two concerns appear even more salient today.

Indeed, the remedies imposed against Google in the EU have not delivered the outcomes that enforcers expected (here and here). This could either be because the remedies were insufficient or because Google’s market position was not due to anticompetitive conduct. Similarly, there is still no convincing economic theory or empirical research to support the notion that exclusive pre-installation and self-preferencing by incumbents harm consumers, and a great deal of reason to think they benefit them (see, e.g., our discussions of the issue here and here). 

Against this backdrop, criticism of the FTC economists appears to be driven more by a prior assumption that intervention is necessary—and that it was and is disingenuous to think otherwise—than by evidence that erroneous predictions materially affected the outcome of the proceedings.

To take one example, the fact that ad tracking grew faster than the FTC economists believed it would is no less consistent with vigorous competition—and Google providing a superior product—than with anticompetitive conduct on Google’s part. The same applies to the growth of mobile operating systems. Ditto the fact that no rival has managed to dislodge Google in its most important markets. 

In short, not only were the economist memos informed by the very prediction difficulties that critics are now pointing to, but critics have not shown that any of the staff’s (inevitably) faulty predictions warranted a different normative outcome.

Putting Erroneous Predictions in Context

So what were these faulty predictions, and how important were they? Politico asserts that “the FTC’s economists successfully argued against suing the company, and the agency’s staff experts made a series of predictions that would fail to match where the online world was headed,” tying this to the FTC’s failure to intervene against Google over “tactics that European regulators and the U.S. Justice Department would later label antitrust violations.” The clear message is that the current actions are presumptively valid, and that the FTC’s economists thwarted earlier intervention based on faulty analysis.

But it is far from clear that these faulty predictions would have justified taking a tougher stance against Google. One key question for antitrust authorities is whether they can be reasonably certain that more efficient competitors will be unable to dislodge an incumbent. This assessment is necessarily forward-looking. Framed this way, greater market uncertainty (for instance, because policymakers are dealing with dynamic markets) usually cuts against antitrust intervention.

This does not entirely absolve the FTC economists who made the faulty predictions. But it does suggest the right question is not whether the economists made mistakes, but whether virtually everyone did so. The latter would be evidence of uncertainty, and thus weigh against antitrust intervention.

In that respect, it is worth noting that the staff who recommended that the FTC intervene also misjudged the future of digital markets. For example, while Politico surmises that the FTC “underestimated Google’s market share, a heft that gave it power over advertisers as well as companies like Yelp and Tripadvisor that rely on search results for traffic,” there is a case to be made that the FTC overestimated this power. If anything, Google’s continued growth has opened new niches in the online advertising space.

Pinterest provides a fitting example; despite relying heavily on Google for traffic, its ad-funded service has witnessed significant growth. The same is true of other vertical search engines like Airbnb, Booking.com, and Zillow. While we cannot know the counterfactual, the vertical search industry has certainly not been decimated by Google’s “monopoly”; quite the opposite. Unsurprisingly, this has coincided with a significant decrease in the cost of online advertising, and the growth of online advertising relative to other forms.

Politico asserts not only that the economists’ market share and market power calculations were wrong, but that the lawyers knew better:

The economists, relying on data from the market analytics firm Comscore, found that Google had only limited impact. They estimated that between 10 and 20 percent of traffic to those types of sites generally came from the search engine.

FTC attorneys, though, used numbers provided by Yelp and found that 92 percent of users visited local review sites from Google. For shopping sites like eBay and TheFind, the referral rate from Google was between 67 and 73 percent.

This compares apples and oranges, or maybe oranges and grapefruit. The economists’ data, from Comscore, applied to vertical search overall. They explicitly noted that shares for particular sites could be much higher or lower: for comparison shopping, for example, “ranging from 56% to less than 10%.” This, of course, highlights a problem with the data provided by Yelp, et al.: it concerns only the websites of companies complaining about Google, not the overall flow of traffic for vertical search.

But the more important point is that none of the data discussed in the memos represents the overall flow of traffic for vertical search. Take Yelp, for example. According to the lawyers’ memo, 92 percent of Yelp searches were referred from Google. Only, that’s not true. We know it’s not true because, as Yelp CEO Jeremy Stoppelman pointed out around this time in Yelp’s 2012 Q2 earnings call:

When you consider that 40% of our searches come from mobile apps, there is quite a bit of un-monetized mobile traffic that we expect to unlock in the near future.

The numbers being analyzed by the FTC staff were apparently limited to referrals to Yelp’s website from browsers. But is there any reason to think that is the relevant market, or the relevant measure of customer access? Certainly there is nothing in the staff memos to suggest they considered the full scope of the market very carefully here. Indeed, the footnote in the lawyers’ memo presenting the traffic data is offered in support of this claim:

Vertical websites, such as comparison shopping and local websites, are heavily dependent on Google’s web search results to reach users. Thus, Google is in the unique position of being able to “make or break any web-based business.”

It’s plausible that vertical search traffic is “heavily dependent” on Google Search, but the numbers offered in support of that simply ignore the (then) 40 percent of traffic that Yelp acquired through its own mobile app, with no Google involvement at all. In any case, it is also notable that, while there are still somewhat fewer app users than web users (although the number has consistently increased), Yelp’s app users view significantly more pages than its website users do — 10 times as many in 2015, for example.
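
To see how much the headline figure overstates Yelp’s dependence on Google, consider a rough back-of-the-envelope sketch. It is illustrative only, and it assumes the memo’s 92 percent browser-referral rate and the earnings call’s 40 percent app share can simply be combined, even though the two figures measure slightly different things:

```python
# Illustrative back-of-the-envelope calculation (not from the FTC memos):
# if roughly 40% of Yelp searches happened in its own mobile app (with no
# Google referral) and 92% of the remaining browser traffic was referred
# by Google, Google's share of all Yelp searches is well below 92%.
app_share = 0.40              # share of searches from Yelp's app (2012 Q2 earnings call)
browser_share = 1 - app_share # share of searches coming through the website
google_referral_rate = 0.92   # Google-referred share of website visits (lawyers' memo)

google_share_overall = browser_share * google_referral_rate
print(f"Implied Google-referred share of all Yelp searches: {google_share_overall:.0%}")
# Prints roughly 55%, not 92%.
```

Even on these assumptions, Google accounts for only a little over half of Yelp’s search traffic, which is precisely the point the 92 percent figure obscures.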

Also noteworthy is that, for whatever speculative harm Google might be able to visit on the company, at the time of the FTC’s analysis Yelp’s local ad revenue was consistently increasing — by 89% in Q3 2012. And that was without any ad revenue coming from its app (display ads arrived on Yelp’s mobile app in Q1 2013, a few months after the staff memos were written and just after the FTC closed its Google Search investigation). 

In short, the search-engine industry is extremely dynamic and unpredictable. Contrary to what many have surmised from the FTC staff memo leaks, this cuts against antitrust intervention, not in favor of it.

The FTC Lawyers’ Weak Case for Prosecuting Google

At the same time, although not discussed by Politico, the lawyers’ memo also contains errors, suggesting that arguments for intervention were also (inevitably) subject to erroneous prediction.

Among other things, the FTC attorneys’ memo argued that large upfront investments were required to develop cutting-edge algorithms, and that these effectively shielded Google from competition. The memo cites the following as a barrier to entry:

A search engine requires algorithmic technology that enables it to search the Internet, retrieve and organize information, index billions of regularly changing web pages, and return relevant results instantaneously that satisfy the consumer’s inquiry. Developing such algorithms requires highly specialized personnel with high levels of training and knowledge in engineering, economics, mathematics, sciences, and statistical analysis.

If there are barriers to entry in the search-engine industry, algorithms do not seem to be the source. While their market shares may be smaller than Google’s, rival search engines like DuckDuckGo and Bing have been able to enter and gain traction; it is difficult to say that algorithmic technology has proven a barrier to entry. It may be hard to do well, but it certainly has not proved an impediment to new firms entering and developing workable and successful products. Indeed, some extremely successful companies have entered into similar advertising markets on the backs of complex algorithms, notably Instagram, Snapchat, and TikTok. All of these compete with Google for advertising dollars.

The FTC’s legal staff also failed to see that Google would face serious competition in the rapidly growing voice assistant market. In other words, even its search-engine “moat” is far less impregnable than it might at first appear.

Moreover, as Ben Thompson argues in his Stratechery newsletter: 

The Staff memo is completely wrong too, at least in terms of the potential for their proposed remedies to lead to any real change in today’s market. This gets back to why the fundamental premise of the Politico article, along with much of the antitrust chatter in Washington, misses the point: Google is dominant because consumers like it.

This difficulty was deftly highlighted by Heyer’s memo:

If the perceived problems here can be solved only through a draconian remedy of this sort, or perhaps through a remedy that eliminates Google’s legitimately obtained market power (and thus its ability to “do evil”), I believe the remedy would be disproportionate to the violation and that its costs would likely exceed its benefits. Conversely, if a remedy well short of this seems likely to prove ineffective, a remedy would be undesirable for that reason. In brief, I do not see a feasible remedy for the vertical conduct that would be both appropriate and effective, and which would not also be very costly to implement and to police. [EMPHASIS ADDED]

Of course, we now know that this turned out to be a huge issue with the EU’s competition cases against Google. The remedies in both the EU’s Google Shopping and Android decisions were severely criticized by rival firms and consumer-defense organizations (here and here), but were ultimately upheld, in part because even the European Commission likely saw more forceful alternatives as disproportionate.

And in the few places where the legal staff concluded that Google’s conduct may have caused harm, there is good reason to think that their analysis was flawed.

Google’s ‘revenue-sharing’ agreements

It should be noted that neither the lawyers nor the economists at the FTC were particularly bullish on bringing suit against Google. In most areas of the investigation, neither recommended that the commission pursue a case. But one of the most interesting revelations from the recent leaks is that FTC lawyers did advise the commission’s leadership to sue Google over revenue-sharing agreements that called for it to pay Apple, along with other device manufacturers and wireless carriers, to pre-install its search bar on mobile devices:

FTC staff urged the agency’s five commissioners to sue Google for signing exclusive contracts with Apple and the major wireless carriers that made sure the company’s search engine came pre-installed on smartphones.

The lawyers’ stance is surprising, and, despite actions subsequently brought by the EU and DOJ on similar claims, a difficult one to countenance. 

To a first approximation, this behavior is precisely what antitrust law seeks to promote: we want companies to compete aggressively to attract consumers. This conclusion is in no way altered when competition is “for the market” (in this case, firms bidding for exclusive placement of their search engines) rather than “in the market” (i.e., equally placed search engines competing for eyeballs).

Competition for exclusive placement has several important benefits. For a start, revenue-sharing agreements effectively subsidize consumers’ mobile device purchases. As Brian Albrecht aptly puts it:

This payment from Google means that Apple can lower its price to better compete for consumers. This is standard; some of the payment from Google to Apple will be passed through to consumers in the form of lower prices.

This finding is not new. For instance, Ronald Coase famously argued that the Federal Communications Commission (FCC) was wrong to ban the broadcasting industry’s equivalent of revenue-sharing agreements, so-called payola:

[I]f the playing of a record by a radio station increases the sales of that record, it is both natural and desirable that there should be a charge for this. If this is not done by the station and payola is not allowed, it is inevitable that more resources will be employed in the production and distribution of records, without any gain to consumers, with the result that the real income of the community will tend to decline. In addition, the prohibition of payola may result in worse record programs, will tend to lessen competition, and will involve additional expenditures for regulation. The gain which the ban is thought to bring is to make the purchasing decisions of record buyers more efficient by eliminating “deception.” It seems improbable to me that this problematical gain will offset the undoubted losses which flow from the ban on Payola.

Applying this logic to Google Search, it is clear that a ban on revenue-sharing agreements would merely lead both Google and its competitors to attract consumers via alternative means. For Google, this might involve “complete” vertical integration into the mobile phone market, rather than the open-licensing model that underpins the Android ecosystem. Valuable specialization may be lost in the process.

Moreover, from Apple’s standpoint, Google’s revenue-sharing agreements are profitable only to the extent that consumers actually like Google’s products. If it turns out they don’t, Google’s payments to Apple may be outweighed by lower iPhone sales. It is thus unlikely that these agreements significantly undermined users’ experience. To the contrary, Apple’s testimony before the European Commission suggests that “exclusive” placement of Google’s search engine was mostly driven by consumer preferences (as the FTC economists’ memo points out):

Apple would not offer simultaneous installation of competing search or mapping applications. Apple’s focus is offering its customers the best products out of the box while allowing them to make choices after purchase. In many countries, Google offers the best product or service … Apple believes that offering additional search boxes on its web browsing software would confuse users and detract from Safari’s aesthetic. Too many choices lead to consumer confusion and greatly affect the ‘out of the box’ experience of Apple products.

Similarly, Kevin Murphy and Benjamin Klein have shown that exclusive contracts intensify competition for distribution. In other words, absent theories of platform envelopment that are arguably inapplicable here, competition for exclusive placement would lead competing search engines to up their bids, ultimately lowering the price of mobile devices for consumers.

Indeed, this revenue-sharing model was likely essential to spur the development of Android in the first place. Without this prominent placement of Google Search on Android devices (notably thanks to revenue-sharing agreements with original equipment manufacturers), Google would likely have been unable to monetize the investment it made in the open source—and thus freely distributed—Android operating system. 

In short, Politico and the FTC legal staff do little to show that Google’s revenue-sharing payments excluded rivals that were, in fact, as efficient. In other words, Bing and Yahoo’s failure to gain traction may simply be the result of inferior products and cost structures. Critics thus fail to show that Google’s behavior harmed consumers, which is the touchstone of antitrust enforcement.

Self-preferencing

Another finding critics claim as important is that FTC leadership declined to bring suit against Google for preferencing its own vertical search services (this information had already been partially leaked by the Wall Street Journal in 2015). Politico’s framing implies this was a mistake:

When Google adopted one algorithm change in 2011, rival sites saw significant drops in traffic. Amazon told the FTC that it saw a 35 percent drop in traffic from the comparison-shopping sites that used to send it customers

The focus on this claim is somewhat surprising. Even the leaked FTC legal staff memo found this theory of harm had little chance of standing up in court:

Staff has investigated whether Google has unlawfully preferenced its own content over that of rivals, while simultaneously demoting rival websites…. 

…Although it is a close call, we do not recommend that the Commission proceed on this cause of action because the case law is not favorable to our theory, which is premised on anticompetitive product design, and in any event, Google’s efficiency justifications are strong. Most importantly, Google can legitimately claim that at least part of the conduct at issue improves its product and benefits users. [EMPHASIS ADDED]

More importantly, as one of us has argued elsewhere, the underlying problem lies not with Google, but with a standard asset-specificity trap:

A content provider that makes itself dependent upon another company for distribution (or vice versa, of course) takes a significant risk. Although it may benefit from greater access to users, it places itself at the mercy of the other — or at least faces great difficulty (and great cost) adapting to unanticipated, crucial changes in distribution over which it has no control…. 

…It was entirely predictable, and should have been expected, that Google’s algorithm would evolve. It was also entirely predictable that it would evolve in ways that could diminish or even tank Foundem’s traffic. As one online marketing/SEO expert puts it: On average, Google makes about 500 algorithm changes per year. 500!….

…In the absence of an explicit agreement, should Google be required to make decisions that protect a dependent company’s “asset-specific” investments, thus encouraging others to take the same, excessive risk? 

Even if consumers happily visited rival websites when they were higher-ranked and traffic subsequently plummeted when Google updated its algorithm, that drop in traffic does not amount to evidence of misconduct. To hold otherwise would be to grant these rivals a virtual entitlement to the state of affairs that exists at any given point in time. 

Indeed, there is good reason to believe Google’s decision to favor its own content over that of other sites is procompetitive. Beyond determining and ensuring relevance, Google surely has the prerogative to compete vigorously and decide how to design its products to keep up with a changing market. In this case, that means designing, developing, and offering its own content in ways that partially displace the original “ten blue links” design of its search results page and instead offer its own answers to users’ queries.

Competitor Harm Is Not an Indicator of the Need for Intervention

Some of the other information revealed by the leak is even more tangential, such as that the FTC ignored complaints from Google’s rivals:

Amazon and Facebook privately complained to the FTC about Google’s conduct, saying their business suffered because of the company’s search bias, scraping of content from rival sites and restrictions on advertisers’ use of competing search engines. 

Amazon said it was so concerned about the prospect of Google monopolizing the search advertising business that it willingly sacrificed revenue by making ad deals aimed at keeping Microsoft’s Bing and Yahoo’s search engine afloat.

But complaints from rivals are at least as likely to stem from vigorous competition as from anticompetitive exclusion. This goes to a core principle of antitrust enforcement: antitrust law seeks to protect competition and consumer welfare, not rivals. Competition will always lead to winners and losers. Antitrust law protects this process and (at least theoretically) ensures that rivals cannot manipulate enforcers to safeguard their economic rents. 

This explains why Frank Easterbrook—in his seminal work on “The Limits of Antitrust”—argued that enforcers should be highly suspicious of complaints lodged by rivals:

Antitrust litigation is attractive as a method of raising rivals’ costs because of the asymmetrical structure of incentives…. 

…One line worth drawing is between suits by rivals and suits by consumers. Business rivals have an interest in higher prices, while consumers seek lower prices. Business rivals seek to raise the costs of production, while consumers have the opposite interest…. 

…They [antitrust enforcers] therefore should treat suits by horizontal competitors with the utmost suspicion. They should dismiss outright some categories of litigation between rivals and subject all such suits to additional scrutiny.

Google’s competitors spent millions pressuring the FTC to bring a case against the company. But why should it be a failing for the FTC to resist such pressure? Indeed, as then-commissioner Tom Rosch admonished in an interview following the closing of the case:

They [Google’s competitors] can darn well bring [a case] as a private antitrust action if they think their ox is being gored instead of free-riding on the government to achieve the same result.

Not that they would likely win such a case. Google’s introduction of specialized shopping results (via the Google Shopping box) likely enabled several retailers to bypass the Amazon platform, thus increasing competition in the retail industry. Although this may have temporarily reduced Amazon’s traffic and revenue (Amazon’s sales have grown dramatically since then), it is exactly the outcome that antitrust laws are designed to protect.

Conclusion

When all is said and done, Politico’s revelations provide a rarely glimpsed look into the complex dynamics within the FTC, which many wrongly imagine to be a monolithic agency. Put simply, the FTC’s commissioners, lawyers, and economists often disagree vehemently about the appropriate course of conduct. This is a good thing. As in many other walks of life, having a market for ideas is a sure way to foster sound decision making.

But in the final analysis, what the revelations do not show is that the FTC’s market for ideas failed consumers a decade ago when it declined to bring an antitrust suit against Google. They thus do little to cement the case for antitrust intervention—whether a decade ago, or today.

In a constructive development, the Federal Trade Commission has joined its British counterpart in investigating Nvidia’s proposed $40 billion acquisition of chip designer Arm, a subsidiary of SoftBank. Arm provides the technological blueprints for wireless communications devices and, subject to a royalty fee, makes those crown-jewel assets available to all interested firms. Notwithstanding Nvidia’s stated commitment to keep the existing policy in place, there is an obvious risk that the new parent, one of the world’s leading chip makers, would at some point modify this policy with adverse competitive effects.

Ironically, the FTC is likely part of the reason that the Nvidia-Arm transaction is taking place.

Since the mid-2000s, the FTC and other leading competition regulators (except for the U.S. Department of Justice’s Antitrust Division under the leadership of former Assistant Attorney General Makan Delrahim) have intervened extensively in licensing arrangements in wireless device markets, culminating in the FTC’s recent failed suit against Qualcomm. The Nvidia-Arm transaction suggests that these actions may simply lead chip designers to abandon the licensing model and shift toward structures that monetize chip-design R&D through integrated hardware and software ecosystems. Amazon and Apple are already undertaking chip innovation through this model. Antitrust action that accelerates this movement toward in-house chip design is likely to have adverse effects for the competitive health of the wireless ecosystem.

How IP Licensing Promotes Market Access

Since its inception, the wireless communications market has relied on a handful of IP licensors to supply device producers and other intermediate users with a common suite of technology inputs. The result has been an efficient division of labor between firms that specialize in upstream innovation and firms that specialize in production and other downstream functions. Contrary to the standard assumption that IP rights limit access, this licensing-based model ensures technology access to any firm willing to pay the royalty fee.

Efforts by regulators to reengineer existing relationships between innovators and implementers endanger this market structure by inducing innovators to abandon licensing-based business models, which now operate under a cloud of legal insecurity, for integrated business models in which returns on R&D investments are captured internally through hardware and software products. Rather than expanding technology access and intensifying competition, antitrust restraints on licensing freedom are liable to limit technology access and increase market concentration.

Regulatory Intervention and Market Distortion

This interventionist approach has relied on the assertion that innovators can “lock in” producers and extract a disproportionate fee in exchange for access. This prediction has never found support in fact. Contrary to theoretical arguments that patent owners can impose double-digit “royalty stacks” on device producers, empirical researchers have repeatedly found that the estimated range of aggregate rates lies in the single digits. These findings are unsurprising given market performance over more than two decades: adoption has accelerated as quality-adjusted prices have fallen and innovation has never ceased. If rates were exorbitant, market growth would have been slow, and the smartphone would be a luxury for the rich.
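
For readers unfamiliar with the term, a “royalty stack” is simply the sum of the individual royalty rates charged by each licensor, expressed as a share of the device price. The sketch below uses invented numbers purely to illustrate the arithmetic; it is not an estimate of any actual licensing program:

```python
# Purely hypothetical illustration of royalty "stacking": the aggregate
# burden is the sum of each licensor's rate applied to the device price.
# All figures below are invented for illustration only.
device_price = 400.00                       # hypothetical handset price (USD)
royalty_rates = [0.02, 0.015, 0.01, 0.005]  # hypothetical per-licensor rates

aggregate_rate = sum(royalty_rates)
royalty_per_device = aggregate_rate * device_price
print(f"Aggregate royalty rate: {aggregate_rate:.1%}")       # 5.0% (single digits)
print(f"Royalty per device:     ${royalty_per_device:.2f}")  # $20.00
```

The empirical studies noted above find aggregate rates of roughly this order of magnitude (single digits), not the double-digit stacks that the holdup theory predicts.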

Despite these empirical infirmities, the FTC and other competition regulators have persisted in taking action to mitigate “holdup risk” through policy statements and enforcement actions designed to preclude IP licensors from seeking injunctive relief. The result is a one-sided legal environment in which the world’s largest device producers can effectively infringe patents at will, knowing that the worst-case scenario is a “reasonable royalty” award determined by a court, plus attorneys’ fees. Without any credible threat to deny access even after a favorable adjudication on the merits, any IP licensor’s ability to negotiate a royalty rate that reflects the value of its technology contribution is constrained.

With no change in IP-licensing policy on the horizon, it is not surprising that an IP licensor would seek to shift toward an integrated business model in which IP is not licensed but embedded within an integrated suite of products and services. Alternatively, an IP licensor might seek to be acquired by a firm that already has such a model in place. Hence, FTC v. Qualcomm leads Arm to Nvidia.

The Error Costs of Non-Evidence-Based Antitrust

These counterproductive effects of antitrust intervention demonstrate the error costs that arise when regulators act based on unverified assertions of impending market failure. Relying on the somewhat improbable assumption that chip suppliers can dictate licensing terms to device producers that are among the world’s largest companies, competition regulators have placed at risk the legal predicates of IP rights and enforceable contracts that have made the wireless-device market an economic success. As antitrust risk intensifies, the return on licensing strategies falls and competitive advantage shifts toward integrated firms that can monetize R&D internally through stand-alone product and service ecosystems.

Far from increasing competitiveness, regulators’ current approach toward IP licensing in wireless markets is likely to reduce it.

In our first post, we discussed the weaknesses of an important theoretical underpinning of efforts to expand vertical merger enforcement (including, possibly, the proposed guidelines): the contract/merger equivalency assumption.

In this post we discuss the implications of that assumption and some of the errors it leads to — including some incorporated into the proposed guidelines.

There is no theoretical or empirical justification for more vertical enforcement

Tim Brennan makes a fantastic and regularly overlooked point in his post: If it’s true, as many claim (see, e.g., Steve Salop), that firms can generally realize vertical efficiencies by contracting instead of merging, then it’s also true that they can realize anticompetitive outcomes the same way. While efficiencies have to be merger-specific in order to be relevant to the analysis, so too do harms. But where the assumption is that the outcomes of integration can generally be achieved by the “less-restrictive” means of contracting, that would apply as well to any potential harms, thus negating the transaction-specificity required for enforcement. As Dennis Carlton notes:

There is a symmetry between an evaluation of the harms and benefits of vertical integration. Each must be merger-specific to matter in an evaluation of the merger’s effects…. If transaction costs are low, then vertical integration creates neither benefits nor harms, since everything can be achieved by contract. If transaction costs exist to prevent the achievement of a benefit but not a harm (or vice-versa), then that must be accounted for in a calculation of the overall effect of a vertical merger. (Dennis Carlton, Transaction Costs and Competition Policy)

Of course, this also means that those (like us) who believe that it is not so easy to accomplish by contract what may be accomplished by merger must also consider the possibility that a proposed merger may be anticompetitive because it overcomes an impediment to achieving anticompetitive goals via contract.

There’s one important caveat, though: The potential harms that could arise from a vertical merger are the same as those that would be cognizable under Section 2 of the Sherman Act. Indeed, for a vertical merger to cause harm, it must be expected to result in conduct that would otherwise be illegal under Section 2. This means there is always the possibility of a second bite at the apple when it comes to thwarting anticompetitive conduct. 

The same cannot be said of procompetitive conduct that can arise only through merger: if the merger is erroneously prohibited, that conduct is foreclosed before it even happens, with no second bite at the apple.

Interestingly, Salop himself — the foremost advocate today for enhanced vertical merger enforcement — recognizes the issue raised by Brennan: 

Exclusionary harms and certain efficiency benefits also might be achieved with vertical contracts and agreements without the need for a vertical merger…. It [] might be argued that the absence of premerger exclusionary contracts implies that the merging firms lack the incentive to engage in conduct that would lead to harmful exclusionary effects. But anticompetitive vertical contracts may face the same types of impediments as procompetitive ones, and may also be deterred by potential Section 1 enforcement. Neither of these arguments thus justify a more or less intrusive vertical merger policy generally. Rather, they are factors that should be considered in analyzing individual mergers. (Salop & Culley, Potential Competitive Effects of Vertical Mergers)

In the same article, however, Salop also points to the reasons why it should be considered insufficient to leave enforcement to Sections 1 and 2, rather than addressing potential harms at their incipiency under Section 7 of the Clayton Act:

While relying solely on post-merger enforcement might have appealing simplicity, it obscures several key facts that favor immediate enforcement under Section 7.

  • The benefit of HSR review is to prevent the delays and remedial issues inherent in after-the-fact enforcement….
  • There may be severe problems in remedying the concern….
  • Section 1 and Section 2 legal standards are more permissive than Section 7 standards….
  • The agencies might well argue that anticompetitive post-merger conduct was caused by the merger agreement, so that it would be covered by Section 7….

All in all, failure to address these kinds of issues in the context of merger review could lead to significant consumer harm and underdeterrence.

The points are (mostly) well-taken. But they also essentially amount to a preference for more and tougher enforcement against vertical restraints than the judicial interpretations of Sections 1 & 2 currently countenance — a preference, in other words, for the use of Section 7 to bolster enforcement against vertical restraints of any sort (whether contractual or structural).

The problem with that, as others have pointed out in this symposium (see, e.g., Nuechterlein; Werden & Froeb; Wright, et al.), is that there’s simply no empirical basis for adopting a tougher stance against vertical restraints in the first place. Over and over again the empirical research shows that vertical restraints and vertical mergers are unlikely to cause anticompetitive harm: 

In reviewing this literature, two features immediately stand out: First, there is a paucity of support for the proposition that vertical restraints/vertical integration are likely to harm consumers. . . . Second, a far greater number of studies found that the use of vertical restraints in the particular context studied improved welfare unambiguously. (Cooper, et al, Vertical Restrictions and Antitrust Policy: What About the Evidence?)

[W]e did not have a particular conclusion in mind when we began to collect the evidence, and we… are therefore somewhat surprised at what the weight of the evidence is telling us. It says that, under most circumstances, profit-maximizing, vertical-integration decisions are efficient, not just from the firms’ but also from the consumers’ points of view…. We therefore conclude that, faced with a vertical arrangement, the burden of evidence should be placed on competition authorities to demonstrate that that arrangement is harmful before the practice is attacked. (Francine Lafontaine & Margaret Slade, Vertical Integration and Firm Boundaries: The Evidence)

[Table 1 in this paper] indicates that voluntarily adopted restraints are associated with lower costs, greater consumption, higher stock returns, and better chances of survival. (Daniel O’Brien, The Antitrust Treatment of Vertical Restraints: Beyond the Possibility Theorems)

In sum, these papers from 2009-2018 continue to support the conclusions from Lafontaine & Slade (2007) and Cooper et al. (2005) that consumers mostly benefit from vertical integration. While vertical integration can certainly foreclose rivals in theory, there is only limited empirical evidence supporting that finding in real markets. (GAI Comment on Vertical Mergers)

To the extent that the proposed guidelines countenance heightened enforcement relative to the status quo, they fall prey to the same defect. And while it is unclear from the fairly terse guidelines whether this is animating them, the removal of language present in the 1984 Non-Horizontal Merger Guidelines acknowledging the relative lack of harm from vertical mergers (“[a]lthough non-horizontal mergers are less likely than horizontal mergers to create competitive problems…”) is concerning.  

The shortcomings of orthodox economics and static formal analysis

There is a further reason to think that vertical merger enforcement may be more likely to thwart procompetitive than anticompetitive arrangements relative to the status quo ante (i.e., where arrangements among vertically related firms are made by contract): our lack of knowledge about the effects of market structure and firm organization on innovation and dynamic competition, and the relative hostility to nonstandard contracting, including vertical integration:

[T]he literature addressing how market structure affects innovation (and vice versa) in the end reveals an ambiguous relationship in which factors unrelated to competition play an important role. (Katz & Shelanski, Mergers and Innovation)

The fixation on the equivalency of the form of vertical integration (i.e., merger versus contract) is likely to lead enforcers to focus on static price and cost effects, and miss the dynamic organizational and informational effects that lead to unexpected, increased innovation across and within firms. 

In the hands of Oliver Williamson, this means that understanding firms in the real world entails taking an organization theory approach, in contrast to the “orthodox” economic perspective:

The lens of contract approach to the study of economic organization is partly complementary but also partly rival to the orthodox [neoclassical economic] lens of choice. Specifically, whereas the latter focuses on simple market exchange, the lens of contract is predominantly concerned with the complex contracts. Among the major differences is that non‐standard and unfamiliar contractual practices and organizational structures that orthodoxy interprets as manifestations of monopoly are often perceived to serve economizing purposes under the lens of contract. A major reason for these and other differences is that orthodoxy is dismissive of organization theory whereas organization theory provides conceptual foundations for the lens of contract. (emphasis added)

We are more likely to miss the economizing purposes served by a merger when it solves market inefficiencies, and more likely to see harm when it imposes static costs — even if the apparent costs actually represent a move from less efficient contractual arrangements to more efficient integration.

The competition that takes place in the real world and between various groups ultimately depends upon the institution of private contracts, many of which, including the firm itself, are nonstandard. Innovation includes the discovery of new organizational forms and the application of old forms to new contexts. Such contracts prevent or attenuate market failure, moving the market toward what economists would deem a more competitive result. Indeed, as Professor Coase pointed out, many markets deemed “perfectly competitive” are in fact the end result of complex contracts limiting rivalry between competitors. This contractual competition cannot produce perfect results — no human institution ever can. Nonetheless, the result is superior to that which would obtain in a (real) world without nonstandard contracting. These contracts do not depend upon the creation or enhancement of market power and thus do not produce the evils against which antitrust law is directed. (Alan Meese, Price Theory Competition & the Rule of Reason)

Or, as Oliver Williamson more succinctly puts it:

[There is a] rebuttable presumption that nonstandard forms of contracting have efficiency purposes. (Oliver Williamson, The Economic Institutions of Capitalism)

The pinched focus of the guidelines on narrow market definition misses the bigger picture of dynamic competition over time

The proposed guidelines (and the theories of harm undergirding them) focus upon indicia of market power that may not be accurate if assessed in more realistic markets or over more relevant timeframes, and, if applied too literally, may bias enforcement against mergers with dynamic-innovation benefits but static-competition costs.  

Similarly, the proposed guidelines’ enumeration of potential efficiencies doesn’t really begin to cover the categories implicated by the organization of enterprise around dynamic considerations.

The proposed guidelines’ efficiencies section notes that:

Vertical mergers bring together assets used at different levels in the supply chain to make a final product. A single firm able to coordinate how these assets are used may be able to streamline production, inventory management, or distribution, or create innovative products in ways that would have been hard to achieve through arm’s length contracts. (emphasis added)

But it is not clear that any of these categories encompasses organizational decisions made to facilitate the coordination of production and commercialization when they are dependent upon intangible assets.

As Thomas Jorde and David Teece write:

For innovations to be commercialized, the economic system must somehow assemble all the relevant complementary assets and create a dynamically-efficient interactive system of learning and information exchange. The necessary complementary assets can conceivably be assembled by either administrative or market processes, as when the innovator simply licenses the technology to firms that already own or are willing to create the relevant assets. These organizational choices have received scant attention in the context of innovation. Indeed, the serial model relies on an implicit belief that arm’s-length contracts between unaffiliated firms in the vertical chain from research to customer will suffice to commercialize technology. In particular, there has been little consideration of how complex contractual arrangements among firms can assist commercialization — that is, translating R&D capability into profitable new products and processes….

* * *

But in reality, the market for know-how is riddled with imperfections. Simple unilateral contracts where technology is sold for cash are unlikely to be efficient. Complex bilateral and multilateral contracts, internal organization, or various hybrid structures are often required to shore up obvious market failures and create procompetitive efficiencies. (Jorde & Teece, Rule of Reason Analysis of Horizontal Arrangements: Agreements Designed to Advance Innovation and Commercialize Technology) (emphasis added)

When IP protection for a given set of valuable pieces of “know-how” is strong — easily defendable, unique patents, for example — firms can rely on property rights to efficiently contract with vertical buyers and sellers. But in cases where the valuable “know-how” is less easily defended as IP — e.g., business-process innovation, managerial experience, distributed knowledge, corporate culture, and the like — the ability to partially vertically integrate through contract becomes more difficult, if not impossible.

Perhaps employing these assets is part of what is meant in the draft guidelines by “streamline.” But the fact that innovation is mentioned only in the technological context of product innovation is at least some indication that organizational innovation is not clearly contemplated.

This is a significant lacuna. The impact of each organizational form on knowledge transfers creates a particularly strong division between integration and contract. As Enghin Atalay, Ali Hortaçsu & Chad Syverson point out:

That vertical integration is often about transfers of intangible inputs rather than physical ones may seem unusual at first glance. However, as observed by Arrow (1975) and Teece (1982), it is precisely in the transfer of nonphysical knowledge inputs that the market, with its associated contractual framework, is most likely to fail to be a viable substitute for the firm. Moreover, many theories of the firm, including the four “elemental” theories as identified by Gibbons (2005), do not explicitly invoke physical input transfers in their explanations for vertical integration. (Enghin Atalay, et al., Vertical Integration and Input Flows) (emphasis added)

There is a large economics and organization theory literature discussing how organizations are structured with respect to these sorts of intangible assets. And the upshot is that, while we start — not end, as some would have it — with the Coasian insight that firm boundaries are necessarily a function of production processes and not a hard limit, we quickly come to realize that it is emphatically not the case that integration-via-contract and integration-via-merger are always, or perhaps even often, viable substitutes.

Conclusion

The contract/merger equivalency assumption, coupled with a “least-restrictive alternative” logic that favors contract over merger, puts a thumb on the scale against vertical mergers. While the proposed guidelines as currently drafted do not necessarily portend the inflexible, formalistic application of this logic, they offer little to guide enforcers or courts away from the assumption in the important (and perhaps numerous) cases where it is unwarranted.   

[TOTM: The following is part of a symposium by TOTM guests and authors on the 2020 Vertical Merger Guidelines. The entire series of posts is available here.

This post is authored by Geoffrey A. Manne (President & Founder, ICLE; Distinguished Fellow, Northwestern University Center on Law, Business, and Economics); and Kristian Stout (Associate Director, ICLE).]

As many in the symposium have noted — and as was repeatedly noted during the FTC’s Hearings on Competition and Consumer Protection in the 21st Century — there is widespread dissatisfaction with the 1984 Non-Horizontal Merger Guidelines.

Although it is doubtless correct that the 1984 guidelines don’t reflect the latest economic knowledge, it is by no means clear that this has actually been a problem — or that a new set of guidelines wouldn’t create even greater problems. Indeed, as others have noted in this symposium, there is a great deal of ambiguity in the proposed guidelines that could lead to uncertainty as to how the agencies will exercise their discretion or, more troublingly, could lead courts to take speculative theories of harm seriously.

We can do little better in expressing our reservations about the need for new guidelines than did the current Chairman of the FTC, Joe Simons, writing on this very blog in a symposium on what became the 2010 Horizontal Merger Guidelines. In a post entitled Revisions to the Merger Guidelines: Above All, Do No Harm, Simons wrote:

My sense is that there is no need to revise the DOJ/FTC Horizontal Merger Guidelines, with one exception…. The current guidelines lay out the general framework quite well and any change in language relative to that framework are likely to create more confusion rather than less. Based on my own experience, the business community has had a good sense of how the agencies conduct merger analysis…. If, however, the current administration intends to materially change the way merger analysis is conducted at the agencies, then perhaps greater revision makes more sense. But even then, perhaps the best approach is to try out some of the contemplated changes (i.e. in actual investigations) and publicize them in speeches and the like before memorializing them in a document that is likely to have some substantial permanence to it.

Wise words. Unless, of course, “the current [FTC] intends to materially change the way [vertical] merger analysis is conducted.” But the draft guidelines don’t really appear to portend a substantial change, and in several ways they pretty accurately reflect agency practice.

What we want to draw attention to, however, is an implicit underpinning of the draft guidelines that we believe the agencies should clearly disavow (or at least explain more clearly the complexity surrounding): the extent and implications of the presumed functional equivalence of vertical integration by contract and by merger — the contract/merger equivalency assumption.   

Vertical mergers and their discontents

The contract/merger equivalency assumption has been gaining traction with antitrust scholars, but it is perhaps most clearly represented in some of Steve Salop’s work. Salop generally believes that vertical merger enforcement should be heightened. Among his criticisms of current enforcement is his contention that efficiencies that can be realized by merger can often also be achieved by contract. As he discussed during his keynote presentation at last year’s FTC hearing on vertical mergers:

And, finally, the key policy issue is the issue is not about whether or not there are efficiencies; the issue is whether the efficiencies are merger-specific. As I pointed out before, Coase stressed that you can get vertical integration by contract. Very often, you can achieve the vertical efficiencies if they occur, but with contracts rather than having to merge.

And later, in the discussion following his talk:

If there is vertical integration by contract… it meant you could get all the efficiencies from vertical integration with a contract. You did not actually need the vertical integration. 

Salop thus argues that, because a “contract solution” to firm problems can often generate the same sorts of efficiencies as a merger, enforcers and courts should generally adopt a presumption against vertical mergers relative to contracting:

Coase’s door swings both ways: Efficiencies often can be achieved by vertical contracts, without the potential anticompetitive harms from merger

In that vertical restraints are characterized as “just” vertical integration “by contract,” then claimed efficiencies in problematical mergers might be achieved with non-merger contracts that do not raise the same anticompetitive concerns. (emphasis in original)

(Salop isn’t alone in drawing such a conclusion, of course; Carl Shapiro, for example, has made a similar point (as have others)).

In our next post we explore the policy errors implicated by this contract/merger equivalency assumption. But here we want to consider whether it makes logical sense in the first place.

The logic of vertical integration is not commutative 

It is true that, where contracts are observed, they are likely at least as efficient as (and indeed likely more efficient than) merger. But, by the same token, it is also true that, where mergers are observed, they are likely more efficient than contracts. Indeed, the entire reason for integration is efficiency relative to what could be done by contract — this is the essence of the so-called “make-or-buy” decision.

For example, a firm that decides to buy its own warehouse has determined that doing so is more efficient than renting warehouse space. Some of these efficiencies can be measured and quantified (e.g., carrying costs of ownership vs. the cost of rent), but many cannot be easily measured or quantified (e.g., layout of the facility or site security). Under the contract/merger equivalency assumption, the benefits of owning a warehouse can be achieved “very often” by renting warehouse space. But the fact that many firms using warehouses own some space and rent some space indicates that the make-or-buy decision is often unique to each firm’s idiosyncratic situation. Moreover, the distinctions driving those differences will not always be readily apparent, and whether contracting or integrating is preferable in any given situation cannot be inferred from the existence of one or the other elsewhere in the market — or even in the same firm!
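To make the quantifiable half of that comparison concrete, here is a minimal sketch in which every figure is an invented placeholder (the prices, rates, and costs are purely hypothetical); the efficiencies that resist quantification are, by construction, absent from it.

# Hypothetical make-or-buy comparison for warehouse space.
# Every number here is an illustrative placeholder, not data from the post.

def annual_cost_of_owning(purchase_price, interest_rate, maintenance, taxes, appreciation_rate):
    # Rough carrying cost: financing + upkeep + taxes, net of expected appreciation.
    return purchase_price * interest_rate + maintenance + taxes - purchase_price * appreciation_rate

def annual_cost_of_renting(monthly_rent):
    return 12 * monthly_rent

own = annual_cost_of_owning(purchase_price=2_000_000, interest_rate=0.05,
                            maintenance=40_000, taxes=25_000, appreciation_rate=0.02)
rent = annual_cost_of_renting(monthly_rent=14_000)

print(f"Own:  ${own:,.0f} per year")
print(f"Rent: ${rent:,.0f} per year")
# Whichever column wins on this spreadsheet says nothing about facility layout,
# site security, or control -- the efficiencies that cannot be easily measured.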

There is no reason to presume in any given situation that the outcome from contracting would be the same as from merging, even where both are notionally feasible. The two are, quite simply, different bargaining environments, each with a different allocation of risk and cost; accounting treatment; effect on employees, customers, and investors; tax consequences; and so on. Even if the parties accomplished nominally “identical” outcomes, they would not, in fact, be identical.

Meanwhile, what if the reason for the failure to contract, or the reason to prefer merger, has nothing to do with efficiency? What if there were no anticompetitive aim but there were a tax advantage? What if one of the parties just wanted a larger firm in order to satisfy the CEO’s ego? That these are not cognizable efficiencies under antitrust law is clear. But adopting a presumption of equivalence between contract and merger would — ironically — incorporate them into antitrust law just the same, by virtue of their effective prohibition.

In other words, if the assumption is that contract and merger are equally efficient unless proven otherwise, but the law adopts a suspicion (or, even worse, a presumption) that vertical mergers are anticompetitive which can be rebutted only with highly burdensome evidence of net efficiency gain, this effectively deputizes antitrust law to enforce a preconceived notion of “merger appropriateness” that does not necessarily turn on efficiencies. There may (or may not) be sensible policy reasons for adopting such a stance, but they aren’t antitrust reasons.

More fundamentally, however, while there are surely some situations in which contractual restraints might be able to achieve organizational and efficiency gains similar to a merger’s, the practical realities of achieving not just greater efficiency but also a whole host of non-efficiency-related, yet nonetheless valid, goals are rarely equivalent between the two.

It may be that the parties don’t know what they don’t know to such an extent that a contract would be too costly because it would be too incomplete, for example. But incomplete contracts and ambiguous control and ownership rights aren’t (as much of) an issue on an ongoing basis after a merger. 

As noted, there is no basis for assuming that the structure of a merger and a contract would be identical. In the same way, there is no basis for assuming that the knowledge transfer that would result from a merger would be the same as that which would result from a contract — and in ways that the parties could even specify or reliably calculate in advance. Knowing that the prospect for knowledge “synergies” would be higher with a merger than a contract might be sufficient to induce the merger outcome. But asked to provide evidence that the parties could not engage in the same conduct via contract, the parties would be unable to do so. The consequence, then, would be the loss of potential gains from closer integration.

At the same time, the cavalier assumption that parties would be able — legally — to enter into an analogous contract in lieu of a merger is problematic, given that it would likely be precisely the form of contract (foreclosing downstream or upstream access) that is alleged to create problems with the merger in the first place.

At the FTC hearings last year, Francine LaFontaine highlighted this exact concern:

I want to reemphasize that there are also rules against vertical restraints in antitrust laws, and so to say that the firms could achieve the mergers outcome by using vertical restraints is kind of putting them in a circular motion where we are telling them you cannot merge because you could do it by contract, and then we say, but these contract terms are not acceptable.

Indeed, legal risk is one of the reasons why a merger might be preferable to a contract, and because the relevant markets here are oligopoly markets, the possibility of impermissible vertical restraints between large firms with significant market share is quite real.

More important, the assumptions underlying the contention that contracts and mergers are functionally equivalent legal devices fail to appreciate the importance of varied institutional environments. Consider that one reason some takeovers are hostile is that incumbent managers don’t want to merge, and often believe that they are running a company as well as it can be run — that a change of corporate control would not improve efficiency. The same presumptions may also underlie refusals to contract and, even more likely, may explain why a contract would be ineffective from the other firm’s perspective.

But, while there is no way to contract without bilateral agreement, there is a corporate control mechanism to force a takeover. In this institutional environment a merger may be easier to realize than a contract (and that applies even to a consensual merger, of course, given the hostile outside option). In this case, again, the assumption that contract should be the relevant baseline and the preferred mechanism for coordination is misplaced — even if other firms in the industry are successfully accomplishing the same thing via contract, and even if a contract would be more “efficient” in the abstract.

Conclusion

Properly understood, the choice of whether to contract or merge derives from a host of complicated factors, many of which are difficult to observe and/or quantify. The contract/merger equivalency assumption — and the species of “least-restrictive alternative” reasoning that would demand onerous efficiency arguments to permit a merger when a contract was notionally possible — too readily glosses over these complications and unjustifiably embraces a relative hostility to vertical mergers at odds with both theory and evidence.

Rather, as has long been broadly recognized, there can be no legally relevant presumption drawn against a company when it chooses one method of vertical integration over another in the general case. The agencies should clarify in the draft guidelines that the mere possibility of integration via contract or the inability of merging parties to rigorously describe and quantify efficiencies does not condemn a proposed merger.

[TOTM: The following is part of a symposium by TOTM guests and authors on the 2020 Vertical Merger Guidelines. The entire series of posts is available here.

This post is authored by Lawrence J. White (Robert Kavesh Professor of Economics, New York University; former Chief Economist, DOJ Antitrust Division).]

The DOJ/FTC Draft Vertical Merger Guidelines establish a “safe harbor” of a 20% market share for each of the merging parties. But the issue of defining the relevant “market” to which the 20% would apply is not well addressed.

Although reference is made to the market definition paradigm that is offered by the DOJ’s and FTC’s Horizontal Merger Guidelines (“HMGs”), what is neglected is the following: Under the “unilateral effects” theory of competitive harm of the HMGs, the horizontal merger of two firms that sell differentiated products that are imperfect substitutes could lead to significant price increases if the second-choice product for a significant fraction of each of the merging firms’ customers is sold by the partner firm. Such unilateral-effects instances are revealed by examining detailed sales and substitution data with respect to the customers of only the two merging firms.

In such instances, the true “relevant market” is simply the products that are sold by the two firms, and the merger is effectively a “2-to-1” merger. Under these circumstances, any apparently broader market (perhaps based on physical or functional similarities of products) is misleading, and the “market” shares of the merging parties that are based on that broader market are under-representations of the potential for their post-merger exercise of market power.
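To illustrate the point with invented numbers: the sketch below shows how diversion data for the two merging firms’ customers can reveal an effective “2-to-1” problem that shares computed over a nominally broader market would hide. All figures and firm names are hypothetical.

# Hypothetical illustration: what matters for unilateral effects is where each
# merging firm's marginal customers would divert, not the firm's share of some
# nominally broad market. All numbers are invented.

# Suppose Firm A raises price and loses 100 customers; where do they go?
diversion_from_A = {"Firm B (merger partner)": 70, "Firm C": 15, "Firm D": 10, "no purchase": 5}

lost = sum(diversion_from_A.values())
diversion_to_partner = diversion_from_A["Firm B (merger partner)"] / lost

print(f"Diversion ratio from A to its merger partner: {diversion_to_partner:.0%}")
# If 70% of A's lost sales would be recaptured by B post-merger, the merged firm
# internalizes most of the discipline on A's price -- even though A and B each
# hold only a modest share of a broader "market" that also includes C and D.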

With a vertical merger, the potential for similar unilateral effects* would have to be captured by examining the detailed sales and substitution patterns of each of the merging firms with all of their significant horizontal competitors. This will require a substantial, data-intensive effort. And, of course, if this effort is not undertaken and an erroneously broader market is designated, the 20% “market” share threshold will understate the potential for competitive harm from a proposed vertical merger.

* With a vertical merger, such “unilateral effects” could arise post-merger in two ways: (a) The downstream partner could maintain a higher price, since some of the lost profits from some of the lost sales could be recaptured by the upstream partner’s profits on the sales of components to the downstream rivals (which gain some of the lost sales); and (b) the upstream partner could maintain a higher price to the downstream rivals, since some of the latter firms’ customers (and the concomitant profits) would be captured by the downstream partner.
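A back-of-the-envelope rendering of mechanism (a) in the footnote above, again with invented margins and volumes, shows the incentive at work; nothing in it is drawn from the Draft Guidelines themselves.

# Hypothetical sketch of mechanism (a): the merged downstream unit can tolerate
# a higher price because the upstream unit earns margin on components sold to
# the downstream rivals who win the diverted customers. All figures invented.

downstream_margin = 100       # downstream profit per unit of lost sales
upstream_margin = 40          # upstream profit per component sold to rivals
rival_capture_share = 0.8     # share of lost sales won by rivals using the partner's component

lost_units = 1_000            # units lost if the downstream partner raises price

standalone_loss = lost_units * downstream_margin
upstream_recapture = lost_units * rival_capture_share * upstream_margin

print(f"Profit lost downstream:     {standalone_loss:,.0f}")
print(f"Profit recaptured upstream: {upstream_recapture:,.0f}")
print(f"Net loss to merged firm:    {standalone_loss - upstream_recapture:,.0f}")
# The smaller the net loss, the weaker the merged firm's incentive to hold its
# downstream price down.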

[TOTM: The following is part of a symposium by TOTM guests and authors on the 2020 Vertical Merger Guidelines. The entire series of posts is available here.

This post is authored by Jan Rybnicek (Counsel at Freshfields Bruckhaus Deringer US LLP in Washington, D.C. and Senior Fellow and Adjunct Professor at the Global Antitrust Institute at the Antonin Scalia Law School at George Mason University).]

In an area where it may seem that agreement is rare, there is near universal agreement on the benefits of withdrawing the DOJ’s 1984 Non-Horizontal Merger Guidelines. The 1984 Guidelines do not reflect current agency thinking on vertical mergers and are not relied upon by businesses or practitioners to anticipate how the agencies may review a vertical transaction. The more difficult question is whether the agencies should now replace the 1984 Guidelines and, if so, what the modern guidelines should say.

There are several important reasons that counsel against issuing new vertical merger guidelines (VMGs). Most significantly, we likely are better off without new VMGs because they invariably will (1) send the wrong message to agency staff about the relative importance of vertical merger enforcement compared to other agency priorities, (2) create new sufficient conditions that tend to trigger wasteful investigations and erroneous enforcement actions, and (3) add very little, if anything, to our understanding of when the agencies will or will not pursue an in-depth investigation or enforcement action of a vertical merger.

Unfortunately, these problems are magnified rather than mitigated by the draft VMGs. But it is unlikely at this point that the agencies will hit the brakes and not issue new VMGs. The agencies therefore should make several key changes that would help prevent the final VMGs from causing more harm than good.

What is the Purpose of Agency Guidelines? 

Before we can have a meaningful conversation about whether the draft VMGs are good or bad for the world, or how they can be improved to ensure they contribute positively to antitrust law, it is important to identify, and have a shared understanding about, the purpose of guidelines and their potential benefits.

In general, I am supportive of guidelines. In fact, I helped urge the FTC to issue its 2015 Policy Statement articulating the agency’s enforcement principles under its Section 5 Unfair Methods of Competition authority. As I have written before, guidelines can be useful if they accomplish two important goals: (1) provide insight and transparency to businesses and practitioners about the agencies’ analytical approach to an issue and (2) offer agency staff direction as to agency priorities while cabining the agencies’ broad discretion by tethering investigational or enforcement decisions to those guidelines. An additional benefit may be that the guidelines also could prove useful to courts interpreting or applying the antitrust laws.

Transparency is important for the obvious reason that it allows the business community and practitioners to know how the agencies will apply the antitrust laws and thereby allows them to evaluate if a specific merger or business arrangement is likely to receive scrutiny. But guidelines are not only consumed by the public. They also are used by agency staff. As a result, guidelines invariably influence how staff approaches a matter, including whether to open an investigation, how in-depth that investigation is, and whether to recommend an enforcement action. Lastly, for guidelines to be meaningful, they also must accurately reflect agency practice, which requires the agencies’ analysis to be tethered to an analytical framework.

As discussed below, there are many reasons to doubt that the draft VMGs can deliver on these goals.

Draft VMGs Will Lead to Bad Enforcement Policy While Providing Little Benefit

 A chief concern with VMGs is that they will inadvertently usher in a new enforcement regime that treats horizontal and vertical mergers as co-equal enforcement priorities despite the mountain of evidence, not to mention simple logic, that mergers among competitors are a significantly greater threat to competition than are vertical mergers. The draft VMGs exacerbate rather than mitigate this risk by creating a false equivalence between vertical and horizontal merger enforcement and by establishing new minimum conditions that are likely to lead the agencies to pursue wasteful investigations of vertical transactions. And the draft VMGs do all this without meaningfully advancing our understanding of the conditions under which the agencies are likely to pursue investigations and enforcement against vertical mergers.

1. No Recognition of the Differences Between Horizontal and Vertical Mergers

One striking feature of the draft VMGs is that they fail to contextualize vertical mergers in the broader antitrust landscape. As a result, it is easy to walk away from the draft VMGs with the impression that vertical mergers are as likely to lead to anticompetitive harm as are horizontal mergers. That is a position not supported by the economic evidence or by logic. It is of course true that vertical mergers can result in competitive harm; that is not a seriously contested point. But it is important to acknowledge and provide background for why that harm is significantly less likely than in horizontal cases. That difference should inform agency enforcement priorities. Perhaps because of this lack of framing, the draft VMGs tend to speak more about when the agencies may identify competitive harm than about when they will not.

The draft VMGs would benefit greatly from a more comprehensive approach to understanding vertical merger transactions. The agencies should add language explaining that, whereas a consensus exists that eliminating a direct competitor always tends to increase the risk of unilateral effects (although often trivially), there is no such consensus that harm will result from the combination of complementary assets. In fact, the current evidence shows that such vertical transactions tend to be procompetitive. Absent such language, the VMGs will over time misguidedly focus more agency resources on investigating vertical mergers where there is unlikely to be harm (with inevitably more enforcement errors) and less on more important priorities, such as pursuing enforcement against anticompetitive horizontal transactions.

2. The 20% Safe Harbor Provides No Harbor and Will Become a Sufficient Condition

The draft VMGs attempt to provide businesses with guidance about the types of transactions the agencies will not investigate by articulating a market-share safe harbor. But that safe harbor (1) does not appear to be grounded in any evidence, (2) is surprisingly low in comparison to the EU’s vertical merger guidelines, and (3) is likely to become a sufficient condition to trigger an in-depth investigation or enforcement action.

The draft VMGs state:

The Agencies are unlikely to challenge a vertical merger where the parties to the merger have a share in the relevant market of less than 20%, and the related product is used in less than 20% of the relevant market.

But in the very next sentence the draft VMGs render the safe harbor virtually meaningless, stating:

In some circumstances, mergers with shares below the threshold can give rise to competitive concerns.

This caveat comes despite the fact that the 20% threshold is low compared to other jurisdictions. Indeed, the EU’s guidelines create a 30% safe harbor. Nor is it clear what the basis is for the 20% threshold, either in economics or law. While it is important for the agencies to remain flexible, too much flexibility will render the draft VMGs meaningless. The draft VMGs should be less equivocal about the types of mergers that will not receive significant scrutiny and are unlikely to be the subject of enforcement action.

What may be most troubling about the market share safe harbor is the likelihood that it will establish general enforcement norms that did not previously exist. It is likely that agency staff will soon interpret the 20% market share (despite language stating otherwise) as the minimum necessary condition to open an in-depth investigation and to pursue an enforcement action. We have seen other guidelines’ tools have similar effects on agency analysis before (see, e.g., GUPPIs). This risk is only exacerbated where the safe harbor is not a true safe harbor that provides businesses with clarity on enforcement priorities.

3. Requirements for Proving EDM and Efficiencies Fail to Recognize the Vertical Merger Context

The draft VMGs minimize the significant role of EDM and efficiencies in vertical mergers. The agencies frequently take a skeptical approach to efficiencies in the context of horizontal mergers, and it is well known that the hurdle to substantiate efficiencies is difficult, if not impossible, to clear. The draft VMGs oddly continue this skeptical approach by specifically referencing the standards discussed in the horizontal merger guidelines for efficiencies when discussing EDM and vertical merger efficiencies. The draft VMGs do not recognize that the combination of complementary products is inherently more likely to generate efficiencies than is a horizontal merger between competitors. The draft VMGs also oddly discuss EDM and efficiencies in separate sections and spend a trivial amount of time on what is the core motivating feature of vertical mergers. Even the discussion of EDM is as much about where there may be exceptions to EDM as it is about making clear the uncontroversial view that EDM is frequent in vertical transactions. Without acknowledging the inherent nature of EDM and efficiencies more generally, the final VMGs will send the wrong message that vertical merger enforcement should be on par with horizontal merger enforcement.

4. No New Insights into How Agencies Will Assess Vertical Mergers

Some might argue that the costs associated with the draft VMGs nevertheless are tolerable because the guidelines offer significant benefits that far outweigh their costs. But that is not the case here. The draft VMGs provide no new information about how the agencies will review vertical merger transactions and under what circumstances they are likely to seek enforcement actions. And that is because it is a difficult if not impossible task to identify any such general guiding principles. Indeed, unlike in the context of horizontal transactions where an increase in market power informs our thinking about the likely competitive effects, greater market power in the context of a vertical transaction that combines complements creates downward pricing pressure that often will dominate any potential competitive harm.

The draft VMGs do what they can, though, which is to describe in general terms several theories of harm. But the benefits from that exercise are modest and do not outweigh the significant risks discussed above. The theories described are neither novel nor unknown to the public today. Nor do the draft VMGs explain any significant new thinking on vertical mergers, likely because there has been none that can provide insight into general enforcement principles. The draft VMGs also do not clarify changes to statutory text (because it has not changed) or otherwise clarify judicial rulings or past enforcement actions. As a result, the draft VMGs do not offer sufficient benefits that would outweigh their substantial cost.

Conclusion

Despite these concerns, it is worth acknowledging the work the FTC and DOJ have put into preparing the draft VMGs. It is no small task to articulate a unified position between the two agencies on an issue such as vertical merger enforcement where so many have such strong views. To the agencies’ credit, the VMGs are restrained in not including novel or more adventurous theories of harm. I anticipate the DOJ and FTC will engage with commentators and take the feedback seriously as they work to improve the final VMGs.

[TOTM: The following is part of a symposium by TOTM guests and authors on the 2020 Vertical Merger Guidelines. The entire series of posts is available here.

This post is authored by Scott Sher (Partner, Wilson Sonsini Goodrich & Rosati) and Matthew McDonald (Associate, Wilson Sonsini Goodrich & Rosati).]

On January 10, 2020, the United States Department of Justice (“DOJ”) and the Federal Trade Commission (“FTC”) (collectively, “the Agencies”) released their joint draft guidelines outlining their “principal analytical techniques, practices and enforcement policy” with respect to vertical mergers (“Draft Guidelines”). While the Draft Guidelines describe and formalize the Agencies’ existing approaches when investigating vertical mergers, they leave several policy questions unanswered. In particular, the Draft Guidelines do not address how the Agencies might approach the issue of acquisition of potential or nascent competitors through vertical mergers. As many technology mergers are motivated by the desire to enter new industries or add new tools or features to an existing platform (i.e., the Buy-Versus-Build dilemma), the omission leaves a significant hole in the Agencies’ enforcement policy agenda, and leaves the tech industry, in particular, without adequate guidance as to how the Agencies may address these issues.

This is notable, given that the Horizontal Merger Guidelines explicitly address potential competition theories of harm (e.g., at § 1 (referencing mergers and acquisitions “involving actual or potential competitors”); § 2 (“The Agencies consider whether the merging firms have been, or likely will become absent the merger, substantial head-to-head competitors.”)). Indeed, the Agencies have recently challenged several proposed horizontal mergers based on nascent competition theories of harm.

Further, there has been much debate regarding whether increased antitrust scrutiny of vertical acquisitions of nascent competitors, particularly in technology markets, is warranted (See, e.g., Open Markets Institute, The Urgent Need for Strong Vertical Merger Guidelines (“Enforcers should be vigilant toward dominant platforms’ acquisitions of seemingly small or marginal firms and be ready to block acquisitions that may be part of a monopoly protection strategy. Dominant firms should not be permitted to expand through vertical acquisitions and cut off budding threats before they have a chance to bloom.”); Caroline Holland, Taking on Big Tech Through Merger Enforcement (“Vertical mergers that create market power capable of stifling competition could be particularly pernicious when it comes to digital platforms.”)). 

Thus, further policy guidance from the Agencies on this issue is needed. As the Agencies formulate guidance, they should take note that vertical mergers involving technology start-ups generally promote efficiency and innovation, and that any potential competitive harm almost always can be addressed with easy-to-implement behavioral remedies.

The agencies’ draft vertical merger guidelines

The Draft Guidelines outline the following principles that the Agencies will apply when analyzing vertical mergers:

  • Market definition. The Agencies will identify a relevant market and one or more “related products.” (§ 2) This is a product that is supplied by the merged firm, is vertically related to the product in the relevant market, and to which access by the merged firm’s rivals affects competition in the relevant market. (§ 2)
  • Safe harbor. Unlike horizontal merger cases, the Agencies cannot rely on changes in concentration in the relevant market as a screen for competitive effects. Instead, the Agencies consider measures of the competitive significance of the related product. (§ 3) The Draft Guidelines propose a safe harbor, stating that the Agencies are unlikely to challenge a vertical merger “where the parties to the merger have a share in the relevant market of less than 20 percent, and the related product is used in less than 20 percent of the relevant market.” (§ 3) However, shares exceeding the thresholds, taken alone, do not support an inference that the vertical merger is anticompetitive. (§ 3)
  • Theories of unilateral harm. Vertical mergers can result in unilateral competitive effects, including raising rivals’ costs (charging rivals in the relevant market a higher price for the related product) or foreclosure (refusing to supply rivals with the related product altogether). (§ 5.a) Another potential unilateral effect is access to competitively sensitive information: The combined firm may, through the acquisition, gain access to sensitive business information about its upstream or downstream rivals that was unavailable to it before the merger (for example, a downstream rival of the merged firm may have been a premerger customer of the upstream merging party). (§ 5.b)
  • Theories of coordinated harm. Vertical mergers can also increase the likelihood of post-merger coordinated interaction. For example, a vertical merger might eliminate or hobble a maverick firm that would otherwise play an important role in limiting anticompetitive coordination. (§ 7)
  • Procompetitive effects. Vertical mergers can have procompetitive effects, such as the elimination of double marginalization (“EDM”). A merger of vertically related firms can create an incentive for the combined entity to lower prices on the downstream product, because it will capture the additional margins from increased sales on the upstream product. (§ 6) EDM thus may benefit both the merged firm and buyers of the downstream product. (§ 6) (A stylized numerical sketch of EDM follows this list.)
  • Efficiencies. Vertical mergers have the potential to create cognizable efficiencies; the Agencies will evaluate such efficiencies using the standards set out in the Horizontal Merger Guidelines. (§ 8)
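As flagged in the EDM item above, here is a stylized numerical sketch of elimination of double marginalization under textbook linear demand; the parameters are hypothetical, and the sketch illustrates only the standard successive-monopoly result, not the Agencies’ own analysis.

# Minimal numerical sketch of elimination of double marginalization (EDM)
# under textbook linear demand P = a - Q. Parameters are hypothetical.

a, c = 100.0, 20.0               # demand intercept and upstream marginal cost

# Separate firms: upstream chooses wholesale price w; downstream then marks up.
w = (a + c) / 2                  # upstream's profit-maximizing wholesale price
p_separate = (a + w) / 2         # downstream's profit-maximizing retail price
q_separate = a - p_separate

# Integrated firm: a single margin over the true marginal cost c.
p_integrated = (a + c) / 2
q_integrated = a - p_integrated

profit_separate = (w - c) * q_separate + (p_separate - w) * q_separate
profit_integrated = (p_integrated - c) * q_integrated

print(f"Separate firms:  price {p_separate:.0f}, quantity {q_separate:.0f}, combined profit {profit_separate:.0f}")
print(f"Integrated firm: price {p_integrated:.0f}, quantity {q_integrated:.0f}, profit {profit_integrated:.0f}")
# The integrated firm charges less, sells more, and earns more than the two
# separate firms combined -- the sense in which EDM can benefit both the merged
# firm and buyers of the downstream product.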

Implications for vertical mergers involving nascent start-ups

At present, the Draft Guidelines do not address theories of nascent or potential competition. To the extent the Agencies provide further guidance regarding the treatment of vertical mergers involving nascent start-ups, they should take note of the following facts:

First, empirical evidence from strategy literature indicates that technology-related vertical mergers are likely to be efficiency-enhancing. In a survey of the strategy literature on vertical integration, Professor D. Daniel Sokol observed that vertical acquisitions involving technology start-ups are “largely complementary, combining the strengths of the acquiring firm in process innovation with the product innovation of the target firms.” (p. 1372) The literature shows that larger firms tend to be relatively poor at developing new and improved products outside of their core expertise, but are relatively strong at process innovation (developing new and improved methods of production, distribution, support, and the like). (Sokol, p. 1373) Larger firms need acquisitions to help with innovation; acquisition is more efficient than attempting to innovate through internal efforts. (Sokol, p. 1373)

Second, vertical merger policy towards nascent competitor acquisitions has important implications for the rate of start-up formation, and the innovation that results. Entrepreneurship in technology markets is motivated by the opportunity for commercialization and exit. (Sokol, p. 1362 (“[T]he purpose of such investment [in start-ups] is to reap the rewards of scaling a venture to exit.”))

In recent years, as IPO activity has declined, vertical mergers have become the default method of entrepreneurial exit. (Sokol, p. 1376) Increased vertical merger enforcement against start-up acquisitions thus closes off the primary exit strategy for entrepreneurs. As Prof. Sokol concluded in his study of vertical mergers:

When antitrust agencies, judges, and legislators limit the possibility of vertical mergers as an exit strategy for start-up firms, it creates risk for innovation and entrepreneurship…. it threatens entrepreneurial exits, particularly for tech companies whose very business model is premised upon vertical mergers for purposes of a liquidity event. (p. 1377)

Third, to the extent that the vertical acquisition of a start-up raises competitive concerns, a behavioral remedy is usually preferable to a structural one. As explained above, vertical acquisitions typically result in substantial efficiencies, and these efficiencies are likely to overwhelm any potential competitive harm. Further, a structural remedy is likely infeasible in the case of a start-up acquisition. Thus, behavioral relief is the only way of preserving the deal’s efficiencies while remedying the potential competitive harm. (Which the Agencies have recognized, see DOJ Antitrust Division, Policy Guide to Merger Remedies, p. 20 (“Stand-alone conduct relief is only appropriate when a full-stop prohibition of the merger would sacrifice significant efficiencies and a structural remedy would similarly eliminate such efficiencies or is simply infeasible.”)) Appropriate behavioral remedies for vertical acquisitions of start-ups would include firewalls (restricting the flow of competitively sensitive information between the upstream and downstream units of the combined firm) or a fair dealing or non-discrimination remedy (requiring the merging firm to supply an input or grant customer access to competitors in a non-discriminatory way) with clear benchmarks to ensure compliance. (See Policy Guide to Merger Remedies, pp. 22-24)

To be sure, some vertical mergers may cause harm to competition, and there should be enforcement when the facts justify it. But vertical mergers involving technology start-ups generally enhance efficiency and promote innovation. Antitrust’s goals of promoting competition and innovation are thus best served by taking a measured approach towards vertical mergers involving technology start-ups. (Sokol, pp. 1362–63) (“Thus, a general inference that makes vertical acquisitions, particularly in tech, more difficult to approve leads to direct contravention of antitrust’s role in promoting competition and innovation.”)